A standard-free method for analyzing hoof samples taken from domestic animals such as cows, calves, ponies, and sheep has been developed in order to assess the health of these animals. The standard-free method developed for human nails was confirmed to be applicable to the quantitative analysis of hoof samples, since the shape of the continuous X-ray spectrum is almost the same for nails and for hooves taken from these ungulates. The accuracy and sensitivity of the present standard-free method were examined by comparing its results with those obtained by an internal-standard method combined with chemical ashing, and the method was confirmed to be applicable to hoof samples from many species of domestic animals. The method allows untreated hoof samples to be analyzed quantitatively and targets to be prepared without complicated preparation techniques, which often introduce ambiguous factors such as elemental loss from the sample and contamination of the sample during preparation. It was also confirmed that halogens, which are important elements for assessing health and are mostly lost during chemical ashing, can be analyzed without difficulty by the present method. The concentrations of more than twenty elements can be determined consistently, which is expected to be quite useful for assessing the health of domestic animals and for making diagnoses. It was also confirmed that the concentrations of essential elements in hoof do not change appreciably with position in the sliced sample along either the horizontal or the vertical axis.
A lack of diagnosis coding is a barrier to leveraging veterinary notes for medical and public health research. Previous work has been limited to developing specialized rule-based or customized supervised learning models to predict diagnosis codes, which is tedious and not easily transferable. In this work, we show that open-source large language models (LLMs) pretrained on general corpora can achieve reasonable performance in a zero-shot setting. Alpaca-7B achieves a zero-shot F1 of 0.538 on the CSU test set and 0.389 on the PP test set, two standard benchmarks for coding from veterinary notes. Furthermore, with appropriate fine-tuning, the performance of LLMs can be boosted substantially, exceeding that of strong state-of-the-art supervised models. VetLLM, fine-tuned from Alpaca-7B on just 5,000 veterinary notes, achieves an F1 of 0.747 on the CSU test set and 0.637 on the PP test set. Notably, our fine-tuning is data-efficient: using 200 notes can outperform supervised models trained on more than 100,000 notes. These findings demonstrate the great potential of leveraging LLMs for language processing tasks in medicine, and we advocate this new paradigm for processing clinical text.
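The abstract's zero-shot setup, prompting an LLM with a note and scoring the predicted codes with F1, can be sketched minimally as follows. The prompt wording, label handling, and helper names here are illustrative assumptions, not the paper's actual pipeline; the model call itself is omitted.

```python
# Hypothetical sketch of zero-shot diagnosis coding with an LLM.
# The prompt template, label-matching parser, and metric below are
# assumptions for illustration, not the VetLLM authors' code.

def build_prompt(note: str, label_set: list[str]) -> str:
    """Wrap a clinical note in a zero-shot instruction listing candidate codes."""
    labels = ", ".join(label_set)
    return (
        "You are a veterinary coder. Given the clinical note below, "
        f"list every applicable diagnosis from: {labels}.\n\n"
        f"Note: {note}\nDiagnoses:"
    )

def parse_codes(model_output: str, label_set: list[str]) -> set[str]:
    """Recover predicted codes by matching known labels in the free-text output."""
    text = model_output.lower()
    return {label for label in label_set if label.lower() in text}

def micro_f1(predicted: list[set[str]], gold: list[set[str]]) -> float:
    """Micro-averaged F1 over per-note code sets (a common multi-label metric)."""
    tp = sum(len(p & g) for p, g in zip(predicted, gold))
    fp = sum(len(p - g) for p, g in zip(predicted, gold))
    fn = sum(len(g - p) for p, g in zip(predicted, gold))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

In this setting, `build_prompt` output would be passed to the LLM (e.g. Alpaca-7B), `parse_codes` would map its free-text answer back to the code vocabulary, and `micro_f1` would score predictions against gold annotations across the test set.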