Accurate guidance of surgical instruments during ophthalmic procedures requires intraoperative depth perception, which standard surgical microscopes provide only to a limited extent. Intraoperative optical coherence tomography (iOCT), which integrates optical coherence tomography (OCT) with a surgical microscope, enables noninvasive, real-time, high-resolution cross-sectional imaging. However, while iOCT provides structural imaging, little research has addressed intraoperative angiography. In this work, we present a swept-source intraoperative OCT angiography (SS-iOCTA) system based on a standard surgical microscope that provides both structural and angiographic images. The feasibility of the proposed SS-iOCTA system was confirmed through deep anterior lamellar keratoplasty (DALK) of ex vivo porcine eyes and blood perfusion imaging of in vivo rat cortex. High-resolution intraoperative feedback, including sub-surface structure and angiograms of biological tissue, can be visualized simultaneously with the SS-iOCTA system, which expands the surgeon's capabilities and could be widely used in clinical surgery.
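The abstract does not specify how the angiographic contrast is computed; a common approach in OCT angiography is inter-B-scan decorrelation of repeated scans at the same location. The sketch below is a minimal, hypothetical illustration of that general technique (not the authors' implementation); the function name and array layout are assumptions.

```python
import numpy as np

def decorrelation_angiogram(bscans):
    """Illustrative OCTA contrast: inter-scan intensity decorrelation.

    bscans: array of shape (N, depth, width) holding N repeated OCT B-scans
    acquired at the same location (linear intensity). Returns a
    (depth, width) map where higher values indicate motion, i.e. blood flow.
    """
    bscans = np.asarray(bscans, dtype=np.float64)
    n = bscans.shape[0]
    decorr = np.zeros(bscans.shape[1:])
    for i in range(n - 1):
        a, b = bscans[i], bscans[i + 1]
        # Pairwise decorrelation between adjacent repeats:
        # 1 - (normalized cross product); static tissue -> ~0, flow -> ~1.
        decorr += 1.0 - (2.0 * a * b) / (a**2 + b**2 + 1e-12)
    return decorr / (n - 1)
```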
Eye tracking, or oculography, provides insight into where a person is looking. Recent advances in camera technology and machine learning have enabled prevalent devices like smart-phones to track gaze and visuo-motor behavior at near clinical-quality resolution. A critical gap in using oculography to diagnose visuo-motor dysfunction on a large scale is in the design of visual task paradigms, algorithms for diagnosis, and sufficiently large datasets. In this study, we used a 500 Hz infrared oculography dataset in healthy controls and patients with various neurological diseases causing visuo-motor abnormality due to eye movement disorder or vision loss. We used novel visuo-motor tasks involving rapid reading of 40 single-digit numbers per page and developed a machine learning algorithm for predicting disease state. We show that oculography data acquired while a person reads one page of 40 single-digit numbers (15-30 seconds duration) is predictive of of visuo-motor dysfunction (ROC-AUC = 0:973). Remarkably, we also find that short recordings of about 2.5 seconds (6-12× reduction in time) are sufficient for disease detection (ROC-AUC = 0:831). We identify which tasks are most informative for identifying visuo-motor dysfunction (those with the most visual crowding), and more specifically, which aspects of the task are most predictive (the recording segments where gaze moves vertically across lines). In addition to segregating disease and controls, our novel visuo-motor paradigms can discriminate among diseases impacting eye movement, diseases associated with vision loss, and healthy controls (81% accuracy compared with baseline of 33%).
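The abstract reports ROC-AUC for binary detection but does not describe the classifier or features. Below is a minimal, hypothetical sketch of how such an evaluation could be set up on gaze-derived features; the feature matrix, labels, and choice of gradient boosting are placeholders, not the authors' algorithm.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict

# Hypothetical data: one row per recording segment, with gaze-derived
# features (e.g., saccade rate, fixation duration, vertical gaze velocity);
# labels are 1 for visuo-motor dysfunction, 0 for healthy control.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))        # placeholder feature matrix
y = rng.integers(0, 2, size=200)      # placeholder labels

# Cross-validated probabilities, scored with ROC-AUC as in the abstract.
clf = GradientBoostingClassifier(random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
print(f"ROC-AUC: {roc_auc_score(y, proba):.3f}")
```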