Smartphones and Ophthalmology


Dr Chrishan Gunasekera graduated from UCL Medical School in 2011. He is now an ophthalmologist developing ways to use smartphones in eye examination and microsurgical training, as well as an Honorary Senior Clinical Teaching Fellow at UCL. Here he shares his journey since graduating from UCLMS and evaluates the medical technology he is currently working on.

RR: Can you talk us through your career progression and your current role?

CG: After finishing medical school, I completed my foundation training in London. I took a year out after FY2 to work as a Clinical Teaching Fellow for the Medical School at UCL. I would highly recommend this to anyone, as it gives you a year to focus on yourself and your interests both inside and outside of medicine. It was a productive time for me: I was able to deliver teaching courses, take exams, create exams, develop video editing skills, and present internationally without the added pressure of training commitments.

At the moment I wear several hats. I am the Chief Resident for West Suffolk Hospital, which means I get to deliver a service improvement project that is heavily focused on medical technology. I also work for the medical school as an Honorary Senior Clinical Teaching Fellow, occasionally delivering teaching sessions. I do this alongside my day job as an Ophthalmology Registrar on the Cambridge rotation.

In the future, I would like to get a consultant job that encompasses teaching, medicine and medical technology.


RR: What societies and extracurricular interests did you have during your time at medical school?

CG: I was part of a breakdancing crew at UCL Dance Society and I also ran the Wing Chun Kung Fu society. We performed on stage and even went travelling together; it was great! I highly recommend meeting people outside of the medical school.


RR: How did you get started working within medical technology?

CG: I was influenced by one of my mentors, who saw that I had thought of solutions to a lot of problems and challenged me to implement them. One such solution was to develop a new way to perform Optical Coherence Tomography (OCT). I started to read more and found out how new medical devices are created. Attending some of the MedTech talks that take place at UCL and other universities made me realise that many of the people behind these inventions knew about as much in their speciality as I did in mine. That gave me permission to try something new. I kept experimenting, trying to fail as soon as possible to find out what was going to work and what was not.

RR: Can you tell us a bit about your work with direct ophthalmoscopy with an iPhone X?

CG: This was something I stumbled across while creating a way to use smartphone imaging to supplement an eye examination. I realised that the camera and light source on the iPhone X, iPhone XS and XS Max are sufficiently close together to generate a red reflex at close distance. Parallel rays of light emerging from a dilated pupil return to the camera and form an image, much like in a direct ophthalmoscope. The combined refractive power of the cornea and lens then provides natural magnification to view the optic nerve.

To prove the concept, I went into a phone store with a dilated eye to try other models of phones, to see if I could replicate the findings. Surprisingly, I wasn’t questioned!
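As a rough illustration of the "natural magnification" described above (our addition, not part of the interview): in a direct view of the fundus the eye's own optics act as a simple magnifier, and the conventional estimate of angular magnification divides the eye's total refractive power (about 60 dioptres) by 4:

```latex
% Approximate angular magnification of a direct (ophthalmoscope-style) view,
% taking the eye's total refractive power as F \approx 60\,\mathrm{D}:
M \approx \frac{F}{4} = \frac{60}{4} \approx 15\times
```

This is the same ballpark figure usually quoted for a conventional direct ophthalmoscope, which is why the smartphone view appears similarly magnified.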


RR: What do you think the advantages and disadvantages of using smartphones to perform ophthalmoscopy are?

CG: The advantages are that users can capture images and send them to a specialist for a second opinion on a diagnosis they are perhaps uncertain of. Because you are also recording a video at the same time, all you need is one frame with a good view, and when you are recording at 60 frames per second there are plenty to choose from! The main disadvantage is that the patient needs to be dilated for the technique to work: the light on the smartphone camera is quite bright and causes pupil constriction, so dilation is a must. With a direct ophthalmoscope in a dark room, some people can still visualise the optic nerve without dilation (as is taught in medical schools). There is also a learning curve to this technique, much like learning to perform direct fundoscopy.
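Picking the single best frame from a 60 frames-per-second recording, as described above, can be automated with a simple sharpness heuristic. The sketch below is our illustration, not Dr Gunasekera's actual workflow: it scores each greyscale frame by the variance of a discrete Laplacian, a common focus measure, and keeps the sharpest one. The function names and the NumPy-only implementation are assumptions for the sake of a self-contained example.

```python
import numpy as np

def sharpness(frame):
    # Focus measure: variance of a discrete Laplacian.
    # Crisp frames have strong local contrast, so the Laplacian
    # response (and hence its variance) is larger than for blurred frames.
    lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1)
           - 4.0 * frame)
    return lap.var()

def best_frame_index(frames):
    # Return the index of the sharpest frame in a recorded sequence.
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))
```

In practice one would extract greyscale frames from the video and pass them to `best_frame_index`; libraries such as OpenCV provide an equivalent Laplacian-variance focus measure out of the box.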

RR: How easy do you think it will be for clinicians to use this method in day-to-day practice?

CG: I imagine the real barriers to this technique will be access to an iPhone X and to dilating eye drops. In my current hospital, I am setting up a small telemedicine trial in which the A&E department will use an iPhone X to perform fundoscopy and securely send the images to the on-call Ophthalmologist, allowing a remote diagnosis to be made.


RR: Do you believe that smartphones and everyday gadgets are being used with increasing frequency within medical practice?

CG: I think there is an increasing role for gadgets and smartphones. Most recently, my current hospital approved smartphone use within the hospital. This could essentially replace the old bleep system, allowing clinicians to take pictures and videos of patients and send them securely using end-to-end encryption. Large medical textbooks in clinics are already being replaced with Google searches. I think smartphones will see more frequent use in healthcare, provided they do not interfere with other medical devices.

RR: Where do you think the future of medical technology is heading?

CG: At the moment we are in the artificial intelligence revolution, where deep learning is creating a number of opportunities in medicine to significantly help clinicians make a diagnosis. Natural language processing and increasingly capable computers could genuinely streamline the workflow for clinicians.

The NHS 10-year plan hints at a digital revolution. There is potential to help a lot of people; however, this comes at a cost to already stretched services. I do not think we have sat down properly to think about the ethics of using artificial intelligence in medicine. I would hate to think of patients receiving a cancer diagnosis for the first time via a smart device, without knowing the potential treatments or what to do next, and without access to the psychological counselling that is recommended after receiving bad news.

RR: Do you have any projects that you are working on in the pipeline?

CG: One project I am currently working on is setting up a telemedicine service at West Suffolk Hospital using iPhones in ophthalmology. The phones are connected to a slit lamp, and images are uploaded securely to the patient's care records. Consent is taken on the smartphone itself, with the patient signing with their finger, which keeps the process compliant with the new data protection rules.

I am also working on turning a smartphone into a microsurgical simulator with a heads-up display, streaming live onto a TV screen. This gives a real-time view of the operation, so it is not just the trainee performing the procedure who can see it; everyone in the room can watch and benefit.

RR: How would you encourage current medical students to get involved in medical technology projects?

CG: There are numerous people engaged in medical technology projects in hospitals. When students are on placement, they can look for solutions to the problems that are always there! If they can find a way to overcome them cheaply, safely and effectively, it is usually a winner. There are plenty of people engaged in MedTech research at UCL, and thankfully people from the engineering and computer science departments also attend UCL MedTech events. Their expertise can be invaluable in implementing and delivering a project that you, as a medical student, identify on your clinical placements. The Institute of Making gives you access to their 3D printers, and through UCL's access to a website like Lynda.com you can learn how to 3D print for free!

By Andrien Rajakumar and Aishah Ahmed