English Abstract
With the rapid advance of Artificial Intelligence (AI) technology, more and more countries are embracing the concept of Smart Medicine with great enthusiasm. Several AI-based medical devices have been approved by the Food and Drug Administration and are available on the market. While people expect that the application of AI will bring more accurate diagnoses, better quality of care, and the realization of personalized medicine, little attention has been paid to the potential misuse and abuse of smart medical devices in altering the physician-patient relationship, affecting patient safety, and ultimately threatening human subjectivity. Will the fictitious medical demand stimulated by wearable health-improving devices, in the name of "prevention", increase unnecessary burdens on the healthcare system? Will the capability of real-time adaptive SaMD raise the physician's duty of care to the "highest standard of care"? When AI devices outperform human doctors, how should we reconstruct the "duty of care" of physician-users to prevent physicians from becoming a "rubber stamp" for AI? If a physician makes a wrong diagnosis based on an AI recommendation, who should be held responsible? This paper tries to answer these questions through clinical observation, literature review, and comparative study. The authors carefully analyze the regulatory characteristics of AI medical devices, the types of AI errors, and the relationship of "human-AI interaction". We examine AI-related hard laws, regulations, and soft laws in two major jurisdictions: the EU and the USA. We find that maintaining the autonomy and dignity of both physicians and patients is of critical importance in the age of smart medicine.