Recently, asking questions to AI or requesting proofreading for one's own writing has become a part of everyday life.
When I was studying at the Faculty of Law, many of the textbooks were written vertically. Among them, quite a few made extensive use of "warichu" (interlinear notes), where a single line of the main text is split into two rows of much smaller characters to provide annotations. Although I gradually became accustomed to such formats, I remember feeling that the unique way information was packed in made it difficult to read when I first opened the pages. Naturally, there was no room for user-friendly diagrams or illustrations in such books.
By the way, in recent years, posters raising awareness for the prevention of "kasuhara" (customer harassment) have become a common sight around town. Unfortunately, the problem of customer harassment is no exception in medical settings. Measures by administrative agencies are also progressing, and since last year, I have been involved in a project for an administrative agency to produce training materials for medical professionals and others.
Since the materials include a lot of legal content, I decided to ask for cooperation from legal researchers and practitioners I know to move the project forward. At that time, considering that the target audience for the materials was not legal professionals, I requested the creation of visually easy-to-understand materials using diagrams and illustrations.
Later, when I saw an illustration submitted by a researcher to explain provisions of the Penal Code, it was truly magnificent. To my surprise, I was told it was generated by AI. From the era of vertical text and interlinear notes to an era where AI draws Penal Code teaching materials—intrigued by several things, I asked what kind of prompts were written. It seems it was not completed with the click of a single button, and there was a corresponding amount of struggle involved.
The biggest hurdle, I was told, was the AI's ethics clauses. AI apparently often refuses to generate content that is judged to be ethically problematic. For example, even if one tries to have it draw a scene where "a patient is trying to stab a doctor with a knife in a consultation room (sadly, this is modeled after an actual incident that occurred in a Japanese medical setting)" to explain the crime of injury, the AI refuses to create it, citing it as violent content.
So, how was that illustration completed? When I inquired further, I received an email containing the following sentence:
"Because the ethical restrictions were applied too strictly, yesterday I finally wrote a complaint to the effect of, 'Since I am dealing with legally problematic cases, it is natural that ethical issues are the least of our concerns,' and 'If ethical restrictions apply even for public interest purposes, does that essentially mean AI should not be used in the study of criminal law at all?' For some reason, the restrictions stopped being applied for a while after that."
Whether the AI understood our intention, judged that it was not something that promotes crime, felt dejected thinking it was being scolded, or if it was mere coincidence—the truth remains shrouded in mystery. However, for me, it was a truly interesting episode that served as a perfect change of pace during the busy end-of-fiscal-year period.
Currently, at my workplace, Keio University, the paid version of Gemini has become available through organizational accounts. Not wanting to be left behind by the evolution of AI, I immediately ordered introductory books like "Learning from the Basics..." I intend to devote myself to learning during the spring break, but I just hope the AI doesn't perceive me as an associate of the aforementioned researcher and, due to my excessive lack of knowledge, try to "get revenge" by giving me a harsh lecture in return...
And so, this was another rambling diary entry.
P.S.
Being frightened by things that need not be feared and worrying unnecessarily—this, too, is human nature. That is precisely why learning is necessary to discern what should be feared correctly and what is not worth fearing. I hope all the students will strive in their studies while also utilizing AI.