
Decoding The Old With The New: Is That Progress Or Distortion?

Ancient literature and historical and religious manuscripts are often seen as wordy time machines that unravel the hidden mysteries of mankind, whether evolutionary or philosophical. These ancient treasures have survived, or rather been sustained, only through techniques of interpretation, linguistic intervention, traditional oral narration, and, of course, the growth of the printing press. From Greco-Roman mythology to Indian and East Asian mythology, each tradition carries “the world juice” needed to decode mankind, its struggles, and its passions, yet very few of these works have been successfully translated or transliterated.

It is almost impossible to deny the significance of this body of literature: its sense of timelessness, its morals, and its inevitable relatability hold true to this day. From the morals of bedtime stories to religious teachings, we extract our examples of idealism and/or truth from these texts. However, accurate decoding and interpretation has been the biggest barrier to opening our minds to the legends of past millennia. From Egyptian hieroglyphs, Sumerian, Biblical Hebrew, and Latin to the more recently dying language of Sanskrit, all remind us that with the tides of time we, as a race, may no longer know what we once discovered, for it has been forgotten, misplaced, or misinterpreted.


In the technologically progressive modern era, ‘not so human’ geniuses have arrived on the scene, planted by our very own race, to decode and better understand what our spiritual, scientific, mythological, and religious past looked like. As you guessed, it is Artificial Intelligence: a convenience in which man-made machines (computers) perform cognitive tasks in patterns and ways similar to the fascinating human brain. From speech recognition to visual perception and decision making, computers carry out the protocols of human intelligence. A clone of your genius, a physical manifestation of your intellect, a brain outside your brain is what I fancy to call it.

It is key to understand that artificial intelligence can only be as convenient, as progressive, or as all-knowing as the real human data, perceptions, and information that have been fed to it, quite literally. The growth of AI depends upon the steady encoding of real human insight that can then be decoded at the drop of a hat. Hence, it is essential to know exactly what is being fed into such software or systems so as to ensure accurate results upon use. A relatively simple example for the technologically ‘not-so-sound’ explorer is language translation: when I ask Google for the Hindi substitute for the English word ‘intelligence’, Google comes back with the appropriate translation in most cases, and also brags a little about its list of synonyms and the etymological history of ‘intelligence’, which comes from the dead language of Latin. The vocabulary, pronunciations, and scripts of both languages must be fed into the software for it to serve us an answer instantly. It is as simple as teaching a child the alphabet of a language, but also as complex as the Vernam cipher of World War II.
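To make that point concrete, here is a deliberately naive sketch in Python (a toy lookup table, nothing like Google’s actual translation systems, with sample dictionary entries assumed purely for illustration). The ‘translator’ can only return what a human has already encoded into it.

```python
# Toy illustration: a "translator" knows only what has been fed into it.
# The dictionary entries below are assumed sample data, not a real lexicon.
english_to_hindi = {
    "intelligence": "बुद्धिमत्ता",
    "knowledge": "ज्ञान",
}

def translate(word: str) -> str:
    """Return the stored Hindi equivalent of an English word, if any."""
    return english_to_hindi.get(word.lower(), "<no entry has been fed in yet>")

print(translate("Intelligence"))  # works only because the entry was encoded first
print(translate("wisdom"))        # fails gracefully: nothing was fed in for it
```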

To reiterate, the decoding of this old, dead yet vivid literature poses a significant linguistic and interpretive barrier to the new clone of human intelligence: AI.


In one of the most recent breakthroughs, researchers at the University of Notre Dame have reportedly been developing an artificial neural network to decipher complex ancient handwriting and iconography, both to better comprehend the unknown historical accounts of mankind and to enhance the abilities of deep learning transcription, an AI method that recognises the patterns in how humans gather knowledge and converts this into real, consumable content. They suggest that they are looking to revive documents and texts whose languages have long been dead and are rarely spoken or comprehended, much like Latin, texts that have been kept safe in museums, libraries, and monasteries across the world.


Walter Scheirer, a professor in the Department of Computer Science and Engineering at Notre Dame, explained how his team of researchers has taken up the task of automated transcription of these materials, using a method that imitates the way an expert reader perceives a given page in order to provide a quick reading of the text. An artificial neural network (a computer system modeled to replicate the human nervous system) is being developed to reach maximum accuracy in deciphering these materials, with the help of traditional machine learning technologies and visual psychophysics: measurable physical reactions to written stimuli that serve as cues to the mental processes behind reading.

This process was carried out on handwritten Latin manuscripts from the 9th-century cloister of St. Gall. The method involved the digitization of these manuscripts, followed by manually feeding in expert readers’ understanding and translation of each page, while monitoring how easy or difficult the readers found it to comprehend. The researchers maintain that this technique aligns with actual human behavior better than any other form of machine learning does, and also reduces errors. The breakthrough promises a searchable reading of such previously unavailable texts.
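As a rough illustration of how such psychophysical annotations might be folded into training, here is a minimal sketch in Python using PyTorch. It is my own guess at the general idea, not the Notre Dame team’s actual code: each glyph’s loss is weighted by a difficulty score (for example, a normalised expert reading time), so the network pays extra attention to the strokes humans themselves found hard to read.

```python
# Minimal sketch (assumed, not the researchers' implementation): weighting a
# transcription model's loss by psychophysical difficulty scores from experts.
import torch
import torch.nn.functional as F

# Toy batch: model logits over a 26-character alphabet for 4 glyph images,
# the experts' transcriptions, and a per-glyph difficulty score such as a
# normalised reading time recorded while the expert transcribed the page.
logits = torch.randn(4, 26)                      # placeholder model outputs
targets = torch.tensor([0, 7, 3, 19])            # expert transcriptions
difficulty = torch.tensor([0.2, 0.9, 0.5, 1.0])  # psychophysical annotations

# Per-glyph cross-entropy, then up-weight glyphs the experts found hard,
# so ambiguous handwriting contributes more to the training signal.
per_glyph_loss = F.cross_entropy(logits, targets, reduction="none")
weighted_loss = (per_glyph_loss * (1.0 + difficulty)).mean()
print(float(weighted_loss))
```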


However, the researchers do face challenges when parts of a document are incomplete or lost, or when they encounter symbols, characters, or illustrations that are entirely unique, as in Egyptian texts. Another problem is the limited pool of experts who can serve as the human brains behind such transcription and translation. What is unique here is the non-traditional machine learning approach of labeling data with the psychophysical measurements exhibited by the readers. This is certainly an intriguing advance for the field of the humanities, but certain questions remain.


In my opinion, the credibility and objectivity of this process are a major concern. Are the readers, no matter how distinguished, objective enough while reading these texts? Is their perception accurate, or, as in most cases, just a version of their own? Is the use of their faculties objective enough to be labeled ‘true’? Are these psychophysical cues accurate, and if so, are they being interpreted correctly by the machine learning? Added to this is the gradual corrosion of such manuscripts, texts, and documents over time. Human perception is as unique and intricate as a thumb impression, and no two perceptions are alike. Natural biases on the part of the reader, as well as the limitations of the AI technology, can lead to a distorted version of past literature, and thereby to a misguided view of our history. Still, as a gateway to our own past, this is a laudable effort, and the above-mentioned limitations can only be quashed through further results. This attempt to create searchable readings of lost texts throws the field wide open for common audiences to relish the truth, the stories, and the glories of the past.


Finally, I must leave you with a question: is decoding the old with the new making way for a distorted view of our history? Worth contemplating, isn’t it?

