I am not even qualified to talk about human intelligence, and, as usual, I go beyond what I am capable of doing – talk about Artificial Intelligence! Please bear with me.
It was sometime in the late 1990s. We had a bespoke software package, integrated across the various stages of the analysis and design of electrical power transmission lines, including an iterative effort to minimize the weight of the tower. It sounded too good to be true, but for the most part, it did what it claimed to do.
Except one thing.
In developing the geometry of the tower, the software required the engineer to input the coordinates of two points, and it would determine the intermediate coordinates according to the number of divisions we asked it to divide the line into. (That sentence is a lot longer than the input itself.)
So, for a heavy tower, we input the end points and asked the software to divide the line into three equal parts. In this particular instance, it refused. It gave unequal lengths (differing only in the fourth decimal, but differing nonetheless); yet, when we added the three lengths within the software, the total length came out exact, to the fourth decimal!
To make things clear, imagine asking a computer to divide the line between 0m and 16m into four equal parts; you get the lengths of the parts as 3.9998, 4.0000, 3.9999, and 4.0001. Add them up on your calculator and you get 15.9998 (honest, I checked it on my Casio fx-991ES calculator). But when the computer does the addition, it comes up with an exact 16.0000. The engineers guessed that these could be rounding errors in the fifth or sixth decimal places. As their boss, I said OK, since I was also aware that the results of the analysis would not change. But my superior, the General Manager (one of my friends says that General Managers are those who just manage generally!), would not let go.
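For the curious, here is the kind of thing the experts might have shown us. This is my own illustration, not the tower software's actual code: most decimal fractions have no exact binary representation inside a computer, so every stored length carries a tiny error, and those errors can cancel out again when the lengths are added back together.

```python
# Most decimal fractions (like 0.1) cannot be represented exactly in
# binary floating point, so each stored segment carries a tiny error.
parts = [0.1] * 10          # ten segments, each nominally 0.1 long
total = sum(parts)
print(total)                # 0.9999999999999999 -- not quite 1.0
print(total == 1.0)         # False
```

The individual errors are far below the fourth decimal, which is why the analysis results never changed; but they are real, and they show up the moment the machine prints more digits than the arithmetic honestly carries.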
A similar thing happened a few years later while we were preparing the Financial Proposal for a major project. The last column in a row was the sum of all the cells in that row; likewise, the last row in a column was the sum of all the cells in that column. Now, the sum of all the numbers in the last row must match the sum of all the numbers in the last column – elementary, my dear boss!
“No,” the boss shouted, “... there is a difference of 2 paise (in a sum of more than Rs. 1 crore) between the two.” We were asked to figure it out, and we simply put up our hands. The looming deadline for submission helped us escape our boss.
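Had we known where to look, we might have found a plausible culprit (a guess on my part, not a diagnosis of that particular spreadsheet): floating-point addition is not associative, so totalling the rows and totalling the columns adds the very same numbers in different orders, and the two grand totals can disagree in the last digit.

```python
# The same three numbers, grouped two different ways, give two
# different "totals" -- floating-point addition is not associative.
row_wise = (0.1 + 0.2) + 0.3    # 0.6000000000000001
col_wise = 0.1 + (0.2 + 0.3)    # 0.6
print(row_wise == col_wise)     # False
print(row_wise - col_wise)      # a difference in the last place
```

Scale that last-place difference up to a crore of rupees, and two paise is about what you would expect to see.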
Now, I come to my current employment: entering marks for students in their tests (three per semester, with the best two scores considered). We score the answer sheets out of 50 marks, reduce the score to one out of 20, and enter it into the bespoke ERP.
The first time around, things are smooth – whatever we enter, say 17.0 (42.5/50), comes out neatly as 17.0; however, in the second round, something strange happens.
Let me say the student scored 39/50 the second time round. Then, I enter 15.6. The sum of these two scores appears beside the number I entered – and, instead of a simple 32.6, it is 32.6000002! I never asked for seven-digit inaccuracy!
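One guess at the mechanism (an assumption on my part, not knowledge of the ERP's insides): if the system stores marks as single-precision 32-bit floats, then 15.6 cannot be stored exactly, and printing the stored value with more digits than single precision actually carries exposes the junk digits. Python can simulate the 32-bit rounding with the standard struct module; the exact digits below are what this sketch produces, not necessarily the ERP's 32.6000002.

```python
import struct

def as_float32(x):
    # round a Python double to the nearest 32-bit float, the way a
    # single-precision database field might store it (assumption)
    return struct.unpack('f', struct.pack('f', x))[0]

stored = as_float32(15.6)
print(stored)                        # 15.600000381469727
total = as_float32(as_float32(17.0) + stored)
print('%.7f' % total)                # trailing digits nobody asked for
```

Note that 17.0 survives intact – it is exactly representable in binary – which is why the first round looked smooth and only the second round misbehaved.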
I cannot figure this out. The only thing I know is that this is an infectious disease for the computer, and there may never be a vaccine against it. Imagine, from the late 1990s to the year 2020! A non-mutating, stable virus, but beyond any vaccine.
The first two times I encountered this virus, I asked experts in computing for an explanation. No dice. The third time, I just resigned myself to it, accepting the inaccurate calculation.
The computer itself might know what is happening, but the so-called experts definitely do not.
Now, I have read a book titled “Computers Ltd.”, with the strapline “What They Really Can’t Do”, authored by David Harel, supposedly a big name in mathematics and computer science – the real thing, beyond coding. The title of this post is a straight lift from that book.
I first read the book in the year 2002, and I can admit that I understood very little of it, but I did recognize, correctly or wrongly, that it parallels things I have read in another book by Roger Penrose, titled “The Emperor’s New Mind”. I read the whole of Harel’s book again – finished reading it today – and I am more confident that the parallels I discerned the first time round are not that wrong.
Towards the end of Harel’s book there is a brief section on Artificial Intelligence (AI). The book, written in 2002, remember, is not too sanguine about AI in an all-encompassing manner. The author appreciates what computer scientists and mathematicians are doing in this field, wishes them good luck, and acknowledges what AI has made possible, but is not too enamoured of it. I do not know what he would say in 2020. I give him the benefit of the doubt.
But I would ask one thing of Harel and those involved in AI – can AI ever enable “not remembering”? That is, can AI ever honestly say, “I do not remember”?
I am not sure. What I understood from Harel’s book is that AI would need logic switches operating at warp speed, and storage that is almost unlimited. This is what the fields of physics, electronics, communication, and computer science are focusing on. If you run out of memory, just build another gargantuan data centre with thousands of computers interconnected so intensely that today’s networks will look positively primitive. Harel also indicates the possibility of quantum computers coming to help us in the search for speed and more speed.
Having taken care of the basis of Harel’s pessimism, circa the turn of the century, my analysis says that AI will never be able to “forget” anything. Even if someone were to incorporate such “forgetfulness”, we are just one step farther from my thesis. What if AI forgets “forgetfulness”? A conundrum, right?
Therefore, if AI does achieve whatever its proponents hope for, they cannot hope for “forgetfulness”; or, if they do incorporate it, they cannot guarantee that AI would not lose its memory of “forgetfulness” itself.
I rest my case.
Raghuram Ekambaram