JOBS FOR THE GIRLS

The history of computer software is one in which women have played significant rôles. Ada Lovelace is the best known, and Grace Hopper is also becoming a legend among the cognoscenti. Less heralded by history was a group of six women who worked in wartime secrecy at the University of Pennsylvania, where John Mauchly and Presper Eckert led a team that was building ENIAC, the world’s first programmable, all-electronic, general-purpose computer.

 As ENIAC was being constructed at Penn in 1945, it was thought that it would perform a specific set of calculations over and over, such as determining a missile’s trajectory using different variables. But the end of the war meant that the machine was needed for many other types of calculations—sonic waves, weather patterns, and the explosive power of atom bombs—that would require it to be reprogrammed often.

This entailed switching around ENIAC’s rat’s nest of cables by hand and resetting its switches. At first the programming seemed to be a routine, perhaps even menial task, which may have been why it was relegated to women, who back then were not encouraged to become engineers. But what the women of ENIAC soon showed, and the men later came to understand, was that the programming of a computer could be just as significant as the design of its hardware.

The tale of Jean Jennings is illustrative of the early women computer programmers. She was born on a farm on the outskirts of Alanthus Grove, Missouri, into a family that had almost no money but deeply valued education. When Jean finished college in January 1945, her calculus teacher showed her a flier soliciting women mathematicians to work at the University of Pennsylvania, where women were working as “computers”: humans who performed routine maths tasks.

One of the ads read:

Wanted: Women with Degrees in Mathematics…Women are being offered scientific and engineering jobs where formerly men were preferred. Now is the time to consider your job in science and engineering…You will find that the slogan there as elsewhere is ‘Women Wanted’.

When Jennings started work at Penn in March 1945, there were approximately seventy other women there, working on desktop adding machines and scribbling numbers on huge sheets of paper. A few months after she arrived, a memo was circulated among the women advertising six job openings to work on the mysterious machine behind locked doors on the first floor of Penn’s Moore School of Engineering: the ENIAC. She had no idea what the job was or what the ENIAC was; all she hoped was that she might be getting in on the ground floor of something new. She believed in herself and wanted to do something more exciting than calculating trajectories.

When Jean Jennings got that job she was set to work with Marlyn Wescoff, Ruth Lichterman, Betty Snyder, Frances Bilas, and Kay McNulty to figure out how the machine worked and then how to program it. They made careful diagrams and charts for each new configuration of cables and switches. What they were doing was the beginning of a program, though they did not yet have that word for it.

At around the same time as Grace Hopper at Harvard, the women of ENIAC were developing the use of subroutines. Because it was being used for atom bomb calculations and other classified tasks, ENIAC was kept secret until February 1946, when the Army and Penn scheduled a gala unveiling for the public and the press. At the demonstration, ENIAC was able to spew out in 15 seconds a set of missile trajectory calculations that would have taken human computers several weeks. The women had programmed the ENIAC. The unveiling of ENIAC made the front page of the New York Times under the headline ELECTRONIC COMPUTER FLASHES ANSWERS, MAY SPEED ENGINEERING.
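
To give a modern flavour of that idea, the sketch below, which is purely illustrative and nothing like how ENIAC was actually wired, factors a repeated trajectory calculation into a single reusable subroutine; the physics is deliberately simplified (no air resistance).

    import math

    def projectile_range(speed, angle_degrees, g=9.81):
        """Reusable subroutine: horizontal range of a projectile on flat ground."""
        angle = math.radians(angle_degrees)
        return speed ** 2 * math.sin(2 * angle) / g

    # The same subroutine reused for a whole table of firing angles.
    for angle in (15, 30, 45, 60):
        print(angle, round(projectile_range(250, angle), 1))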

Later Jennings complained, in the tradition of Ada Lovelace, that many of the newspaper reports overstated what ENIAC could do by calling it a giant brain and implying that it could think. The ENIAC wasn’t a brain in any sense; it couldn’t reason, as computers still cannot reason, but it could give people more data to use in reasoning.

That night there was a candlelit dinner at Pennsylvania’s venerable Houston Hall. It was filled with scientific luminaries, military brass, and most of the men who had worked on ENIAC. But Jean Jennings and Betty Snyder  were not there, nor were any of the other women programmers.

Shortly before she died in 2011, Jean Jennings reflected proudly on the fact that all the programmers who created the first general-purpose computer were women. It happened because a lot of women back then had studied maths, and their skills were in demand, she explained. There was also an irony involved: the boys with their toys thought that assembling the hardware was the most important task, and thus a man’s job. If the ENIAC’s administrators had known how crucial programming would be to the functioning of the electronic computer and how complex it would prove to be, they might have been more hesitant about giving such an important role to women.


QUEEN OF CODES

Ever since the days of Charles Babbage the engineering of computer hardware has been dominated by men. The pioneers of software, however, were often women, beginning with Babbage’s friend and muse Ada, Countess of Lovelace. 

A century later, when the first electronic computers were being invented, the men were still focusing on the hardware, and many women followed in Ada’s footsteps. You probably don’t know the name Grace Hopper, but she should be a household name. Hopper, who rose to the rank of rear admiral in the U.S. Navy, worked on one of the first computers, the Harvard Mark I, and she headed the team that created the first compiler, which led to the creation of COBOL, a programming language that by the year 2000 accounted for 70 percent of all actively used code. She died in 1992, leaving behind an inimitable legacy as a brilliant programmer and a pioneering woman in male-dominated fields.

Grace was curious as a child, a lifelong trait; at the age of seven she decided to determine how an alarm clock worked, and dismantled seven alarm clocks before her mother realized what she was doing (she was then limited to one clock). She graduated from Vassar in 1928 with a bachelor’s degree in mathematics and physics and earned her master’s degree at Yale University in 1930. In 1934 she earned a Ph.D. in mathematics from Yale, and her thesis, New Types of Irreducibility Criteria, was published that same year. Hopper began teaching mathematics at Vassar in 1931 and was promoted to associate professor in 1941.

Grace was enigmatic, disruptive and ahead of her time. After Pearl Harbor was bombed by the Japanese on December 7th 1941, drawing the United States into the Second World War, she set out to join the navy. As a former maths lecturer she was put to work on the Harvard Mark I, the 51-foot-long calculating machine. She loved machines and considered the Mark I a beautiful one; she was good at making machines work. Not interested in the parts of a computer that “you could kick”, she was fascinated by what later came to be called programming. The input system used on the Mark I was paper tape: you physically punched your code out on the tape, which was then fed into the machine.

Grace Hopper helped find a way in which a ball could be made to collapse in on itself. This was called the implosion problem, and its solution ultimately contributed to the nuclear bomb that was later dropped on Nagasaki in Japan.

After the war she became head of the software division at the Eckert-Mauchly Computer Corporation, where she popularized the idea of machine-independent programming languages, which led to the development of COBOL, one of the first high-level programming languages. She is credited with popularizing the term ‘debugging’ for fixing computer glitches, inspired by an actual moth removed from the computer.
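
As a rough illustration of what a compiler does, and emphatically not a reconstruction of A-0 or COBOL, the toy Python sketch below translates a human-readable arithmetic expression into low-level instructions for a simple stack machine and then runs them:

    import ast

    def compile_expression(source):
        """Translate an arithmetic expression into stack-machine instructions."""
        ops = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL", ast.Div: "DIV"}

        def emit(node):
            if isinstance(node, ast.BinOp):
                return emit(node.left) + emit(node.right) + [(ops[type(node.op)],)]
            if isinstance(node, ast.Constant):
                return [("PUSH", node.value)]
            raise ValueError("unsupported syntax")

        return emit(ast.parse(source, mode="eval").body)

    def run(program):
        """Execute the compiled instructions on a tiny stack machine."""
        stack = []
        for instr in program:
            if instr[0] == "PUSH":
                stack.append(instr[1])
            else:
                b, a = stack.pop(), stack.pop()
                stack.append({"ADD": a + b, "SUB": a - b,
                              "MUL": a * b, "DIV": a / b}[instr[0]])
        return stack.pop()

    program = compile_expression("(3 + 4) * 2")
    print(program)       # [('PUSH', 3), ('PUSH', 4), ('ADD',), ('PUSH', 2), ('MUL',)]
    print(run(program))  # 14

The same source could be compiled for any machine that understands the instruction set, which is the sense in which the source is machine-independent.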

Grace Hopper worked in the male-dominated world of computers all her life and had no truck with people who called her a trailblazer. She didn’t admit that any trail needed to be blazed, saying that if you work hard and are capable, recognition will follow. It must have amused her when she was voted Computer Man of the Year.

Always an independent thinker, she hated the expression “But we’ve always done it that way”, and visitors to her office would be perplexed and fascinated in equal measure by a clock on her wall that ran backwards. “There is no reason why a clock should work one way or another,” she would reason. Grace Hopper has been described as appearing to be “‘all Navy’, but when you reach inside you find a ‘Pirate’ dying to be released”, and this may be why a Jolly Roger flag was always flying in her office, or perhaps it was there to highlight her ability to release information from the most secure hideouts.

In 2014 eight thousand people attended the Grace Hopper Celebration of Women in Computing, the world’s largest gathering of women technologists. The 2015 Celebration will be held at the George R. Brown Convention Center in Houston, Texas, from October 14th to 16th, with more people expected to attend than ever before; her name may soon be recognised in ever more households.

AVATARS

November 2014

Yesterday I heard a very interesting Radio 4 programme about Avatars.  Apparently the word Avatar was not conceived by a Hollywood film producer but comes from the Sanskrit word for ‘descent’.  It relates to a deity manifesting itself in an earthly embodiment.  In Christianity ‘incarnation’ describes the coming of the divine in bodily form to the world we inhabit.  Does this make Jesus an Avatar?  Some Hindus believe he was, along with Krishna and Rama, and the programme explored the parallels and distinctions between the two ideas.

The programme also discussed what such developments tell us about contemporary attitudes to life after death and immortality, now that new technologies offer the prospect of digital Avatars able to simulate our personalities in the online world after we die.

Millions of us interact with Avatars through computer games and online virtual worlds like ‘Second Life’, and ‘avatar’ has become the buzz-word for a secular age. There has been a very subtle shift from the religious connotation of an Avatar as God taking human form to re-establish ways in which we can connect with him, to the contemporary meaning, in which we are represented in a virtual environment by a simulacrum that can be considered the real us, a virtual existence through which we can live vicariously.

The logical progression of this will be creating our own Avatars, and, the programme maintained, the technology will soon exist (within twenty years, they estimated) to enable us to preserve our personalities and life stories digitally.  It is not too far-fetched, they said, for us soon to curate our own legacies, which our children and grandchildren could access after our death so that they would be able to interact with us long after our own physical demise.

There are already 25,000 people signed up to a ‘library of clones’ site that promises to preserve their thoughts at some time in the future.  At the moment this is just a matter of collecting information to store for when the time comes and robotic means can be found of preserving their ‘real selves’.  So many questions arise from this prospect.  Is it actually desirable?  Who would ensure that these Avatars are authentic rather than idealised personas? Who decides which parts of our personalities are preserved?  And would this ‘breakthrough’ actually just perpetuate the grieving process, preventing us from letting go of the dead?

Is it morally right to continue our existence beyond what it is supposed to be?  Death is important for life, because the fact of the finite time we have forces us to make important decisions about what sorts of people we are here and now.  Death is not just extinction but an important boundary that shapes what sort of person we want to be, and it forces us to behave and interact in a world that ensures we are those people.  If there were always a possibility that anything we physically did could be overwritten by this programme with the profile of an unfeasibly perfect person, who is to say some of us would not just cut ourselves off from the world and concentrate on fabricating a totally fictional character?

Moreover, will we become scared of death, will we hide from it and immunise ourselves against it?  Do Avatars, in fact, tranquillise us against the fact of death?  For me the question must be: what is in it for me?  And the answer can only be nothing, because even though our Avatars will contain our thoughts, personality and experiences, once we are dead we will not experience the relationship our loved ones are having with our Avatars, so what is the point?  I would much prefer to live my fallible life and let my friends and family remember me as the flawed human being I really am, and surely it would be better for them to come to terms with my death as quickly as possible and not prolong the parting with agonising conversations with what sounds like me but is in fact a simulacrum of me.  I will be long gone.

THE INHUMAN CONDITION


I have been thinking about cyborgs recently.  Are we that far from the science fiction notion of becoming half human, half machine? A pacemaker for my dodgy ticker? Well, that’s mainstream these days.  Maybe I’ll chuck out the glasses and get some new eyes that can automatically enhance the information sent to my brain and adjust to different light qualities?  Or a chip that can hear colours, transforming the frequency of light into that of sound and playing it through bone conduction.  A new hip perhaps (a hip chip?) with a chip in it that makes it possible to ride a bike for longer periods over harsher terrain?  New hands to replace useless arthritic claws, strong enough to open the devilish packaging now mandatory for food producers, impervious to extreme temperatures, but delicate enough to enable me to incise into my etching plates?  At what stage do these enhancements change us from human to cyborg, or in fact make us posthuman, even transhuman?

The term cyborg is a little outdated, as these days we readily accept the intrusion of technology into our bodies, seeing those springy appendages of Oscar Pistorius as something to envy (only his legs, mind you). The definition of a cyborg used to be a body that was dependent on something electronic or mechanical to exist; nowadays I cannot envision existing without my electronic devices, and I feel diminished without them.

It seems that if the technology is good for the person and doesn’t threaten his or her humanity then it is acceptable, but if it helps a person achieve beyond their human capabilities then it is perceived as threatening. Man as superman belongs in the comics or with Nietzsche, not in our everyday lives.  The ethical position is that I have a right to enhance and you have a right not to enhance.  It’s your body, your right to enhance as long as it doesn’t harm anyone, but there must be no coercion to enhance, or not to.

The tendency is to think that technology outside ourselves, which can be turned on or off, is acceptable.  I find myself sometimes making cyborgian assumptions, like the phantom rumble in the pocket where my phone usually resides even when it’s not there. Trying to scroll down a paper page. Searching my brain as if it were a Google search. Trying to pinch-zoom the view as if it were a screen when I just wanted to see a sign somewhere far away.  So I suppose that most of us can be termed everyday cyborgs: when we wake up in the morning the first thing we do is check our mobiles. The device is the life-blood of our social reality; it symbiotically exchanges information with the world.  We don’t notice the device; we achieve social union using technology. We meld into the computer and become part of it.

And then, if our bodies are beyond helping, what about our brains?  Can our brains keep on living even after our bodies die? It sounds like science fiction to me, but the celebrated theoretical physicist Stephen Hawking recently suggested that technology could make it possible.  “I think the brain is like a programme in the mind, which is like a computer,” Hawking said, “so it’s theoretically possible to copy the brain onto a computer and to provide a form of life after death.”

Some people are actively working to develop technology that would permit the migration of brain functions into a computer. Russian multi-millionaire Dmitry Itskov, for one, hopes someday to upload the contents of a brain into a life-like exoskeleton as part of his 2045 Initiative.  And a separate research group, the Brain Preservation Foundation, is working to develop a process, called chemical fixation and plastic embedding, to preserve the brain along with its memories, emotions and consciousness. The process involves converting the brain into plastic, carving it up into tiny slices, and then reconstructing its three-dimensional structure in a computer. This offers the possibility of a machine which is dependent on human consciousness to exist – a reversal of our initial definition of a cyborg, yet no less valid or frightening.  I must ask, is this an attractive promise of possible life after death?

But are we not moving ever closer towards some kind of symbiotic relationship with the technological other?  The technological construct now enters the flesh to unprecedented degrees of intrusiveness, and the nature of the human-technological interaction has shifted towards a blurring of the boundaries between genders, races and species, following the trend of the contemporary inhuman condition.  The technological other today – a mere assemblage of circuitry and feedback loops – functions in the realm of an egalitarian blurring of differences. Have cyborg tendencies inspired a philosophy that seeks to make us so superhuman that we will not die?

VISIBLE, ALL TOO VISIBLE


The visible has been and still remains the principal human source of information about the world.  When we can see, we can orientate ourselves.  Even perceptions coming from other senses are often translated into visual terms: when we say “I see” we really mean “I understand”, and the sensation of vertigo originates in the ear but is experienced as a visual, spatial confusion.

Thanks to the visible we recognise space as the precondition for physical existence.  The visible brings the world to us.  But at the same time it reminds us ceaselessly that it is a world in which we risk being lost.  The visible, with its space, also takes the world away from us.  Nothing is more two-faced.

The visible implies an eye; it is the stuff of the relation between seen and seer.  Yet the seer, when human, is conscious of what the eye cannot and will never see because of time and distance.  The visible both includes us, because we can see, and excludes us, because we cannot be everywhere.  The visible consists of the seen which, even when it is threatening, confirms our existence, and of the unseen which defies that existence.  The desire to see something like the sun setting behind the horizon, the stars on a clear night or heat haze in the desert, has a deep ontological basis.

To this human ambiguity of the visible one then has to add the visual experience of absence, whereby we no longer see what we saw.  We face a disappearance.  And a struggle ensues to prevent what has disappeared, what has become invisible, falling into the negation of the unseen, defying our existence.  Thus, the visible produces faith in the reality of the invisible and provokes the development of an inner eye which retains and assembles and arranges, as if in an interior, as if what has been seen may be forever partly protected against an ambush of space, which is absence.

Both life itself and the visible owe their existence to light.  Neither the optical explanation of visual perception nor the evolutionist theory of the slow, hazardous development of the eye in response to the stimulus of light dissolves the enigma that at a certain moment appearances were revealed as appearances.

Theories and observations of visual perception have been the main source of inspiration for computer vision (also called machine vision, or computational vision). Special hardware structures and software algorithms provide machines with the capability to interpret the images coming from a camera or a sensor. Artificial visual perception has long been used in industry and is now entering the domains of automotive systems and robotics.
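
As a minimal sketch of one such building block, the snippet below, illustrative only and using NumPy, convolves a tiny synthetic grayscale image with a Sobel filter to pick out an edge; real vision systems layer feature extraction, recognition and learning on top of operations like this.

    import numpy as np

    def convolve2d(image, kernel):
        """Valid-mode 2-D convolution (strictly speaking, cross-correlation)."""
        kh, kw = kernel.shape
        h = image.shape[0] - kh + 1
        w = image.shape[1] - kw + 1
        out = np.zeros((h, w))
        for y in range(h):
            for x in range(w):
                out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
        return out

    # A tiny synthetic image: dark on the left, bright on the right.
    image = np.zeros((5, 6))
    image[:, 3:] = 1.0

    # Sobel filter that responds to vertical edges.
    sobel_x = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)

    print(convolve2d(image, sobel_x))  # strongest response at the dark/bright boundary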

Areas of artificial intelligence deal with autonomous planning or deliberation for robotic systems navigating through an environment. A detailed understanding of these environments is required to navigate through them. Information about the environment could be provided by a computer vision system acting as a vision sensor and supplying high-level information about the environment and the robot.

Artificial intelligence and computer vision share other topics such as pattern recognition and learning techniques. Consequently, computer vision is sometimes seen as a part of the artificial intelligence field or the computer science field in general.

LOOMS AND LAPTOPS

[Image: old looms]

The Jacquard loom was one of the first pieces of automated machinery.  It was basically a simple system, although it looks really complicated. There are horizontal rods, which connect with vertical rods carrying hooks. The horizontal rods interact with the punched cards, which at each position either have a hole or unperforated card (yes or no, on or off, one or zero, good or bad). If a horizontal rod moves, its vertical rod is moved with it. If the hook at the top of that rod is moved into the path of the griffe as it rises, the hook is raised and the thread is lifted. That creates the shed for the weft to pass through.
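
As an illustrative sketch of that binary principle, written in Python and deliberately ignoring the mechanics of needles and griffe, one row of the card can be treated as a list of ones and zeros from which we read off which warp threads are lifted:

    # Simplified, illustrative model of one punched-card row:
    # 1 = hole (the corresponding warp thread is lifted),
    # 0 = unperforated card (the thread is left down).

    def threads_lifted(card_row):
        """Return the indices of the warp threads lifted by one card row."""
        return [i for i, hole in enumerate(card_row) if hole == 1]

    # One row of a hypothetical eight-hook card.
    row = [1, 0, 0, 1, 1, 0, 1, 0]
    print(threads_lifted(row))  # -> [0, 3, 4, 6]

A full card chain is just a stack of such rows, one per pass of the weft; in that sense the chain of cards is the weave’s program.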

As a weaving system which withdrew control from human workers and transferred it to the hardware of the machine, the Jacquard loom was bitterly opposed by workers, who saw in this migration of control a piece of their bodies literally transferred to the machine.  The Luddites opposed this automation and were supported in the House of Lords by the poet Lord Byron.

Charles Babbage, interested in the effects of automated machines on traditional forms of manufacture, published his research on the subject, On the Economy of Machinery and Manufactures, in 1832.  He later said that looking back on the early factories was like seeing prototype ‘thinking machines’.

It was the Jacquard loom that excited and inspired Babbage (maker of the Difference Engine), who went on to design his Analytical Engine, in which he was greatly helped by Ada Lovelace, the only legitimate daughter of the previously mentioned Lord Byron. It was Ada who commented that whereas the Difference Engine could simply add up, the Analytical Engine was capable of performing the whole of arithmetic.
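
To make Ada’s distinction concrete, here is a small, purely illustrative Python sketch of the method of finite differences that the Difference Engine mechanised: because the second differences of a quadratic are constant, a whole table of values, here the squares, can be produced by repeated addition alone.

    # Illustrative only: tabulating f(x) = x^2 using nothing but addition,
    # starting from f(0) = 0, first difference 1 and constant second difference 2.

    def difference_table(value, first_diff, second_diff, steps):
        """Generate successive values of a quadratic by repeated addition."""
        results = []
        for _ in range(steps):
            results.append(value)
            value += first_diff        # add the running first difference
            first_diff += second_diff  # add the constant second difference
        return results

    print(difference_table(0, 1, 2, 8))  # -> [0, 1, 4, 9, 16, 25, 36, 49]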

Charles and Ada developed an intense relationship, and in agreeing to write the notes to, and to translate from the French, Luigi Menabrea’s Sketch of the Analytical Engine Invented by Charles Babbage (1842), Ada produced the first example of what was later to be called ‘computer programming’.  The principle which Jacquard devised for regulating his looms, the punched card, was copied by the pair to attain the varied and complicated processes required to fulfil the purposes of the Analytical Engine.

[Image: old-fashioned telephone exchange]

Reality does not run along the neat straight lines of the printed page. Only by criss-crossing the complex topical landscape can the goals of multifacetedness and the establishment of multiple connections begin to be attained.  Where there is a jumble of voices, ideas, and gossip, where there are people talking at the same time, where there is empathy and discourse, that’s where you’ll find the real world of women.  The Internet shatters the myth that women are victims of technological change.  Weaving and typing, computing and telecommunicating, women have been tending the machinery of the digital age for generations, enjoying intimate relations with the techniques and technologies which are revolutionising the Western world today.

[Image: laptop and hands]