It will become harder and harder to distinguish in our tasks, whether military or civilian, what part is done by an artificial intelligence and what part by the human mind. They will eventually become indivisible.
Real World Reasoning (REAL)
This technology is related to the PAL and GALE programs already discussed earlier. Part of this real-time thinking or reasoning is TASC (Technologies for the Applications of Social Computing). DARPA, of course, is very interested in developing a scientific approach to predicting the actions of large masses of people. This research was instituted sometime in 2009. A document titled DARPA-SN-09-20 contains the following statement:
...the Defense Advanced Research Projects Agency (DARPA), Information Processing Techniques Office (IPTO), invites white paper responses from all qualified vendors interested in exploring the development of new technologies to rapidly create theoretically-informed, data-driven models of complex human, social, cultural, and behavioral dynamics that are instantiated in near-realtime simulations. These technologies would leverage the entire social science community and provide a rich test bed for establishing the empirical validity of alternative theories, and identifying gaps in knowledge that cannot be accounted for by the current body of social science theory. Other important technologies of interest include the formalization and semantic representation of social science theories, the semantic integration of disparate types of social science data, techniques for analyzing these data, and efficient computational techniques for rapid data processing. DARPA refers to this range of technologies as “Technologies for the Applications of Social Computing (TASC).” DARPA anticipates all these technologies would be integrated to develop a flexible, modular social simulation system that integrates sound social science theory with real world data, that facilitates a wide spectrum of military and intelligence applications, and that supports reliable, real-world decisions at multiple levels of analysis.
Coordination Decision Support Assistants (Coordinators)
We cannot be certain what is meant by this, but it sounds to us like the Rapid Knowledge Formation program (RKF). According to a Stanford University website, it is concerned "...with the goal of allowing distributed teams of subject matter experts to quickly and easily build, maintain, and use knowledge bases."
This technology will interface with Honeywell's AugCog system described earlier. Interestingly enough, Dan Brown's bestselling novel The Lost Symbol contains a description of this integrated software combined with transfer learning and other DARPA learning technologies. A CIA agent is doing research in a particular area when the intelligent learning software makes a connection with the work of another, totally unrelated CIA agent. We will let Dan Brown explain it.
The Agency was currently running a new piece of "collaborative integration" software designed to provide real-time alerts to disparate CIA departments when they happened to be processing related data fields. In an era of time-sensitive terrorist threats, the key to thwarting disaster was often as simple as a heads-up telling you that the guy down the hall was analyzing the very data you needed.

Improving Warfighter Information Intake Under Stress
Under this heading we will include the research of Jeff Lewine, Ph.D., a member of The Mind Research Network and associate professor in the Department of Neurology at the University of Kansas Medical Center. In experiments conducted during video game battle simulations, 2 milliamps of electricity were sent into the soldier's brain. Tests showed that soldiers receiving the electrical charges improved twice as much as those who did not. Under the heading Neurosystems for National Security, it states,
The goal of NS2 [Neurosystems for National Security 2] is to translate high spatial and temporal resolution brain imaging, fMRI, MEG, and noninvasive brain stimulation into viable solutions for training soldiers and intelligence professionals to help them with real-time decision making and actions that avert injury and trauma. Noninvasive brain stimulation, specifically transcranial direct current stimulation (TDCS), is being used to attempt to influence the learning process, perhaps increasing the speed of learning or improving retention. TDCS utilizes scalp electrodes to deliver low amplitude direct currents to localized areas of the cerebral cortex (the superficial part of the brain), thereby modulating the level of excitability, or, put another way, increasing or decreasing the probability that neurons will talk to each other. “Even though TDCS has been applied to humans safely for decades, we are just beginning to learn how it helps to accelerate the learning process. Within the next couple of years, I expect great progress toward this goal,” says researcher Dr. Michael Weisend.

Human-Assisted Neural Devices
Here again The Mind Research Network is involved, this time modifying sleep cycles for soldiers in combat conditions.
MRN is also exploring the use of noninvasive brain stimulation to modify the sleep cycle. This is aimed at alleviating military sleep deprivation problems, as well as facilitating stress management in combat and after return to civilian life. While the long-term goal is to improve performance in realistic military applications, the research has broader implications. A number of studies suggest that noninvasive brain stimulation could be used therapeutically to treat a range of motor, cognitive and affective disorders including depression, schizophrenia, chronic pain, stroke, epilepsy and Parkinson’s disease. 2009 was a start-up year for the NS2 team, and among its successes has been the submission of a provisional patent application for brain stimulation as a treatment for neurological and psychiatric disorders. Additional goals for NS2 include developing ways to measure the biomarkers of trust and trustworthiness in several high-impact stressful environments.

There is another technology called synthetic telepathy, for which Dr. Michael D'Zmura of the Department of Cognitive Sciences at the University of California, Irvine has received a $4 million grant from DARPA. The ultimate goal is for combat troops to be able to communicate with each other by reading each other's thoughts. The article goes on to state,
The brain-computer interface would use a noninvasive brain imaging technology like electroencephalography to let people communicate thoughts to each other. For example, a soldier would “think” a message to be transmitted and a computer-based speech recognition system would decode the EEG signals. The decoded thoughts, in essence translated brain waves, are transmitted using a system that points in the direction of the intended target. “Such a system would require extensive training for anyone using it to send and receive messages,” D’Zmura says. “Initially, communication would be based on a limited set of words or phrases that are recognized by the system; it would involve more complex language and speech as the technology is developed further.”

The question of how this technology would be either implanted or connected to the brain is more complicated. Wired magazine ran an article titled "Army Yanks 'Voice-To-Skull Devices' Site." Part of the device's description is quoted in the Wired article:
Nonlethal weapon which includes (1) a neuro-electromagnetic device which uses microwave transmission of sound into the skull of persons or animals by way of pulse-modulated microwave radiation; and (2) a silent sound device which can transmit sound into the skull of person or animals. NOTE: The sound modulation may be voice or audio subliminal messages. One application of V2K is use as an electronic scarecrow to frighten birds in the vicinity of airports.

This technology has alarmed many, especially a group called Christians Against Mental Slavery, who describe themselves as "an international evangelical Christian group established in 2002, whose members wanted it to be regarded as a crime against humanity worldwide for anyone to monitor or to influence human thought technologically without continuing, informed consent."
Another somewhat similar technology is much simpler and could be implemented now. Being developed by NASA, it is called Subvocal Speech. The device is already the size of a dime, and Dr. Chuck Jorgensen at NASA Ames Research Center predicts it will soon become so small as to be almost invisible to the human eye. We include several videos to help you visualize the technology. The volume in the first video is very low, while the volume in the second is much louder, so please adjust your volume accordingly. If you cannot see the embedded video, here is the link: http://bit.ly/jdKy0l.
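Whether the input is D'Zmura's EEG signals or NASA's subvocal nerve signals, the underlying computational problem is the same: matching a noisy biosignal against a small, pre-trained vocabulary. The toy sketch below illustrates that idea on purely synthetic data; the vocabulary words, signal templates, and noise level are all invented for illustration and have nothing to do with either lab's actual methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-word vocabulary; each "word" is assumed to produce a
# characteristic signal template (here just random synthetic waveforms).
VOCAB = ["advance", "hold", "retreat", "regroup"]
templates = {w: rng.standard_normal(100) for w in VOCAB}

def record_signal(word, noise=0.8):
    """Simulate a noisy sensor reading for one 'thought' word."""
    return templates[word] + noise * rng.standard_normal(100)

def classify(signal):
    """Pick the vocabulary word whose template has the highest cosine
    similarity with the observed signal (nearest-template matching)."""
    def score(t):
        return float(np.dot(signal, t)) / (np.linalg.norm(signal) * np.linalg.norm(t))
    return max(templates, key=lambda w: score(templates[w]))

# 100 noisy trials, 25 per word.
hits = sum(classify(record_signal(w)) == w for w in VOCAB * 25)
print(f"accuracy on 100 noisy trials: {hits}/100")
```

This is the sense in which D'Zmura's caveat matters: with a small closed vocabulary, simple template matching can work well, but scaling to "more complex language and speech" is a far harder problem.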
"We believe context-aware computing is poised to fundamentally change the way we relate to and react to devices. Future devices will constantly learn your habits, the way you go throughout your day. They'll understand your friends and how you're feeling. Maybe more importantly, they'll know where you're going and anticipate your needs."
One of the main goals of BCI (Brain-Computer Interfaces) is to be able to control machines such as robots. Japan intends to develop mind-controlled robots by 2020. Dr. Jack Gallant, of the Henry H. Wheeler Jr. Brain Imaging Center at the University of California, Berkeley, is already able to read images being visualized in the brain using fMRI technology. It should be noted, however, that fMRI technology has its limitations, as pointed out by Professor Alard Roebroeck of Maastricht University in the Netherlands, especially when relied on alone to predict the behavior of the brain. There are two reasons for this. First, an fMRI machine can only sample the brain every one or two seconds, while the brain's activity unfolds in milliseconds; thus there is much the fMRI can miss. Second, what the fMRI samples is only the hemodynamics (the movement of blood) in the brain. This is not the same as looking at what is happening at the level of individual neurons, which is where the most basic activity occurs. So making predictive models of how the brain will react in these areas is still far from settled.
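Both of Roebroeck's objections can be made concrete with a small simulation. In the sketch below, two brief neural events only 300 milliseconds apart are smeared by the slow blood-flow response and then sampled once every two seconds, the way a scanner would; they show up as a single bump. The hemodynamic response shape used here is a crude gamma-function stand-in, not a calibrated physiological model.

```python
import numpy as np

dt = 0.001                       # simulate at 1 ms resolution
t = np.arange(0, 30, dt)         # 30 s of "brain time"

# Two brief (50 ms) neural events only 300 ms apart.
neural = np.zeros_like(t)
neural[(t >= 10.0) & (t < 10.05)] = 1.0
neural[(t >= 10.3) & (t < 10.35)] = 1.0

# Crude stand-in for the hemodynamic response: the BOLD signal fMRI
# measures is blood flow, which peaks several seconds after the neurons fire.
h_t = np.arange(0, 20, dt)
hrf = (h_t ** 5) * np.exp(-h_t)  # gamma-like shape, peaking near 5 s
hrf /= hrf.sum()

bold = np.convolve(neural, hrf)[: len(t)]

# An fMRI scanner samples this slow signal only once every TR = 2 s.
TR = 2.0
samples = bold[:: int(TR / dt)]

# Count distinct bumps visible at scanner resolution.
peaks = sum(1 for i in range(1, len(samples) - 1)
            if samples[i] > samples[i - 1] and samples[i] > samples[i + 1])
print("distinct neural events: 2 (300 ms apart)")
print(f"bumps visible at fMRI resolution: {peaks}")
```

The two events are unrecoverable from the sampled trace, which is precisely why predictions built on fMRI alone miss fast neural dynamics.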
It would not be difficult to see why DARPA might be very interested in this kind of research. We include a video from Daily Motion on this subject. If you cannot see the embedded video, here is the link: http://bit.ly/bi7IlK.
Japanese Mind Reading Technology by NTDWorldNews
Of course there is another approach - that taken by Intel. They have been heavily funding research at their research lab in Pittsburgh.
Neurotechnology for Intelligence Analysts
The official explanation for this technology on DARPA's website is this,
Current computer-based target detection capabilities cannot process large volumes of imagery with the speed, flexibility, and precision of the human visual system. Investigations of visual neuroscience mechanisms indicate that human brains are capable of responding visually much more quickly than they respond physically. The vision for DARPA's Neurotechnology for Intelligence Analysts (NIA) program is to revolutionize how analysts handle intelligence imagery, increasing throughput of imagery to an analyst and overall accuracy of assessments.

Thus DARPA's aim is computers that can recognize images the way the human mind can. As of 2008, three teams were participating in Phase 2 of the NIA program: Teledyne Scientific & Imaging, LLC; Columbia University, using its vision algorithms; and Honeywell International. The experiments were designed to have the computer work together with the human analyst's brain to better identify potential targets. Columbia University took the following approach: "The system allowed the analyst to 'jump' to regions of the imagery most likely to contain targets, based on the analyst's brain signals during previous viewing of the imagery segments." Honeywell's methods were also quite interesting,
The analyst’s brain is treated as a sensor: Electrical activity it produces is recorded from electrodes placed on the scalp, the same way electroencephalography (EEG) is used in hospitals to monitor brain activity. Then, when the analyst looks at one of the images flashing by, a scalp plot shows when there is increased brain activity. As images flash by, the analyst is asked to look for a target such as an airplane. After viewing about 50 of the smaller images (chips), he is asked if he saw an airplane—and he may answer “no.” But digital signal processing of the brain wave activity reveals that, in fact, he did see an airplane on slide 32. “This process allows us to do triage on large amounts of visual information we get from different sources and improve an analyst’s ability to go through a large amount of imagery,” says Smith. In fact, the analyst can do the job 5-7 times faster using the triage system than unaided. This is because the triage system picks up brain waves showing recognition of a target even before the human analyst is cognizant he has spotted it. Smith says it is the equivalent of a person seeing something “out of the corner of his eye.”

As an illustration of how these projects overlap, what happens when these sophisticated algorithms reach the new cyborg combat soldier is illustrated in this 2008 discussion by David Hughes of Aviation Week with Bob Smith, vice president for advanced technology at Honeywell Aerospace.
Smith says Honeywell equipped infantry soldiers with brain and physiological sensors and monitored the soldiers during field training exercises at the Army’s Aberdeen Proving Ground. The brain sensors were an EEG and a “functional near-infrared” sensor to monitor activity in the frontal lobe. The physiological sensors included ones for the heart (electrocardiogram) and eyes. The data was used to determine workload, state of cognitive activity and the soldier’s level of attentiveness at a particular time. The point is that in combat, when a soldier is under stress and trying to take in too much information at one time, he can find himself in a situation where “tunnel vision” occurs. In a training situation, for example, the soldier may be looking for the enemy over a hill while being subjected to simulated fire. Then an explosion occurs nearby. Meanwhile, the platoon commander is yelling at the soldier to turn right and to move away from his current location. “But [the soldier] is focused on the enemy and is not hearing his commander due to information overload,” Smith says. In the demonstration, it was shown that soldiers could be instrumented with a wireless computer to help them and their commander manage information overload. Knowing that a soldier is no longer absorbing additional data may suggest to the platoon leader that he shouldn’t give that person a key task during an attack.

That is the goal towards which DARPA's research aims. In the next part of this series we will discuss the latest attempts to expand the memory of the brain with computer chips.
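Looking back at Honeywell's triage system, its recoverable kernel is a standard piece of signal processing: time-lock the EEG to each flashed image, score a late time window where a recognition response would appear, and flag images whose score stands far above the rest, even when the analyst consciously reports seeing nothing. The sketch below runs that logic on synthetic data; the sampling rate, time window, response amplitude, and threshold are all illustrative assumptions, not Honeywell's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

FS = 250                 # assumed EEG sampling rate, Hz
N_IMAGES = 50            # image "chips" flashed to the analyst
TARGET = 31              # zero-based index: the airplane is on slide 32
epoch_len = FS           # 1 s of EEG time-locked to each image

# Synthetic EEG: background noise for every image...
epochs = rng.standard_normal((N_IMAGES, epoch_len))

# ...plus a positive deflection 300-500 ms after the target image,
# standing in for the recognition response the analyst may not
# even be conscious of.
win = slice(int(0.3 * FS), int(0.5 * FS))
epochs[TARGET, win] += 2.0

# Triage: score each image by mean amplitude in the 300-500 ms window
# and flag images whose score is a statistical outlier.
scores = epochs[:, win].mean(axis=1)
z = (scores - scores.mean()) / scores.std()
flagged = np.flatnonzero(z > 3.0)

print("analyst reported: no airplane")
print(f"triage flags slide(s): {[i + 1 for i in flagged]}")
```

The point of the z-score threshold is that the system never needs to know what a "target" looks like in the imagery itself; it only needs the analyst's brain to react differently to one slide than to the other forty-nine.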