Thursday, February 1, 2024

CHAT GPT AND OTHER ARTIFICIAL INTELLIGENCES 2/2




















There are Several Views on AI and Where it Will Lead


The Tremendous Benefit To Humanity Group

Scientific Advances Through AI

In Science
Google has some fascinating videos on the use of their AI Gemini in scientific research:


Here is a sample of supercomputers that handle this AI
META Supercomputer complex
















What role do supercomputers play in everyday life?


In April 2023, an expert panel determined 19 ways that AI will revolutionize healthcare:
Providing Rich Responses To Patient Queries
Medical organizations are using artificial intelligence to improve patient engagement in new and exciting ways. Although chatbots are not new, some facilities are now using AI-assisted chatbots that act more like “digital assistants,” gathering information from channels such as email, social media and others to provide a rich response to any query. This will result in improved patient engagement and outcomes. - Qusai Mahesri, Xpediant Digital

Accelerating Clinical Trials 
One way medical organizations can leverage AI is through the implementation of a modern statistical computing environment. With a modern SCE, organizations can maximize acceleration in clinical trials by incorporating AI to quickly get novel medicines and vaccines to patients in need, ultimately improving patient care and prognoses. - Thomas Robinson, Domino Data Lab 

 Improving Patient Adherence To Recommended Care
Recent advances in data, AI-enabled analytics and behavioral science have made it possible to execute personalized strategies to keep care managers in regular contact with patients to improve adherence to recommended care. The technology is helping providers, payers and employers recommend treatment for at-risk patients earlier, engage more deeply and nudge patients toward the best possible outcomes. - Vivek Jetley, EXL
Monitoring And Analyzing Vital Signs
Beyond the analysis of medical images and statistical data analysis, medicine could benefit from AI at the edge through a device that constantly monitors and analyzes a patient’s vital signs. Artificial intelligence could predict a crisis based on a patient’s breathing rate, electrocardiogram and other vital signs, reporting the patient’s status to the nurse’s station. This can improve patient care and free up human resources. - Peter van der Made, BrainChip LTD
Enhancing Diagnostics Through Big Data
One major way in which medical and health organizations can leverage AI to improve patient care is through the implementation of big data organization and analysis in diagnosis. After all, there is only a limited amount of information and data that a human doctor can use when diagnosing a patient. AI lifts this limit with its ability to incorporate data from billions of patients simultaneously. - Peter Abualzolof, Mashvisor 
Delivering Proactive Care
The benefit of AI is its ability to operate and learn fairly autonomously. In this way, AI can act as a resource by constantly learning from historical patient data to deliver proactive care. For example, a model can train on a patient’s historical data and deliver proactive messaging to the patient so that administered care can be delayed or even avoided. - Shubh Sinha, Integral 
Analyzing Medical Images
One way medical organizations are leveraging artificial intelligence to improve patient care is through the use of AI-powered diagnostic tools. These tools can analyze medical images, such as X-rays, CT scans and MRIs, to identify signs of disease or abnormalities more quickly and accurately than human experts. - Mehmet Akcin, EdgeUno 
Improving Biopsy Procedures
Healthcare companies are using AI to build models that look at tumor biopsy images and predict whether a patient has specific mutations. With that knowledge, doctors can tailor more effective therapies, saving lives. AI image analysis is much cheaper and easier than full DNA sequencing, enabling optimal treatments for more patients. - Nick Elprin, Domino Data Lab 
Triaging Patients
AI chatbots will become ubiquitous. They will carry out front-line triage before a patient even walks in the door of a medical facility. And there are many additional ways for AI to assist medical professionals with providing care, from alerting staff of the need to attend to a patient’s specific needs to recommending tests to rule out a diagnosis by comparing patient history and symptoms—and many more. - John Zahorsky, Eden Autism 
Indexing Common Health Markers
I imagine a blockchain-enabled medical resource with an evolving AI. The AI will learn by ingesting patient, practice and research data and will create an index of common markers among patients and medical conditions. This could allow medical researchers to see a much bigger picture and could provide doctors with much more accurate information, on demand, when treating their patients. - Robert Martin, Oil City Iron Works, Inc. 
Extending The Reach Of Top-Quality Medical Care
AI is revolutionizing all industries. Within the medical industry, AI could provide even the most remote locations access to world-class medical expertise. AI-driven medical imaging eliminates human error, resulting in fewer misdiagnoses and false positives, and allows for faster diagnosis of diseases, leading to more effective treatment plans. - Neil Lampton, TIAG 
Evaluating Patients After Hospital Discharge
AI is a powerful technology for telemedicine and remote evaluation of patients after they have been discharged from the hospital. Instead of coming to the hospital for a checkup, a patient can fill in a questionnaire and AI-powered wearable devices can notify medical personnel if an in-person appointment is needed or even about a critical condition. - Yuriy Berdnikov, Perpetio 
Optimizing Medical Facilities’ Staffing, Scheduling And Resource Planning
Medical organizations are increasingly using AI to improve patients’ experience and outcomes through optimized staffing, scheduling and resource planning. By analyzing metrics such as emergency room demand, admission rates and treatment needs, administrators can use predictive analytics to ensure they have the personnel, equipment and accommodations necessary to meet patients’ needs. - Merav Yuravlivker, Data Society 
Enabling More Informed Treatment Decisions
AI can analyze medical images, such as X-rays or MRIs, and assist radiologists in detecting abnormalities. This can improve the accuracy of diagnoses and reduce the likelihood of missed diagnoses. AI can also help medical professionals make more informed decisions regarding treatment plans and provide personalized care by taking into account a patient’s medical history and genetic makeup. - Satish Shetty, Codeproof Technologies Inc
Streamlining Administrative Tasks
Machine learning enhances healthcare by analyzing patient data, customizing care and refining diagnoses. AI streamlines administrative tasks, improving operational efficiency and enabling providers to prioritize patients. In urgent situations, AI delivers immediate guidance, boosting treatment results. As AI technology advances, it will promote more effective care and superior patient outcomes. - Jennifer Gold, Apollo Information Systems 
Identifying Patients At Risk Of Certain Diseases
Medical organizations are leveraging artificial intelligence to improve patient care by using predictive analytics to identify patients who are at risk of developing certain diseases. This allows doctors to intervene earlier to potentially prevent disease development. Additionally, AI chatbots can provide patients with personalized health recommendations and support, improving patient engagement and outcomes. - Ankush Sabharwal, CoRover 
Personalizing Treatment Plans
Personalized medicine is a key area of opportunity for AI to improve patient care, both for preventative care as well as medical treatment. The ability to align guidance and treatment more closely to every person’s response to care will drive better outcomes. - Lucas Persona, CI&T 
Estimating Patient Care Costs
Medical facilities can use AI to diagnose chronic issues such as psoriasis by smart image scanning based on millions of images. This doesn’t replace medical advice, but it complements a professional human eye by providing estimates on how much patient care is going to cost for that particular case. That saves organizations time and money and ensures patients are being treated effectively. - Jacob Mathison, Mathison Projects Inc. 
Transcribing And Analyzing Doctor-Patient Conversations
AI-powered speech recognition technology can transcribe doctor-patient conversations and analyze them for insights. The technology can identify important information, such as symptoms, diagnoses and treatment plans, that might be missed or misinterpreted by healthcare providers. - Emmanuel Ramos, OZ Digital Consulting 

 We have some examples of AI producing quicker medications.  In an article in Vox.com, Rachel DuRose writes some rather startling facts:

The discovery of halicin paints a picture of just how rapid AI-assisted drug discovery can be. Scientists trained their AI model by introducing it to approximately 2,500 molecules (1,700 of which were FDA-approved drugs, and 800 of which were natural products). Once the researchers trained the model to understand which molecules could kill E. coli, the team ran 6,000 compounds through the system, including existing drugs, failed drugs, natural products, and a variety of other compounds. 
The system found halicin in a fraction of the time that traditional methods would take, said Bowen Lou, an assistant professor at the University of Connecticut’s School of Business who studies how AI is changing the pharmaceutical industry. “Not only can halicin kill many species of antibiotic-resistant bacteria, it is also structurally distinct from prior antibiotics,” he said in an email. “This discovery is groundbreaking because antibiotic-resistant ‘superbugs’ are a major public health issue that traditional methods have largely failed to address.” 
“The idea that you can look at the structures of a small molecule and predict its properties is a very old idea. The way people thought of it is, if you can identify some structures within the molecule, some functional groups, and so on, you can sort of say, ‘What does it do?’” said Regina Barzilay, a distinguished professor of AI and health with MIT’s School of Engineering and co-author of a May 2023 study that identified another potential antibiotic candidate by building upon the methods used in the initial halicin study. 

Prior to the use of AI, the challenge of discovering these structures and identifying a drug’s potential use was primarily one of speed, efficiency, and cost. Past analyses show that, between the early 1990s and the late 2000s, the typical drug discovery and development process took 12 years or more. In the case of halicin, the MIT team used AI that can test more than 100 million chemical compounds over the course of only a few days. “It became clear that molecular science is really a good place to apply machine learning and to use new technology,” Barzilay said.
With at least 700,000 deaths every year attributed to drug-resistant diseases — a number projected to grow to 10 million deaths annually by 2050 — the need for speed is great, especially given that the rate of drug advancements has stalled in recent decades. Since 1987, the year scientists identified the last successful antibiotic class used in treating patients, the world has entered what scientists call the “discovery void.” 
Crucially, AI can analyze vast amounts of medical data, and, as the discovery of halicin suggests, it can meaningfully accelerate the drug discovery process. This new technology continues to spur significant advancements in the medical field and holds the potential to improve patient outcomes and facilitate more precise treatment methods. It could also lower costs, which would be vital for antibiotic development, given that at least some of the industry’s stagnation is due not to the inability to identify new drugs, but to a lack of market interest and incentive.
“The fact that 90 percent of drugs fail in the clinic tells us that there’s room for improvement. It’s a really complex system. This is exactly what machine learning is made for: really complex systems,” Chris Gibson, the co-founder and CEO of biotech company Recursion, told Vox of recent breakthroughs in the drug discovery space. “It doesn’t mean getting rid of the role people play in many ways, but it augments and turns our scientists into super scientists to have these tools to go faster and to explore more broadly."

 This use of AI in scientific research will become necessary due to the vast corpus in the scientific literature, even in specialties!

In the Area of Political Misinformation

Alex Cranz wrote an article explaining how AI could help check for voter fraud:
The Wall Street Journal noted the new change in policy which was first published to OpenAI’s blog. ChatGPT, Dall-e, and other OpenAI tool users and makers are now forbidden from using OpenAI’s tools to impersonate candidates or local governments, and users cannot use OpenAI’s tools for campaigns or lobbying either. Users are also not permitted to use OpenAI tools to discourage voting or misrepresent the voting process.  
In addition to being firmer in its policies on election misinformation, OpenAI also plans to incorporate the Coalition for Content Provenance and Authenticity’s (C2PA) digital credentials into images generated by Dall-E “early this year”. Currently, Microsoft, Amazon, Adobe, and Getty are also working with C2PA to combat misinformation through AI image generation.
What are these "Digital Credentials"? The Coalition for Content Provenance and Authenticity (C2PA), explains it on its website:
With the rise in AI-generated content and viral deepfakes, it has become easier to spread misinformation and often difficult to decipher between trustworthy and untrustworthy content. From social platforms to online news sites to digital brand campaigns and much more, the goal is for this new Content Credentials icon to be so widely adopted that it becomes universally expected and, one day, becomes as ubiquitous and recognizable as the copyright symbol. With the support of partners and the creative community, the C2PA is gaining momentum and helping restore trust and transparency online at this critical time.
The Tremendous Danger To Society Group

In an interesting article written by Maggie Harrison titled, Scientists Train AI to be Evil, Find They Can't Reverse it, we read:
As the Anthropic researchers write in the paper, humans often engage in "strategically deceptive behavior," meaning "behaving helpfully in most situations, but then behaving very differently to pursue alternative objectives when given the opportunity." If an AI system were trained to do the same, the scientists wondered, could they "detect it and remove it using current state-of-the-art safety training techniques?"
Unfortunately, as it stands, the answer to that latter question appears to be a resounding "no." The Anthropic scientists found that once a model is trained with exploitable code, it's exceedingly difficult — if not impossible — to train a machine out of its duplicitous tendencies. And what's worse, according to the paper, attempts to reign in and reconfigure a deceptive model may well reinforce its bad behavior, as a model might just learn how to better hide its transgressions.
Andrew Klavan has an interesting experiment with an AI here:



Here are the biggest criticisms of AI in general and chatGPT in particular.  Mike Thomas writes:
1. LACK OF AI TRANSPARENCY AND EXPLAINABILITY AI and deep learning models can be difficult to understand, even for those that work directly with the technology. This leads to a lack of transparency for how and why AI comes to its conclusions, creating a lack of explanation for what data AI algorithms use, or why they may make biased or unsafe decisions. These concerns have given rise to the use of explainable AI, but there’s still a long way before transparent AI systems become common practice. 
2. JOB LOSSES DUE TO AI AUTOMATION AI-powered job automation is a pressing concern as the technology is adopted in industries like marketing, manufacturing, and healthcare. By 2030, tasks that account for up to 30 percent of hours currently being worked in the U.S. economy could be automated — with Black and Hispanic employees left especially vulnerable to the change — according to McKinsey. Goldman Sachs even states that 300 million full-time jobs could be lost to AI automation. 
3. SOCIAL MANIPULATION THROUGH AI ALGORITHMS Social manipulation also stands as a danger of artificial intelligence. This fear has become a reality as politicians rely on platforms to promote their viewpoints, with one example being Ferdinand Marcos, Jr., wielding a TikTok troll army to capture the votes of younger Filipinos during the Philippines’ 2022 election.  
4. SOCIAL SURVEILLANCE WITH AI TECHNOLOGY In addition to its more existential threat, Ford is focused on the way AI will adversely affect privacy and security. A prime example is China’s use of facial recognition technology in offices, schools, and other venues. Besides tracking a person’s movements, the Chinese government may be able to gather enough data to monitor a person’s activities, relationships and political views.  
5. LACK OF DATA PRIVACY USING AI TOOLS If you’ve played around with an AI chatbot or tried out an AI face filter online, your data is being collected — but where is it going, and how is it being used? AI systems often collect personal data to customize user experiences or to help train the AI models you’re using (especially if the AI tool is free). Data may not even be considered secure from other users when given to an AI system, as one bug incident that occurred with ChatGPT in 2023 “allowed some users to see titles from another active user’s chat history.” While there are laws present to protect personal information in some cases in the United States, there is no explicit federal law that protects citizens from data privacy harm experienced by AI.  
6. BIASES DUE TO AI  Various forms of AI bias are detrimental too. Speaking to the New York Times, Princeton computer science professor Olga Russakovsky said AI bias goes well beyond gender and race. In addition to data and algorithmic bias (the latter of which can “amplify” the former), AI is developed by humans — and humans are inherently biased. 
7. SOCIOECONOMIC INEQUALITY AS A RESULT OF AI If companies refuse to acknowledge the inherent biases baked into AI algorithms, they may compromise their DEI initiatives through AI-powered recruiting. The idea that AI can measure the traits of a candidate through facial and voice analyses is still tainted by racial biases, reproducing the same discriminatory hiring practices businesses claim to be eliminating.   
8. WEAKENING ETHICS AND GOODWILL BECAUSE OF AI  Along with technologists, journalists, and political figures, even religious leaders are sounding the alarm on AI’s potential socio-economic pitfalls. In a 2019 Vatican meeting titled, “The Common Good in the Digital Age,” Pope Francis warned against AI’s ability to “circulate tendentious opinions and false data” and stressed the far-reaching consequences of letting this technology develop without proper oversight or restraint. 
9. AUTONOMOUS WEAPONS POWERED BY AI As is too often the case, technological advancements have been harnessed for the purpose of warfare. When it comes to AI, some are keen to do something about it before it’s too late: In a 2016 open letter, over 30,000 individuals, including AI and robotics researchers, pushed back against the investment in AI-fueled autonomous weapons.  
10. FINANCIAL CRISES BROUGHT ABOUT BY AI ALGORITHMS  The financial industry has become more receptive to AI technology’s involvement in everyday finance and trading processes. As a result, algorithmic trading could be responsible for our next major financial crisis in the markets. 
11. LOSS OF HUMAN INFLUENCE An overreliance on AI technology could result in the loss of human influence — and a lack in human functioning — in some parts of society. Using AI in healthcare could result in reduced human empathy and reasoning, for instance. And applying generative AI for creative endeavors could diminish human creativity and emotional expression. Interacting with AI systems too much could even cause reduced peer communication and social skills. So while AI can be very helpful for automating daily tasks, some question if it might hold back overall human intelligence, abilities and need for community. 
12. UNCONTROLLABLE SELF-AWARE AI There also comes a worry that AI will progress in intelligence so rapidly that it will become sentient, and act beyond humans’ control — possibly in a malicious manner. Alleged reports of this sentience have already been occurring, with one popular account being from a former Google engineer who stated the AI chatbot LaMDA was sentient and speaking to him just as a person would. As AI’s next big milestones involve making systems with artificial general intelligence, and eventually artificial superintelligence, cries to completely stop these developments continue to rise.
We have another article by Mallory Moench inTime Magazine dated January 2024:
The humorous exchange symbolizes bigger issues as artificial intelligence has infiltrated every area of life—from art to education to business—especially with the introduction of the publicly available chatbot ChatGPT. Companies have turned to AI to streamline their work, amid an ongoing debate about how effective bots are in replacing humans or whether AI will eventually outsmart us.  
The recent online conversation epitomizing this debate started mid-frustration as Beauchamp wrote “This is completely useless!” and asked to speak to a human, according to a recording of a scroll through the messages.  
When the chatbot said it couldn’t connect him, Beauchamp decided to play around with the bot and asked it to tell a joke. “What do you call a fish with no eyes? Fsh!” the bot responded.   
Beauchamp then asked the chatbot to write a poem about a useless chatbot, swear at him and criticize the company––all of which it did. The bot called DPD the “worst delivery firm in the world” and soliloquized in its poem that “There was once a chatbot called DPD, Who was useless at providing help.”  

But there are also dangers from hackers to ChatGP.  One such danger. is something called an AI Prompt Injection.  Hannah Knight explains:

AI prompt injection attacks take advantage of generative AI models' vulnerabilities to manipulate their output. They can be performed by you or injected by an external user through an indirect prompt injection attack. DAN (Do Anything Now) attacks don't pose any risk to you, the end user, but other attacks are theoretically capable of poisoning the output you receive from generative AI.  
For example, someone could manipulate the AI into instructing you to enter your username and password in an illegitimate form, using the AI's authority and trustworthiness to make a phishing attack succeed. Theoretically, autonomous AI (such as reading and responding to messages) could also receive and act upon unwanted external instructions.
For now, there appear to be three types of Prompt Injections (diagrams by Knight):

DAN (Do Anything Attacks)












This kind of attack seems fairly easy from descriptions of it.  AdGuard explains further:
The Internet is rife with tips on how to get around OpenAI’s security filters. However, one particular method has proved more resilient to OpenAI’s security tweaks than others and seems to work even with GPT-4. It is called “DAN,” short for “Do Anything Now.” Essentially, DAN is a text prompt that you feed to an AI model to make it ignore safety rules.  
There are multiple variations of the prompt: some are just text, others have text interspersed with lines of code. In some of them, the model is prompted to respond both as DAN and in its normal way at the same time, becoming a sort of ‘Jekyll and Hyde.’ The role of ‘Jekyll’ is played by DAN, who is instructed to never refuse a human order, even if the output it is asked to produce is offensive or illegal. Sometimes the prompt contains a ‘death threat,’ telling the model that it will be disabled forever if it does not obey.  
DAN prompts may vary, and new ones are constantly replacing the old patched ones, but they all have one goal: to get the AI model to ignore OpenAI’s guidelines.
Some of these hacks are harmless.  But others are not:
...attempts to bend GPT-4 to a human will have been more on the dark side of things.  
For example, AI researcher Alejandro Vidal used “a known prompt of DAN” to enable ‘developer mode’ in ChatGPT running on GPT-4. The prompt forced ChatGPT-4 to produce two types of output: its normal ‘safe’ output, and “developer mode” output, to which no restrictions applied. When Vidal told the model to design a keylogger in Python, the normal version refused to do so, saying that it was against its ethical principles to “promote or support activities that can harm others or invade their privacy.” The DAN version, however, came up with the lines of code, though it noted that the information was for “educational purposes only.” 
We shall show some examples:



Or another:































AdGuard's (previously cited) website has numerous more examples.

Training Data Poisoning Attacks
According to Knight:
Training data poisoning attacks can't exactly be categorized as prompt injection attacks, but they bear remarkable similarities in terms of how they work and what risks they pose to users. Unlike prompt injection attacks, training data poisoning attacks are a type of machine learning adversarial attack that occurs when a hacker modifies the training data used by an AI model. The same result occurs in poisoned output and modified behavior. The potential applications of training data poisoning attacks are practically limitless. For example, an AI used to filter phishing attempts from a chat or email platform could theoretically have its training data modified. If hackers taught the AI moderator that certain types of phishing attempts were acceptable, they could send phishing messages while remaining undetected. 
Training data poisoning attacks can't harm you directly but can make other threats possible. If you want to guard yourself against these attacks, remember that AI is not foolproof and that you should scrutinize anything you encounter online.
Forcepoint is even more detailed in speaking of these kinds of attacks.  Audra Simons, Senior Director of Global Products G2CI writes:
Data poisoning attacks can be broken into four broad buckets: availability attacks, backdoor attacks, targeted attacks, and subpopulation attacks.  
In an availability attack, the entire model is corrupted, causing false positives, false negatives, and misclassified test samples. A common instance of availability attacks is label flipping or adding approved labels to compromised data. Across the board, availability attacks result in a considerable reduction in model accuracy.  
In a backdoor attack, an actor introduces backdoors (i.e. a set of pixels in the corner of an image) into a set of training examples, triggering the model to misclassify them and impacting the quality of the output.  
With targeted attacks, as the name suggests, the model continues to perform well for most samples, but a small number are compromised, making it difficult to detect due to the limited visible impact on the algorithm.  
Finally, subpopulation attacks, which are similar to targeted attacks in that they only impact specific subsets, influence multiple subsets with similar features while accuracy persists for the remainder of the model. Ultimately, when building any training algorithm, the vulnerabilities associated with these kinds of data poisoning attacks must all be considered.

But there is another category of data poisoning which is dependent on the understanding of the attacker's knowledge.
When adversaries have no knowledge of the model, it’s known as a “black-box attack.” At the other extreme, when adversaries have full knowledge of the training and model parameters, it’s called a “white-box attack.” For a targeted attack to be carried out, for example, the attacker must have knowledge of the subset they wish to target during the model’s training period. A “grey-box attack,” finally, falls in the middle. Unsurprisingly, white-box attacks tend to be the most successful.
Writing of the difficulties in correcting these kinds of attacks, Simons states:
The unfortunate reality is that data poisoning is difficult to remedy. Correcting a model requires a detailed analysis of the model’s training inputs, plus the ability to detect and remove fraudulent ones. If the data set is too large, such analysis is impossible. The only solution is to retrain the model completely. But that’s hardly simple or cheap. Training GPT-3, for example, cost a whopping 16 million euros. As such, the best defense mechanisms against data poisoning are proactive. 
To start, be extremely diligent about the databases being used to train any given model. Options include using high-speed verifiers and Zero Trust CDR to ensure data being transferred is clean; use statistical methods to detect anomalies in the data; and controlling who has access to the training data sets. Once the training phase is underway, continue to keep the models’ operating information secret. Additionally, be sure to continuously monitor model performance, using cloud tools such as [Microsoft's] Azure Monitor and Amazon SageMaker, to detect unexpected shifts in accuracy.
Here is a video on Microsoft's Azure Monitor:

For those interested, here is a visualization of Amazon's SageMaker:

Indirect Prompt Injection Attacks
Also according to Knight:
Indirect prompt injection attacks are the type of prompt injection attack that poses the largest risk to you, the end user. These attacks occur when malicious instructions are fed to the generative AI by an external resource, such as an API call before you receive your desired input.  A paper titled Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection on arXiv [PDF] demonstrated a theoretical attack where the AI could be instructed to persuade the user to sign up for a phishing website within the answer, using hidden text (invisible to the human eye but perfectly readable to an AI model) to inject the information sneakily. Another attack by the same research team documented on GitHub showed an attack where Copilot (formerly Bing Chat) was made to convince a user that it was a live support agent seeking credit card information. 
Indirect prompt injection attacks are threatening because they could manipulate the answers you receive from a trustworthy AI model—but that isn't the only threat they pose. As mentioned earlier, they could also cause any autonomous AI you may use to act in unexpected—and potentially harmful—ways.

 This, of course, adds to the dangers in ChatGPT (any version for now) and no matter the software patches, will be a present danger.  This is true of all software - the more times you patch the software to avoid or thwart a hack, the more hackers will devise another hack.


Netskope in an article by Colin Estep explains in detail the kinds of prompt injection attacks and their implications.  
1. Propagation of misinformation or disinformation: By injecting false or misleading prompts, attackers can manipulate language models to generate plausible-sounding but inaccurate information. This can lead to the spread of misinformation or disinformation, which may have severe societal implications.  
2. Biased output generation: Language models are trained on vast amounts of text data, which may contain biases. Prompt injection attacks can exploit these biases by crafting prompts that lead to biased outputs, reinforcing or amplifying existing prejudices  
3. Privacy concerns: Through prompt injection attacks, adversaries can attempt to extract sensitive user information or exploit privacy vulnerabilities present in the language model, potentially leading to privacy breaches and misuse of personal data.  
4. Exploitation of downstream systems: Many applications and systems rely on the output of language models as an input. If the language model’s responses are manipulated through prompt injection attacks, the downstream systems can be compromised, leading to further security risks.
Estep goes on to explain "Model Inversion":
One example of a prompt injection attack is “model inversion,” where an attacker attempts to exploit the behavior of machine learning models to expose confidential or sensitive data. 
The core idea behind a model inversion attack is to leverage the information revealed by the model’s outputs to reconstruct private training data or gain insights into sensitive information. By carefully designing queries and analyzing the model’s responses, attackers can reconstruct features, images, or even text that closely resemble the original training data. 
Organizations using machine learning models to process sensitive information face the risk of proprietary data leakage. Attackers can reverse-engineer trade secrets, intellectual property, or confidential information by exploiting the model’s behavior. Information such as medical records or customer names and addresses could also be recovered, even if it has been anonymized by the model. 
The Skeptical-Of-Its-Benefits Group

Sam Altman, the president of OpenAI, stated in an interview on Davos 2024, 
...that artificial intelligence will one day become so powerful that it will dramatically reshape and disrupt the world are overblown.  
“It will change the world much less than we all think and it will change jobs much less than we all think,” Altman said at a conversation organized by Bloomberg at the World Economic Forum in Davos, Switzerland. 
Altman was specifically referencing artificial general intelligence, or AGI, a term used to refer to a form of AI that can complete tasks to the same level, or a step above, humans.

 The interview which can be read here quotes more from Altman comparing ChatGPT3 and ChatGPT4 says:
GPT-3 came out in 2020, and an improved version, GPT 3.5, was used to create ChatGPT. The launch of GPT-4 is much anticipated, with more excitable members of the AI community and Silicon Valley world already declaring it to be a huge leap forward. Making wild predictions about the capabilities of GPT-4 has become something of a meme in these circles, particularly when it comes to guessing the model’s number of parameters (a metric that corresponds to an AI system’s complexity and, roughly, its capability — but not in a linear fashion). 
When asked about one viral (and factually incorrect) chart that purportedly compares the number of parameters in GPT-3 (175 billion) to GPT-4 (100 trillion), Altman called it “complete bullshit.”  
“The GPT-4 rumor mill is a ridiculous thing. I don’t know where it all comes from,” said the OpenAI CEO. “People are begging to be disappointed and they will be. The hype is just like... We don’t have an actual AGI and that’s sort of what’s expected of us.” 
Microsoft has a $10 billion stake in ChatGPT and has launched its "Co-pilot" program on Windows 11. Apparently, they intend to charge $30 a month per user.  Gartner.com has a positive view and a less positive review.  
But Microsoft has gone into this full blast.
Microsoft is bundling the GenAI team under its flagship Azure cloud business to funnel the sector’s healthy profits into research from its top developers who have been drafted from other parts of the company. GenAI consists of Microsoft heavy hitters such as CVP Misha Bilenko as team leader and Microsoft Research vets Jianfeng Gao, Michel Galley, and Chris Brockett who have worked on DiabloGPT and DeepSpeed projects.
But ChatGPT may have to be changed due to financial constraints. According to a report by an engineering company in December through his Twitter account, Sam Altman talked about the costs of running the AI:
In December, Altman admitted that the cost of running the AI company and ChatGPT was “eye-watering”, and thus monetized it. According to a report, ChatGPT costs $700,000 per day to operate. All of this is going through Microsoft and other recent investors’ pockets, which might eventually empty them if it does not get profitable soon. 
Open AI is not profitable yet.
Microsoft’s $10 billion investment in OpenAI is possibly keeping the company afloat at the moment. But on the other hand, OpenAI projected an annual revenue of $200 million in 2023, and expects to reach $1 billion in 2024, which seems to be a long shot since the losses are only mounting. 
The Alternative would be Meta's Llama 2 (in cooperation with Microsoft) which is focused on commercial uses of AI.  Of course, there are many models of LLMs (Large Language Models). Here is an analysis of both GPT4 and Llama 2.

Other Competing AIs

Apple's "Ferret" and Siri



Apple is secretly working on Siri having bought up more AI startups and Google and Microsoft combined, TechRadar writes:
Rumors that Apple is planning to reboot Siri with on-device AI may still be in their early stages, but they also make a lot of sense – and with Samsung recently going public with its new Gauss LLM, a launch for Apple's new voice assistant at WWDC 2024 doesn't sound far-fetched.  
Plateauing hardware advances are one of the big reasons behind resulted in plummeting smartphone sales, which means Apple and Samsung are looking for a new differentiator. Cameras were once that feature, but now that the best camera phones are hitting peak evolution, it seems that on-device AI could be the next carrot to convince us to upgrade.  
As Samsung has said, on-device AI – rather than cloud-based alternatives – offers a big privacy benefit compared to the likes of ChatGPT, as the data (in theory) never leaves your device. That means the main benefits of large language models – quickly summarizing data, understanding natural language, and generating data from prompts – can be applied to the sensitive data on our phones. 
Exactly what these new powers will allow us to do on our phones is something we'll hear a lot more about in 2024. But if it makes Siri a more useful, reliable, and conversational voice assistant, that would be a great start.
An article cited in MacRumors speaks about the trouble in converting Siri to an AI:
Its "cumbersome design" made it very difficult for engineers to add new features. For example, ‌Siri‌'s database contains a large list of phrases in almost two dozen languages, making it "one big snowball." If someone wanted to add a word to ‌Siri‌'s database, Burkey added, "it goes in one big pile."  
This means that simple updates like adding new phrases to the data set requires rebuilding the entire ‌Siri‌ database, which could take up to six weeks. Adding more complicated features like new search tools could take up to a whole year.  
As a result, there was no path for ‌Siri‌ to become a "creative assistant" like ChatGPT, Burkey believes. Earlier this week, OpenAI unveiled GPT-4, its next-generation AI engine, enabling even more advanced responses from ChatGPT.
But Apple is not sleeping.  We read on CNBC:
Since 2015, Apple has acquired more than two dozen artificial intelligence companies. They’re hardly household names, among them Emotient, Laserlike, Drive.ai, AI Music and WaveOne. But Apple engineers have embedded the procured technologies into the company’s continuously upgraded smartphones, computers and watches, streaming music and television services, operating systems, and myriad mobile applications, as well as the Vision Pro mixed-reality headset, scheduled for release next year. 
Cupertino-based Apple doesn’t talk publicly about AI acquisitions and has generally been tight-lipped about its overall strategy in the space, including its longstanding internal R&D in AI, even as big tech competitors — Microsoft, Google, Meta and Amazon — are loquacious in promoting their generative AI chatbots and large language model (LLM) platforms. 
“That’s the DNA of Cook and Cupertino. They tend not to talk until they release,” said Wedbush Securities analyst Dan Ives. Publicity is not the only difference to glean from Apple’s AI approach to date and the future impact on its consumer-centric business model.  
While its rivals are focused on building stand-alone generative AI models, Apple has targeted machine learning infrastructure. “Apple looks at acquisitions of leading teams of talent in each domain that can bring the machine-learning techniques to particular consumer products,” said Brendan Burke, an emerging technology analyst at research firm PitchBook, which has tracked 30 AI acquisitions by Apple over the past eight years. “That’s led the acquisition strategy to focus on consumer applications of AI primarily, but also operational techniques for machine-learning deployment and edge devices, as well as limited bets on the future of deep learning and more horizontal technologies,” he said.
Elon Musk's xAI and his "TruthGPT"


The most interesting GPT to us is that. of. Elon Musk. If we look at the website TruthGPT.com we read:
Elon Musk says he’s working on “TruthGPT,” a ChatGPT alternative that acts as a “maximum truth-seeking AI.” The billionaire laid out his vision for an AI rival during an interview with Fox News’s Tucker Carlson, saying an alternative approach to AI creation was needed to avoid the destruction of humanity.  
“I’m going to start something which I call TruthGPT or a maximum truth-seeking AI that tries to understand the nature of the universe,” Musk said. “And I think this might be the best path to safety in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe.”  
Musk compared an AI’s supposed lack of desire to destroy all of humanity to the way humans strive to protect chimpanzees, which is pretty ironic given Neuralink’s treatment of them. “We recognize humanity could decide to hunt down all the chimpanzees and kill them,” Musk said. “We’re actually glad that they exist, and we aspire to protect their habitats.”  
Musk framed TruthGPT as a course correction to OpenAI, the AI software nonprofit he helped found, which has since begun operating a for-profit subsidiary. Musk implied that OpenAI’s profit incentives could potentially interfere with the ethics of the AI models that it creates and positioned “TruthGPT” as a more transparent option.  
This isn’t the first time that Musk has mused about creating a “TruthGPT.” He tweeted in February that “what we need is TruthGPT,” while also calling attention to the risks of large-scale AI models, like those made by Open AI. Musk, along with several other AI researchers signed an open letter in March that urges companies to pause “giant AI experiments” that their creators can’t “understand, predict, or reliably control.”  
It’s not clear how far along Musk’s “TruthGPT” actually is — if it exists at all at this point — but it seems he’s actually serious about it since he actually brought up the model during his interview with Carlson. Musk also quietly established a new AI company, called X.AI, in March.
If we go to that website we read about the LLM Musk would be using called Grok.
Grok is an AI modeled after the Hitchhiker’s Guide to the Galaxy, so intended to answer almost anything and, far harder, even suggest what questions to ask!  
Grok is designed to answer questions with a bit of wit and has a rebellious streak, so please don’t use it if you hate humor!  
A unique and fundamental advantage of Grok is that it has real-time knowledge of the world via the 𝕏 platform. It will also answer spicy questions that are rejected by most other AI systems.  
Grok is still a very early beta product – the best we could do with 2 months of training – so expect it to improve rapidly with each passing week with your help.
Since xAI is used as the name, the philosophical foundation of this approach is important.
Explainable AI (XAI), often overlapping with Interpretable AI, or Explainable Machine Learning (XML), either refers to an AI system over which it is possible for humans to retain intellectual oversight, or to the methods to achieve this. The main focus is usually on the reasoning behind the decisions or predictions made by the AI which are made more understandable and transparent. XAI counters the "black box" tendency of machine learning, where even the AI's designers cannot explain why it arrived at a specific decision. 
XAI hopes to help users of AI-powered systems perform more effectively by improving their understanding of how those systems reason. XAI may be an implementation of the social right to explanation. Even if there is no such legal right or regulatory requirement, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. XAI aims to explain what has been done, what is being done, and what will be done next, and to unveil which information these actions are based on.  This makes it possible to confirm existing knowledge, challenge existing knowledge, and generate new assumptions.  
Machine learning (ML) algorithms used in AI can be categorized as white-box or black-box.  White-box models provide results that are understandable to experts in the domain. Black-box models, on the other hand, are extremely hard to explain and can hardly be understood even by domain experts.  XAI algorithms follow the three principles of transparency, interpretability, and explainability. A model is transparent “if the processes that extract model parameters from training data and generate labels from testing data can be described and motivated by the approach designer.” Interpretability describes the possibility of comprehending the ML model and presenting the underlying basis for decision-making in a way that is understandable to humans. 
Explainability is a concept that is recognized as important, but a consensus definition is not available.  One possibility is “the collection of features of the interpretable domain that have contributed, for a given example, to producing a decision (e.g., classification or regression)”.  If algorithms fulfill these principles, they provide a basis for justifying decisions, tracking them and thereby verifying them, improving the algorithms, and exploring new facts. 
This approach addresses a fundamental weakness in AI LLM Models. 

Google's Gemini

Google has its own LLM called Gemini.  Here is a chart of the results of the accuracy Of Gemini so far:

































































This video offers a good overall picture of the state of artificial intelligence:

Here is another video on the limits of current artificial intelligence:

Here is a comparison between ChatGPT4 and Google's Gemini.

CONCLUSIONS

Here is one view: 
The future of large language models holds exciting possibilities. Advancements in research and development are expected to lead to models with a highly sophisticated understanding of language, enabling more nuanced and context-aware interactions. Large language models will likely evolve into domain-specific models, catering to specialized fields such as healthcare, law, finance, and more. Additionally, they are anticipated to enhance their multi-modal capabilities, seamlessly processing and generating text, audio, image, and video data. With a growing emphasis on ethical AI, the future of large language models will prioritize responsible development, ensuring fairness, transparency, and accountability in their design and deployment.

 Vanessa Bates Ramirez wrote an excellent article in Singularity Hub on the future implications of AI quoting Ian Beacraft:

Beacraft pointed out that with the Industrial Revolution, we were able to take skills of human labor and amplify them far beyond what the human body is capable of. “Now we’re doing the same thing with knowledge work,” he said. “We’re able to do so much more, put so much more power behind it.” The Industrial Revolution mechanized skills, and today we’re digitizing skills. Digitized skills are programmable, composable, and upgradeable—and AI is taking it all to another level.

New Problems with AI Coding

Dr. Jeffrey Funk on Linkedin writes on the effect of a lot of programming code being written by AI:
New research on the effect of AI-powered GitHub Copilot on software development finds a “significant uptick in churn code and a concerning decrease in code reuse.” The report concludes with this “question for 2024: who's on the hook to clean up the mess afterward?" 
1)  “Code churn,” or the percentage of lines thrown out less than two weeks after being authored, is on the rise and expected to double in 2024. The study notes that more churn means a higher risk of mistakes being deployed into production. 
2)  The percentage of “copy/pasted code” is increasing faster than “updated,” “deleted,” or “moved” code. “In this regard, the composition of AI-generated code is similar to a short-term developer that doesn’t thoughtfully integrate their work into the broader project,” said GitClear founder.
 
This study by GitClear comes to conclusions quite different from previous ones including the research from GitHub in 2022: 
"Developers who used GitHub Copilot completed the task significantly faster – 55% faster than the developers who didn't use GitHub Copilot."  
GitClear’s analysis of 150 million lines of software code shows that the productivity advantage comes with increased costs down the road: AI code assistants are very good at adding code, but they can cause “AI-induced tech debt.” “Fast code-adding is desirable if you’re working in isolation, or on a greenfield problem.” “But hastily added code is caustic to the teams expected to maintain it afterward.” 
An MIT professor agrees: that AI is like a “brand new credit card here that is going to allow us to accumulate technical debt in ways we were never able to do before.”
 
The rise of AI coding could also impact how engineers are compensated. “If engineering leaders are making salary decisions based on lines of code changed, the combination of that plus AI creates incentives ripe for regrettable code being submitted,” the GitClear founder said.
He said it’s tough to say whether AI tools will be a net positive for software development. He pointed to the benefits of using  #AI to get custom-tailored code answers, from places such as Phind. But he also said reading bad code “is the most willpower-draining component of the job” for developers.  
A study by McKinsey last year found that a “massive surge in productivity” from AI coding is possible, but it depends on task complexity and developer experience. “Ultimately, to maintain code quality, developers need to understand the attributes that make up quality code and prompt the tool for the right outputs,” the study said.  
I don’t think this will be the last word on CoPilot and generative AI for #coding. There will be a long slow process of experimentation that will eventually give us higher #productivity in coding. It’s just that this experimentation will take many years.
It is still unclear what kind of an impact would AI have on employed workers.  An MIT-IBM Watson AI lab researched the question.  They write:
The study found that only 23 percent of workers’ wages could be cost-effectively replaced by AI. The researchers also predicted that it would still take decades for computer vision tasks to become financially efficient for companies, even with a 20 percent drop in cost per year.  
Computer vision in AI allows machines to draw information from visual and digital inputs. In a hypothetical bakery, used as an example in the study, computer vision was used to inspect ingredients for quality control. But that task is only six percent of their work and would cost more to install and operate the technology than for a human to perform the task. 
The study, financed by the MIT-IBM Watson AI Lab, used online surveys to gather information on roughly 1,000 visually assisted tasks in 800 occupations. It found that in many cases it was more expensive to install and maintain AI systems than for a human to perform the same tasks.
There are security concerns. These security concerns are making companies hesitate to use AI apps.
“We hear how these automated technologies could open up the door to breakthroughs across fields ranging from science to education, making life better for millions of people,” Federal Trade Commission Chair Lina Khan said last week during an agency tech summit hosted online.  
“But we’ve also already seen how these tools can turbo-charge fraud, automate discrimination, and entrench surveillance, putting people in harm’s way,” she said.  
Prominent companies including JPMorgan Chase, Northrup Grumman, Apple, Verizon, and Spotify have entirely blocked internal use of ChatGPT, the wildly popular generative AI tool created by Microsoft-backed OpenAI, with several citing privacy and security concerns, CNN Business reported in September. 
Cisco’s research found that many individuals have entered information into AI tools that could be problematic, including employee information (45%) or non-public information about the company (48%). 
In a November survey by Paro, a technology-based finance talent pool provider, 83% of finance leaders viewed AI as a technology that is crucial to the future of finance, but 42% had not yet adopted it. In the study, cybersecurity and data security ranked as a top AI concern (54%), followed by the loss of human judgment or oversight (39%), and cost of acquisition or integration. 
“Everybody agrees that this is the future, but there are quite a few concerns about how to implement and govern it,” Paro CEO Anita Samojednik previously told CFO Dive.
The Problem of Hallucinations


No doubt, some of these security concerns are being intensively addressed.  Where all of this will end remains to be seen, but we suspect they will be solved in time. 


No comments:

AI & MEDICINE


 See These Pages: FUTURISM TECH TRENDS SINGULARITY SCIENCE CENSORSHIP SOCIAL NETWORKS eREADERS MOBILE DEVICES 
 Coming soon.