Hello John and Doves,
Looking at these recent July
article titles certainly helps explain the new Senate
bill to prevent Artificial Intelligence from launching
nuclear weapons.
So
AI Has Come to This?
"The US Army wants help with
"continuous, real-time predictive visualization" of enemy
actions. The project is spurred by fears that human
analysts won't be able to keep up with complex
warfare. The US Army wants AI that can predict what
the enemy will do just minutes before the enemy actually
does it."
The U.S. DoD is running military exercises now until the end
of July "to determine AI's use in decision-making and
regarding sensors and firepower." So far, the results
are "highly successful." "Very fast." "The
AI tools...could handle a request that included processing
secret-level and classified data - which would take humans
hours or days to complete - in just 10 minutes."
They "just did it live...with secret-level data."
They are testing AI with a Chinese
invasion of Taiwan.
And they are concerned with
something that AI has a habit of doing -
hallucinating. "Hallucinations refer to instances when
an AI generates untrue results not backed by real-world
data. AI hallucinations can be false content, news, or
information about people, events or facts."
The U.S. military "used a tool
called Donovan by developer Scale AI to determine the
outcome of a hypothetical war between the United States and
China over Taiwan. ... This test was based on 60,000 pages
of American and Chinese military documents and open-source
information."
So, would it be better to have AI in
control of our military 'assets' rather than drag-show queens
and gender-confused military personnel?? Hmmmm.
I can certainly see the desire to
use AI for its speed and ability to analyze a lot of data -
but the part about 'hallucinations' is certainly
concerning. We don't want AI telling us a Russian
fishing boat in the Bering Sea has been 'hallucinated' into
a Russian Pacific Fleet invasion of Alaska!
Pray for the peace of Jerusalem!
Maranatha!
Chance