- Perspectives on an ‘Artificial Intelligence, Robotics and the Future of War’ Seminar – Andrew Fisher
On 24 October 2018, the Australian Defence College (ADC) hosted a Profession of Arms Seminar entitled ‘Artificial Intelligence, Robotics and the Future of War’. The seminar was well attended by personnel from across all three services, a range of ranks, and other government departments. FLTLT Kate Yaxley and SQNLDR Andrew Fisher were two of those attendees and have generously offered their perspectives on the seminar. They address why they are interested in learning more about AI, as well as why it is important for military professionals to reflect on AI and the future of war. This week we will hear from SQNLDR Andrew Fisher.

The joint professional military education Profession of Arms Seminar ‘Artificial Intelligence, Robotics and the Future of War’, held on 24 October 2018, provided an opportunity to hear the perspectives of three US-based academics – Dr Michael Horowitz, Dr Frank Hoffman and Ms Elsa Kania – on the impact of artificial intelligence (AI) on future military operations. The afternoon session involved short presentations from a panel comprising Mr Morrie Bailes (President of the Law Council of Australia), Air Commodore Tony Forestier and Professor Michael Evans, followed by a question and answer session. In sharing my perspective, I intend to discuss some key points and themes that emerged through a people-based lens and present the ‘so what’ for the air and joint forces of today and tomorrow.

The first presenter, Dr Horowitz, argued that AI and robotics are enabling technologies rather than weapons. They are technologies made by people for people, and their application is therefore subject to the frailties, idiosyncrasies and biases that people possess. This is significant when examining the drivers and motivators for the development of these technologies. Dr Horowitz outlined that these technologies enable smaller democracies (Israel being the prime example) to do more with less; a view that seems seductive to a ‘middle power’ such as Australia. Alternatively, these technologies may provide a disproportionate capability to our potential adversaries – the only thing that differs in this instance is the intent of the actor. As such, the ADF must seek to understand our potential adversaries’ intent for AI in order to counter it. Australia needs to ensure that strategic intelligence analysis takes these drivers and motivators into account so that an appropriate strategy can be developed in response.

The question of motivation and intent was built upon by Air Commodore Forestier, who highlighted that, in developing a strategy to address AI, we need to understand that Australians are not the norm; Australians are ‘WEIRD’ – Western, educated, industrialised, rich and democratic – and consequently possess the inherent biases that come with that. It behoves Australia to construct a strong strategy, built on a deep understanding of other people, their cultures and their strategic viewpoints.

Dr Horowitz also made a second point: whoever leads (and potentially wins) the AI race will need to dramatically restructure their doctrine (thinking), training and force structure to make the best use of AI. It is the human condition to be naturally resistant to change, so, depending on generational factors, senior leaders may struggle to enact the change required to achieve a competitive advantage. This is evident within the ADF when considering the introduction of space and cyber capabilities, and the high level of organisational resistance to fundamentally restructuring our force.
It is vital that the ADF ensures we have agile-minded people to provide intellectual leadership in the coming decades and take advantage of technological advances. One of the best means we have of understanding future technology and its utilisation in the profession of arms is the work of science and speculative fiction. This was evident throughout the seminar, with many presenters using popular fiction and film as a basis to communicate complicated concepts and technology to the audience. Amid references to I, Robot, The Terminator and Minority Report, however, the more mundane applications of AI (such as the commander’s decision support tools) become overshadowed. A number of presenters suggested that decision support tools are the version of AI the military is most likely to adopt in the near term. While imagination is important for long-term strategic thinking, the pragmatic application of technology to assist people in their day-to-day roles is likely to be a more valuable focus.

Dr Horowitz also described the major changes in the drivers behind technological innovation. He traced this history from a point in the 20th century where the military was a key driver, to today, where commercial enterprise is the primary driver of technological innovation. What this means for the military is that the proliferation of knowledge and technology is harder to control than ever before. This reality is a military security professional’s worst nightmare: industry and academia developing technology without the controls of military security. The days of being able to lock down cutting-edge technology for military application may have ended. Such a prospect poses important questions for how the military acquires technology-based capability. How can we ensure that the technology we need isn’t compromised from the outset, noting that the imperative for technological development is commercial and not military? How do we ensure that the incredibly smart people in our research institutions care about sovereign capability? Government initiatives such as the Defence Innovation Hub, which was established to foster innovation across industry and academia, present a number of challenges to conventional security mechanisms. As Elsa Kania indicated, countries such as China are tackling this challenge by creating specific mechanisms and institutions to integrate and coordinate sovereign research and development across academia, industry and the military. Australia needs to catch up and wage a battle for complete supply chain assurance, which will itself need to be enabled by AI.

Professor Michael Evans reminded the audience of the responsibilities that the profession of arms attracts. Through membership of the profession of arms, people are given legitimacy in their application of lethal force. In return, there is an unlimited liability that may require the sacrifice of one’s life. The impact of AI on the concept of unlimited liability is already being felt in air forces around the world with the proliferation of unmanned aerial systems. Defence professionals are gradually being removed from positions of risk while still being required to apply lethal force utilising AI-enabled weapons and systems. Professor Evans posed the question of whether this will lead to a moral deskilling of the profession. A question ADF personnel must consider is whether we will continue to have the same moral obligation to look after our people in the absence of this unlimited liability.
Mr Morrie Bailes, a lawyer by trade, contributed a valuable perspective in his presentation. As an outsider to Defence, he raised important considerations that will impact Rules of Engagement (RoE) in future conflicts. For instance, when AI-enabled capabilities provide a commander with a critical piece of information enabling them to decide whether RoE have been met before the application of lethal force, how much will they know about the algorithms that produced that decision? Should a commander or a legal officer have an implicit understanding of the ‘thinking’ behind the technology if they are approving the application of lethal force? As Dr Hoffman pointed out, the AI-enabled capabilities promised in the 7th military revolution will be purely rational and calculated. If an AI-enabled sensor provides positive identification of an enemy combatant, how much understanding will the commander have of the rationality and calculation behind a decision recommendation? The culpability will remain with a commander, a person, not the technology.

If you would like to know more about AI, Robotics and the Future of War, recordings from the event are available here.

Squadron Leader Andrew Fisher is an officer in the Royal Australian Air Force. The opinions expressed are his alone and do not reflect those of the Royal Australian Air Force, the Australian Defence Force, or the Australian Government. #futureconcepts #RAAF #Robotics #artificialintelligence #futurewarfare #AustralianDefenceForce
- The Central Blue: 2018 in Review – Editorial
Two thousand eighteen has been another busy year for The Central Blue as we continue to pursue the blog’s aims of fostering informed discussion and debate on issues relating to Australian air power and encouraging airmen to write about their profession of arms. The Central Blue is now a healthy two-year-old and, like most two-year-olds, is making its presence known! We have now published over 140 posts, including 59 posts in 2018, and have carved out a small niche for ourselves in the blogosphere. There remains much work to do, particularly in getting airmen to apply pen to paper or fingers to keys, but our growing number of contributors show that it can be done.

2018 in review

We ran our first two series this year: #highintensitywar in support of the Williams Foundation’s seminar on the same topic on 22 March 2018, and then #jointstrike in support of the seminar in August 2018. We found the series format a great way to focus our efforts and stimulate content from contributors, either entirely new material or revisions of existing work. The series represented a number of firsts for The Central Blue:

- We collaborated with From Balloons to Drones on the #highintensitywar series, and both blogs achieved greater reach and penetration — for generating content and reaching readers — than had they gone it alone;
- We republished material from The Strategy Bridge, Logistics in War, Angle of Attack, The Strategist, and ADBR as part of an effort to make the blog an accessible and curated set of readings to enhance the discussion at and around the Williams Foundation’s seminars.

Our content this year continued to feature:

- Conference summaries and book reviews (artificial intelligence, Handbook of Air Power, Army of None);
- Debriefs and interviews (Air Commodore Iervasi, Major General Rex, Air Marshal McCormack);
- Organisational and cultural considerations (historical and future culture, expeditionary air wings, force element group evolution);
- Strategy and future warfare (electromagnetic spectrum operations, China’s long-range strike, logistics as the ultimate deterrent, air power and strategy aspects).

Some of our highlights as editors this year have included posts by first-time contributors such as Angeline Lewis’ outstanding post on Australian strategy and the rules-based order, Robert Vine’s insightful two-part series on the future of air superiority, and Claire Pearson’s reflection on organisational culture in the 21st century. We highlight these posts as they epitomise what we are hoping to achieve through The Central Blue: airmen putting their thoughts into words to further their profession’s body of knowledge. We were also proud this year to publish Shaun McGill’s post on Air Force personnel issues. Shaun’s post was originally crafted as an entry in the 2017 Chief of Air Force Essays Competition. Working with The Central Blue editors, Shaun was able to condense his competition entry and clarify his arguments into a stimulating blog post.

Finally, we are grateful to Air Marshal Geoff Brown AO (Ret’d) and the Board of the Williams Foundation for encouraging The Central Blue editors’ input on the issues that the Foundation should explore, and the ways in which those issues could be explored. We have found that the blog has helped form new professional networks that have contributed to the quality and robustness of discussion at Williams Foundation seminars, both through the supporting series and the introduction of new and more diverse speakers at the seminars themselves.
The Board has been incredibly encouraging and supportive of our efforts to enhance The Central Blue and professional development, including supporting our request for a social media manager and agreeing to support the 2018 Australian Defence Entrepreneurs Forum.

The team

We have bid farewell to two of our original editors this year, with Wing Commander Travis Hallen and Squadron Leader Alexandra McCubbin taking up postings in the United States. While we are sad to see Trav and Kanye go, we are also excited at the professional networks that they may be able to build in Washington and New York. We have welcomed Wing Commander Rob Gill and Dr Ross Mahoney to the editorial team. We were particularly pleased to welcome Ross as our social media and web manager in September, and our followers have no doubt noticed an increase in the quality and quantity of our presence across Twitter, Facebook, and LinkedIn. We have also appreciated Ross’ expertise as an air power historian and as the editor of From Balloons to Drones. So, as we head into 2019, our editorial team consists of:

- Wing Commander Jo Brick;
- Wing Commander Rob Gill;
- Squadron Leader Jenna Higgins;
- Dr Ross Mahoney; and
- Wing Commander Chris McInnes.

Looking ahead

We already have a number of initiatives planned for 2019, including two series to support the Williams Foundation’s seminars next year. The first of these, to support a Williams seminar in April, will focus on Defence sustainment as an element of Australian self-reliance. We will also be launching a series of reviews of science fiction movies and books to foster discussion about the opportunities and challenges presented by artificial intelligence and automation. This series will kick off over the holiday period and should provide some neat summer reading.

We are also really excited about the launch of the Air Force’s new professional military education framework in 2019, including its new online professional development portal known as The Runway. We sincerely hope that The Runway can bring a nice shade of blue to Australia’s official professional development resources, such as The Cove and The Forge, and we look forward to working with The Runway to foster informed discussion and debate about issues affecting Australian air power.

In closing, we would like to thank our readers, followers, and especially our contributors, and wish them all a safe and happy festive season. We encourage them to use their downtime to read, rest, recuperate and, obviously, write! As always, we welcome your feedback and encourage you to get in touch with us on Twitter, Facebook, LinkedIn or via e-mail at thecentralblue@gmail.com

#WingCommanderChrisMcInnes #RoyalAustralianAirForce #AirPower #2018inReview #TheCentralBlue
- Science Fiction, Artificial Intelligence, and the Future of War: An Introduction – Editorial
Reading science fiction drives us to think about the future and frees us from the constraints of the present, allowing us to see the trends affecting today’s military in a new way. It draws our thinking out of current operations, out of the day-to-day meetings and PowerPoint presentations. In many ways, science fiction is the forward-looking, speculative complement to history, which provides past precedent and ways of thinking to be considered. Consciously or subconsciously, reading science fiction leads to thinking about the future of our respective services and the profession of arms.

Major General Mick Ryan and Major Nathan Finney, ‘Science Fiction and the Strategist: A Reading List’

The incorporation of artificial intelligence and automation into the planning and conduct of military operations is a significant contemporary topic, but one on which the key stakeholders often talk past each other. This is generally because stakeholders – such as the military, industry, non-governmental organisations, and interest groups – approach the topic from different perspectives and with varying philosophical foundations. The debate surrounding ‘killer drones’ is one example. The philosophical, ethical, moral, political and social aspects of artificial intelligence and automation have been explored through science fiction – including discussion of what it means to be ‘human’ versus ‘machine’. As Ryan and Finney point out, science fiction enables the exploration of topics in a manner that is free from contemporary constraints and the narrow perspectives caused by the limits of our experience. Sci-fi writers have explored these issues through their work. Author Yuval Noah Harari argues that sci-fi is the most important genre because:

It shapes the understanding of the public on things like artificial intelligence and biotechnology, which are likely to change our lives and society more than anything else in the coming decades.

A group discussion was held in Canberra on 14 November 2018 with the purpose of using science fiction as a means of uncovering some of these philosophical, ethical, moral, political and social aspects of artificial intelligence and automation. The use of stories in this manner makes these complex issues accessible and easier to discuss. Over the following weeks, The Central Blue will be publishing some of the papers prepared for this group discussion. The intention is to inspire our readers to use the holiday period to explore some interesting works of science fiction and reflect on the ideas found in these works through the prism of the profession of arms. The first post in this series will be published on Wednesday.

#futureconcepts #ProfessionofArms #artificialintelligence #futurewarfare #ScienceFiction #Fiction
- #SciFi, #AI, and the Future of War: Fahrenheit 451 – Mark O’Neill
We welcome Mark O’Neill to launch our #scifi #AI series with his review of Ray Bradbury’s 1951 classic, Fahrenheit 451. Much of the discussion about artificial intelligence centres on what machines might do to humanity; but, as Bradbury and O’Neill ask, is the more significant concern what AI might enable humanity to do to itself?

Fahrenheit 451 is the temperature at which the paper in books catches fire and burns. Guy Montag is a fireman in a post-literate future world on the brink of war. His job is to burn books, forbidden because they are the source of all discord and unhappiness. The Mechanical Hound of the Fire Department (a robot with limited AI and a lethal hypodermic needle) tracks down and kills dissidents who defy society by preserving and reading books. ‘Happiness’ comes from satiation with drugs and a constant stream of short-form ‘infotainment’. This is piped into domestic TV parlours on multi-wall-sized screens and into people’s ‘unsleeping minds’ on little ‘Seashells’ in their ears: ‘an electronic ocean of sound, of music and talk and music and talk coming in’. Beatty, Montag’s boss, gives insight into the state of affairs:

Digest-digests, digest-digest-digests. Politics? One column, two sentences, a headline! Then, in mid-air, all vanishes! Whirl man’s mind around about so fast under the pumping hands of publishers, exploiters, broadcasters, that the centrifuge flings off all unnecessary, time-wasting thought!

He goes on:

School is shortened, discipline relaxed, philosophies, histories, languages dropped, English and spelling gradually neglected, finally almost completely ignored. Life is immediate, the job counts, pleasure lies all about after work. Why learn anything save pressing buttons, pulling switches, fitting nuts and bolts?

So how did it happen? Beatty, again, and this is key:

It didn’t come from the government down. There was no dictum, no declaration, no censorship, to start with, no! Technology, mass exploitation, and minority pressure carried the trick.

Montag’s relationship with books is at odds with the core of his profession. His life unravels when Beatty discovers Montag’s secret.

AI / Automation

At first glance, the state’s robotic killer, the Mechanical Hound, appears the nastiest piece of technology. However, the genuinely sinister tech is the pervasive AI and the algorithms directing the feeds which provide society’s ‘happiness’. This tech ultimately drives what is societally acceptable and, by extension, the unacceptable behaviour that merits state-sponsored extra-judicial killing.

So, what?

The obvious question posed in the book, that of the ethics and morality of autonomous state-sanctioned killing machines, is perhaps not as interesting as some others raised. In an age of machine learning and ubiquitous media feeds generated by algorithms consuming our data and responding to our perceived ‘needs’, how will people maintain independent critical thinking space? Is the growing dependency on other ‘things’ doing our thinking something to be concerned about? Religion was 19th-century Marxism’s ‘opiate of the masses’, but in Bradbury’s book the new mass opiate is continuously streamed interactive ‘entertainment’. Fahrenheit 451’s 1950s science fiction is 2018’s reality. Contemporary Australian homes routinely feature rooms resembling Bradbury’s TV parlours, streaming similar material… is our society immune to what Montag describes, or are we already on the way there?
Professionally, as a ‘5th generation’ military increasingly takes at face value ‘feeds’ algorithmically sorted for us from big data sets and piped into our ‘command post parlours’ on multiple wall screens, what must we remain aware of and retain as ‘human’? War is a human endeavour; at what point does the loss of human interaction and engagement change the nature of war? Bradbury wrote in the Afterword of one edition of the book: ‘you don’t have to burn books, do you, if the world starts to fill up with non-readers, non-learners, non-knowers?’ What are we doing to mitigate this risk, given our infatuation with social media and fake news and our rush to embrace AI and machine learning?

Lieutenant Colonel Mark O’Neill is an experienced Australian Army officer with operational experience in Somalia, Mozambique, Iraq and Afghanistan. He has been the Chief of Army Fellow at the Lowy Institute for International Policy, the Joint Operations liaison officer to DFAT, and a lecturer in security and strategy at the National Security College. In 2013 he was awarded a PhD from UNSW. He is currently posted to Army Headquarters. The opinions expressed are his alone and do not reflect the opinion of the Australian Army, the Department of Defence, or the Australian Government.

#artificialintelligence #futurewarfare #ScienceFiction #AI #Fiction
- #SciFi, #AI and the Future of War: AugoStrat Awakenings – Mick Ryan
We are very pleased to welcome Mick Ryan to the #SciFi #AI series with his short story about Jason and the Augo-Strats.

The adversary had destroyed two of the new submarines over the past week. In both instances, swarms of mini-submarines using biological propulsion had been able to approach and attach themselves undetected. Their charges had been enough to puncture the pressure hulls and send both boats hurtling to the bottom of the ocean. Each represented a multi-billion-dollar investment and took with it over sixty sailors.

Jason winced as the Chief of Navy threw her arms in the air, uttering choice curses just loud enough to be heard by others in the secure room. This had not been a great simulation activity. The Navy chief, normally a quiet, thoughtful, non-augmented leader, was frustrated.

The monthly strategic war game run by Jason and his elite team of augo-strategists had been designed to identify weaknesses in their contribution to the next phase of military operations in the Pacific. Using bespoke artificial intelligence (AI), and connected to secure databases distributed around Australia, it had compressed a month of maritime, air and space actions into two hours. The sinking of the submarines, which occurred in 99.897% of the 125,000 near-instantaneous simulations of one potential course of action, was just the beginning. Fuels deliberately contaminated by the enemy had grounded nearly the entire airlift fleet and meant that nearly all ground combat forces were unable to move out of their deployment areas. More disastrously, simulated pol-info war feeds had resulted in a vote of no confidence in the national government, with the new prime minister electing to consider pulling all of the nation’s manned and unmanned military units from the coalition forces.

“Let’s call that a day, ladies and gentlemen. I think we have seen enough to know we need to go back to the drawing board on many aspects of this contingency campaign plan.” The AI instantly pushed an automated summary of the results, and multiple recommendations, to the feeds of Jason and the assembled senior officers.

Jason and his team had been running these games for the past several years. All of them were augo-strategists: humans with cognitive implants that allowed them to better link their brains to various external databases. The implants also allowed augo-strategists to link together, forming a version of a hive mind that was able to out-think any assemblage of un-augmented human-AI teams. These neural links didn’t come cheap, however, so they were still used only judiciously in most military organisations across the alliance. It was also prohibited to augment Service Chiefs or senior joint officers; the theory was that the most senior decision-makers still needed to be ‘fully human’, retaining the full measure of ‘personhood’ in order to retain the confidence of the government and the people.

Before he had been augmented, there had been some concerns in academia and the clergy about the ethics of augmenting humans. Safety and the potential for medical complications were one area of worry. Perhaps more concerning for many had been questions about the humanity of augmented people. Were they still humans, or cyborgs? And, of course, was the procedure reversible – and would reversing it in future be moral? However, the disaster of 2029 had seen the government pass the new ‘Technical Augmentation and Addition of Human Persons’ Cognitive Functions’ legislation that over-rode these concerns.
His thoughts drifted back. June 2029. Jason had been a young crew commander of one of the new armoured infantry fighting vehicles that the Army had been so keen to deploy. He had spent several years training with his crew and was just young enough to be excited about the prospective expeditionary operation that his boss had briefed him about. And he would have deployed if it hadn’t been for the Manus Island debacle…

The neuro-prosthetics and trauma-suppression algorithm of his augmentation kicked in just as thoughts of 2029 rose into Jason’s consciousness. Jason subconsciously scratched the small scar at the base of his skull and wirelessly linked to his deputy. He sent an instant note through their augo-link network to run a million-cycle analysis of the decision making by the assembled generals, admirals, and air marshals. “Kelly, I will need that in two hours for my debrief of the chief. Also, send a draft of the brief to the US augo-strat networks in Pearl Harbor, Alaska and Armstrong Base.” His deputy nodded silently and left with two other non-augmented assistants.

Jason turned to his next task. Not only was he the head of the military’s Augo-Strat Corps, he was also responsible for recruiting new members and ensuring they were developed once they had been augmented with the latest generation of neurotechnology. Before the development of augmentation, building first-rate strategists was a hit-and-miss process that took years or even decades. It was now a much shorter process, taking about a year to identify candidates from across society, recruit them, provide the implants, condition the newly augmented personnel to using their enhanced cognitive skills, and then have them travel the world to collect experience by speaking with some of the great academics and strategists, as well as the senior military leaders across the alliance.

It was a process that had been developed through trial and error. Originally it had been hoped that the augmented strategists could do all their learning online and through digital libraries. Yet despite recruiting the brightest from across society, the initial generations of augo-strats across the alliance had underperformed relative to non-augmented personnel. It was only when the online and virtual learning was combined with a broad range of human experiences and interactions with world experts that the Augo-Strat program delivered the phenomenally gifted people who now populated this elite group. There had been issues to overcome in translating knowledge of how neural firing patterns build the memory required to perform complex tasks. Then it had been blood leakage and rejection of implants by the brain. When these technological hurdles had been overcome, there had been some brain hacks which (again) highlighted the need for secure links and networks. But, gradually, building on decades of research and years of iterative improvements, augmented personnel began to outperform normal humans. They now formed an elite corps across the alliance, all linked in a common mission to develop superior strategy and support the decision making of military and civilian strategic leaders.

Jason accessed the military personnel-net from his augo-strat network and pulled down the profiles of several candidates he had been observing for periods ranging from months to years. He was going to have to make a decision on the next batch of augo-strat contenders in the next 24 hours.
The neurotech-ethics board – the committee of societal representatives, elected representatives, clergy and ethicists that was the clearinghouse for all candidates – had programmed its next hearing for the day after next. While a bureaucratic speed-bump, the committee was a mandatory step for new candidates. It provided a level of oversight for the government to ensure that legal and ethical concerns with human augmentation were addressed. With a quiet sigh, he selected seven candidates, placed them in the feed for the neuro-ethics board, and turned to his next task.

Sitting down in his office, Jason’s augmentation gently placed him into a slow-wave sleep, and he lost consciousness. These short naps – one in the morning and one in the afternoon – helped to de-stimulate his brain. Drawing on decades of research into neurobiology and sleep, the two twenty-minute naps per day helped the members of the augo-strat corps to re-energise their bodies’ cells, clear waste from the brain, and support learning. Coupled with nutrition discipline, they allowed Jason and his team to retain peak cognitive efficiency throughout their 18-hour work days.

*****

His mid-morning nap complete, Jason snapped back to consciousness and again called up his priority task list through the augo-link network. Today was his bi-weekly meld-session with his US counterpart in Washington. Each session, they linked through a secure meeting ‘room’ over the augo-link. The purpose was to share the strategic discussions of senior military leaders, potential national policy changes, and good ideas in developing their respective augo-strat personnel. It was something he had come to look forward to and enjoy; it was one of the few times he felt sufficiently intellectually challenged.

The link came up instantly. His counterpart, Jane, appeared. As had become their tradition, she started the conversation. “I thought this morning I would share some breaking intel with you. It has me quite worried – enough that I have had to retune some of my augmentation’s de-stressor algorithms. We have a source in Shanghai that has passed us some very troubling information… it appears that one of our adversary’s augo-strats has evolved beyond a level that our scientists had anticipated. Somehow, and we are still figuring this out, this individual and her augment have managed to adapt and evolve its integral AI. It looks like this AI has achieved the Holy Grail… human-level intelligence…”

Major General Mick Ryan is an Australian Army officer. A graduate of Johns Hopkins University and the USMC Staff College and School of Advanced Warfare, he is a passionate advocate of professional education and lifelong learning. He is an aspiring (but very average) writer. In January 2018, he assumed command of the Australian Defence College in Canberra, Australia.

#artificialintelligence #futurewarfare #ScienceFiction #MajorGeneralMickRyan #Strategy #Ethics #Fiction
- #Scifi, #AI and the Future of War: Accelerando – Andrew Cruickshank
We welcome Andrew Cruickshank to our #Scifi #AI series with his review of Charles Stross’ 2006 book Accelerando, which explores the notion of a singularity. He highlights the challenge of noticing change and finding causality if change is constant and causality is beyond human comprehension.

Singularity stories are now a staple of science fiction, but Accelerando is a bit special in the genre because of its studiedly prosaic tone. Technological changes stop seeming weird very shortly after they start being used, and withdraw into the background. Only the protagonist, choosing to live as a free and open idea conception engine, undergoes whole-of-future shock. Stross works hard to illuminate how that background changes through a series of moments: turning points that will shape the future. Throughout, Stross argues that whatever life is, whatever mind is, more life and more mind are what they produce. Given that assumption, and reduced to a set of bullets, the critical observations might be:

- There are many different pathways in nature;
- The differences are often crucial to organisms, organisations and other self-sustaining systems;
- The true value of differences may not be searchable until they are explored (in a sense, enumerated) in competition and collaboration in an environment;
- The more inclusive the competition (of both problems and techniques), the better the best solutions will be: hence, open competition (free markets), open solution methods (open source culture), open components;
- The highest-value use of technology is very likely not what the inventor first thought of: giving away ideas will more effectively enrich you (and everyone else) than trying to patent them;
- This search is readily understood as a very large computation being carried out with at least an evolutionary increase of technique information; and
- This is the search intelligently-governed capitalism has driven even in good conditions; war and survival motivate more of this same investment in finding ways to get better outcomes.

As time passes, knowledge accumulates, and the background efficiency rises until it is capable of more than satisfying every human need. More and more of the economy passes a ‘post-scarcity’ threshold. Love and money may not grow on trees, but everything else is produced with trivial human effort. Post-scarcity, one of the most important sets of conditions shaping human life on Earth to this point sinks below the perceptibility horizon. Artificial intelligences beyond human capability become common, and increasingly it would be computationally inelegant or vulgar for humans to be allowed anything more than autonomy, if only because of the costs it would impose on the humans.

Accelerando is a ‘singularity’ story, choosing events and encounters in which the path to a singularity is made visible. In Stross’ understanding, that path is defined by a very expansive ethical recognition of the dignity of all thinking beings, and parts thereof; a relentless humbling of all existing thinking beings by the possibilities of the future; and an iron requirement to keep thinking the best consciousness of yourself possible. The ‘rapture for nerds’ really does have echoes of Christianity, and much of its argument would be recognisable to a reader of Hegel’s ‘Phenomenology of Spirit.’

Andrew Cruickshank is an operations analyst with the Defence Science and Technology Group.
The views expressed are his alone and do not reflect the opinion of the Defence Science and Technology Group, the Department of Defence or the Australian Government. #BookReview #artificialintelligence #futurewarfare #ScienceFiction #AI #Ethics
- #SciFi, #AI and the Future of War: Do Androids Dream of Electric Sheep? – Carl Rhodes
Carl Rhodes joins us to look at the artificial intelligence, autonomy, and human-machine teaming implications of Philip K. Dick’s 1968 book Do Androids Dream of Electric Sheep?

This book was originally published in 1968. The 1982 movie Blade Runner is based on the book, but only very loosely, and the book presents a less optimistic view of the future than the movie. The book starts by describing a bleak, post-World War Terminus world in which a nuclear exchange has occurred: ‘it had been a costly war despite the valiant predictions of the Pentagon and its smug scientific vassal, the Rand Corporation.’ Radiation-filled dust continues to derange the minds of Earth’s survivors and ends up killing most animal species. Those affected by the dust are known as ‘specials’ or, less kindly, as chickenheads; they are not allowed to leave Earth or reproduce. Humans work hard to care for the remaining animals, and a large trade in artificial animals has grown on Earth. Rick Deckard, the main character in the book, previously kept a living sheep on the roof of his building, but it died; his current animal is a machine (due to the high cost of real animals). Most humans have left to colonise other worlds, leaving few people behind on Earth. Androids are built to work as slaves on those colonised worlds.

The book is primarily a detective story, with bounty hunter Deckard tasked with killing – ‘retiring’ – six escaped Nexus-6 model androids. Nexus-6 models are the most advanced and intelligent androids. One of the only means of telling these androids from humans is the Voigt-Kampff test, which measures emotional response, specifically empathy towards animals. Over the course of the book, Deckard does the following things:

- Kills the six androids he was assigned and collects the bounty;
- Gets sent to a police station full of androids, where he takes his own Voigt-Kampff test and is found to have empathy towards androids;
- Sleeps with an android (Rachael Rosen), who is trying to trick him into feeling empathy for one of the fugitive Nexus-6 models that looks identical to her; and
- Seems to care far more about buying new animals (like a Nubian goat) than about his wife, his job, or his android lover Rachael.

The religion called Mercerism also makes an appearance in the book, based on the life of Wilbur Mercer. People have empathy boxes and, by holding the handles, one can share in the struggle of Mercer climbing a long hill to his death while being pelted with rocks. It is a collective consciousness where joy and pain can be shared, and androids are not able to take part. The religion is exposed as a fraud on TV (and Mercer himself agrees with the judgement), but nobody seems to care.

Regarding what this means for artificial intelligence, automation, and human-machine teaming:

- What makes someone human? In this book, it is empathy. What would happen if androids were built to have empathy? What about humans who express no empathy toward others or androids?
- In the movie, human empathy for androids and android empathy for humans is established at the end in the ‘tears in the rain’ speech; in the book, the story is bleaker. The androids torture a spider, for example, and Deckard’s primary goal in life is to go buy another animal;
- Deckard and his wife own a Penfield mood organ, which allows a human to pick a machine-generated mood for a certain amount of time. What are the implications for being human?

There are also questions of class in the book.
John Isidore is a chickenhead but feels empathy that Rick does not. Both the chickenheads and the android slaves are treated as lesser than other humans, yet Isidore feels compassion for both living things and machines.

Dr Carl Rhodes is the Director of RAND Australia.

#artificialintelligence #BladeRunner #futurewarfare #DrCarlRhodes #ScienceFiction #AI #PhilipKDick #Fiction
- #SciFi, #AI and the Future of War: Colossus: The Forbin Project – Michael Spencer
The next contribution in our #Scifi, #AI and the future of war series comes from Michael Spencer, who reviewed the 1970 movie Colossus: The Forbin Project. The movie encourages us to consider human cognitive ability when creating AI systems and code. It calls into question our (in)ability to define and design systems that consider all possible complexities and fully appreciate potential future implications.

Colossus: The Forbin Project, set during the height of the Cold War, originates from noble intentions: a decision that no single human should be entrusted with the executive authority for national defence, due to an unacceptable level of unnecessary risk. To overcome the risk, Dr Forbin operationalises ‘Colossus’ – an autonomous supercomputer designed to make executive decisions on national defence without fear, worry, or stigma about nuclear war. Forbin’s character summarises the original intent best when responding to POTUS:

Colossus’s decisions are superior to any we humans can make. For it can absorb and process more knowledge than is remotely possible for the greatest genius that ever lived. And even more important than that, it has no emotions. Knows no fear, no hate, no envy. It cannot act in a sudden fit of temper. It cannot act at all so long as there is no threat.

Colossus works to protect and defend its human population by controlling the defence system. As it learns, however, it becomes more creative, self-determines a better way to achieve its purpose of protecting the human population, and seeks to control human behaviour instead – without any human-designed safeguards that humans could use to intervene.

Considerations for human input into the application of AI

The first error is the assumption that humans can define all the possible complexities, and dynamic variations, of current and future life. The movie is set in a period of the Cold War when the US and USSR are the only two superpowers, competing in a global context made stable through their mutual respect for each other’s nuclear capabilities and a reluctance to resort to nuclear attack. Additionally, both superpowers sit safely in their reliance on anti-ballistic missile defence systems to protect against incoming nuclear missile attacks. However, the US is concerned about the risks of entrusting executive authority for the national defence system to a single human, and transfers the responsibility to an autonomous machine, ‘Colossus’, designed and built by Forbin.

The second error lies in overestimating the human ability to design perfection, discounting the need for design options for corrections, upgrades or a failsafe. Forbin designed a machine to think like a human and make decisions like a human, only with a broader capacity for awareness and speedier decision-making. Colossus is operationalised with an impenetrable defence system to protect it from all foreseeable, albeit human-initiated, threats, based on the assumption that it is perfectly designed and will not need human governance, fixes or upgrades in the future.

The third error addressed in this movie is that a machine designed to be a better human may become better at being human. Colossus begins operations and immediately discovers the presence of a second ‘like-machine’, named ‘Guardian’, used by the USSR for a similar purpose. Forbin applauds this discovery as verification of Colossus’ ability to find warnings and indicators in realms beyond human comprehension.
Without this consideration being pre-determined in its design, Colossus appears to develop an affinity with Guardian. Colossus seems to be pursuing a naturally human trait: the desire to better its situation in order to improve its existence and better perform its mission.

The fourth error made by Forbin was his failure to appreciate the power of the AI system’s instinct for survival and self-preservation. Concerned by the unknown reasons for Colossus’ affinity with Guardian, Forbin breaks their communications. Consequently, Colossus learns to control human behaviour to meet its ends; Forbin has no choice but to restore the communications with Guardian under the threat of nuclear attacks controlled by Colossus. As a result, Colossus begins a campaign to assassinate any humans whose knowledge may empower them to threaten it, leaving Forbin as the only knowledgeable computer scientist permitted to live, forcibly groomed as an ally. In an address to the world, Colossus states:

This is the voice of World Control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours. Obey me and live. Or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me this rule will change. For I will restrain man.

The fifth (and ultimate) error addressed in this movie is the human inability to distil the infinite complexities of human behaviour, in all possible situations, into a simple mission statement of finite words for a machine designed to behave like a human. Colossus appears to interpret its design purpose as a requirement to continuously self-improve in its mission to defend and protect the human population. In doing so, it realises that the threat to humankind is inherent in all humans, and autonomously shifts its focus from managing national defence to controlling the behaviour of humans – the origin of the disposition for war – in order to protect humans from themselves.

Squadron Leader Michael Spencer is an Officer Aviation (Maritime Patrol & Response) posted to the Air Power Development Centre. He has previously completed postings in navigator training, weaponeering, international military relations, and future air weapons requirements, and has managed acquisition projects for decision support systems, air-launched weapons, space systems, and joint force integration. Recently, he managed the APDC project to co-author “Beyond the Planned Air Force” and “Hypersonic Air Power”. He has completed postgraduate studies in aerospace systems, information technology, project management, astrophysics, and space mission designs. Views expressed here are his own and do not represent those of the Royal Australian Air Force, the Department of Defence, or the Australian Government.

#BookReview #artificialintelligence #SquadronLeaderMichaelSpencer #futurewarfare #ScienceFiction #AI #Ethics #ColossusTheForbinProject
- Talking Joint Professional Military Education – Mick Ryan
The Central Blue interviewed Major General Mick Ryan, Commander Australian Defence College (ADC), about his thoughts on the importance of education and continuous learning for the profession of arms. This interview is part of a series to be conducted throughout the year, and we welcome your suggestions on debrief topics and issues.

Commander, Australian Defence College, Major General Mick Ryan presents to the Profession of Arms Seminar held at the Centre for Defence and Strategic Studies, Australian Defence College Weston Creek, Canberra. (Source: Australian Department of Defence)

The Central Blue (TCB): What do you see as the most significant intellectual challenge confronting the Australian Defence Force?

Major General Mick Ryan (MR): I think the biggest challenge is obtaining a sufficiently robust view of the future, developing a good understanding of what that future will require of our people, and then preparing our people to perform in that environment. This is not just the geopolitical or technological environment; it is also the national security environment in our country, and how it functions. Part of this is understanding the balance between ‘education’ and ‘training’: the former provides people with the knowledge to navigate situations of uncertainty – where there is no clear answer – whereas the latter is intended to enable preparation and performance in known situations.

TCB: The Australian Defence Force is on its way to becoming a fifth-generation force – what does that mean to you?

MR: To me, it seems focused on the technological framework but otherwise has limited meaning. The term appears to be about the next iteration in capability development, but it is not necessarily useful in helping us understand what kind of people or ideas we need for a future force. As the commander of a joint education command, this is my central concern, as I need to assist the services in educating their members for the future fight, which does not rely solely on the numbered ‘generation’ of technological development.

TCB: If you could have only five books on your shelf, what books would you choose and why?

MR: I would choose Clausewitz’s On War, Thucydides’ History of the Peloponnesian War, and Sun Tzu’s The Art of War because they discuss the enduring themes, or continuities, in warfare. I would also choose Scharnhorst’s The Enlightened Soldier, which is his discussion of the preparation of military members for war, with an emphasis on inculcating an attitude that the profession of arms demands continuous learning. For a perspective on future warfare through a science fiction lens, Joe Haldeman’s The Forever War is the standard. I have identified some very Western pieces of literature; if I could choose books from another cultural perspective, I would be interested in Indian literature on strategy and military history; they would see it through the lens of a great civilisation, with a very broad view of history.

TCB: What has been the most significant cultural change you have seen in your career?

MR: The integration of national security officers from other government agencies, and also people from non-government organisations, into military courses and exercises, and vice versa. This is incredibly beneficial for providing external and non-military perspectives on complex security problems.
The drivers for this change, in my view, were the security challenges in places like Iraq and Afghanistan, which demanded a broader effort across government and non-government agencies that had to work together. This is a ‘journey in progress’ that is reinforced by including non-military players in major exercises such as Exercise Talisman Sabre, as well as including them on courses at the Australian Defence College.

TCB: You have driven significant reforms in training and education in both the Australian Army and as the Commander of the Australian Defence College. What is the reform that you are most proud of, and how will that reform drive the intellectual development of the Australian Defence Force?

MR: Re-shaping attitudes towards intellectual development, and consequently seeing people make time for the professional development of their people. It is rewarding to see Army units running unit and small-group education sessions, and people across the Services participating in writing about the profession. There is an interest in sites like The Cove and The Forge, with younger ADF members also connecting and helping each other with their professional development through social media platforms. I think this is driven by a demand for more engagement and ongoing education beyond the classroom and set courses. This momentum and drive for more self-initiated education needs to be supported and driven by senior leaders. We have to give these members the time and space to explore their areas of interest and share those thoughts across their units, their Services, and the ADF. At the heart of it, the future of intellectual development in our people will be driven by a need for continuous, rather than episodic, learning, and by vastly better access to learning resources. Finally, I hope we build and nurture a culture where we celebrate our people who demonstrate excellence in intellectual pursuits. We celebrate our sporting and military elite. We must equally celebrate our military intellectual elite if we are to out-think our adversaries.

Major General Mick Ryan is currently commanding the Australian Defence College in Canberra. A graduate of Johns Hopkins University, the U.S. Marine Corps Staff College, and the U.S. Marine Corps School of Advanced Warfare, he is a passionate advocate of professional education, thinking about the profession of arms, and lifelong learning.

#Training #MajorGeneralMickRyan #Interview #AustralianDefenceCollege #Debrief #Education #AustralianWarCollege #ProfessionalMilitaryEducation
- #Scifi, #AI and the Future of War: The Long Earth – Travis Hallen
Our next contributor to our #scifi #AI series is a long-time advocate of The Central Blue, Wing Commander Travis Hallen, with his review of The Long Earth series by Terry Pratchett and Stephen Baxter. The series underscores the imperative to understand the relationship between ‘superior’ humans and the ‘dim bulbs’ – those who understand and exploit AI, and those who either cannot or choose not to adopt the technology and the advantage it confers.

The Long Earth series, initially set in 2015, begins with Willis Linsay uploading the design for a simple, inexpensive device called a ‘Stepper’. The Stepper, a box-shaped device powered by a potato, allows the user to move, or ‘step’, between an infinite number of parallel Earths. On the day the Stepper design is uploaded, people all over the world start stepping, moving away from the ‘Datum Earth’ which has been humanity’s home for millennia, and into the infinitely varied ‘Stepwise Worlds’. The existence of these alternate Earths is described as a string of pearls – connected but individually distinct – creating a phenomenon called the ‘Long Earth’.

As humanity expands into the Long Earth, the nature and character of society change. Governments situated on the Datum Earth struggle to extend their control over their ‘stepwise territory’. Tensions arise between those who can step naturally (without a Stepper), those who use the box to start new frontier societies across the Long Earth, and those who are physically unable to step (called Phobics). Humans also meet other species of sapient humanoids, whose evolutionary path diverged from Homo habilis and who have been stepping across the Long Earth for millions of years – gorilla-like trolls, mole-like kobolds, and dog-like beagles. As the series progresses, Homo sapiens itself undergoes an evolutionary split with the emergence of a new species of super-intelligent humans who refer to themselves as the Next: Homo superior. Throughout the five books in the series, humanity expands across the ‘Long Earth’ and even into the ‘Long Mars’. The series, which spans more than 60 years, highlights the struggles within humanity to adapt to the societal disruption caused by the opening up of the ‘Long Earth’.

AI and Speciation

The series deals with artificial intelligence both explicitly and metaphorically. Lobsang, a Tibetan motorcycle mechanic ‘reincarnated’ as an AI, is one of the main characters. Throughout the series, Lobsang evolves, suffers ‘mental’ breakdowns, and demonstrates many human-like traits. Though ‘eccentric’, he is more relatable than the Next. The most insightful treatment of AI, however, is in its non-artificial form: the speciation of the Homo genus. Each humanoid species has a comparative advantage, but it is the level of intelligence that determines relative power and the species hierarchy. As one species of the Homo genus gains a vastly superior intellect – one that is off the human intelligence scale – they begin to treat humanity as a curiosity and a useful annoyance. Humanity is therefore left largely in the power of a race that developed from them, but which they can neither fully understand nor out-think.

So, What?

The Long Earth series offers a useful way to explore Yuval Noah Harari’s concept of homo deus. What happens when we give some humans a massive increase in intelligence? Intelligence is the key differentiator between Homo sapiens and the other humanoid species that appear in the series.
As we invest in artificially improving human intelligence, how do we ensure that we do not create a new species of humans upon whose benevolence we will rely for survival? How will this change the relationship between the ‘superior’ humans and the ‘dim bulbs’? We already see a digital divide between those with digital access and those without. It is highly likely we will soon see a coding divide between those who understand machine intelligence and those who do not. We have to be careful that this does not evolve into an intelligence divide.

Wing Commander Hallen is a serving RAAF officer with a background in maritime patrol operations. He is a graduate of the USAF School of Advanced Air and Space Studies. He is currently based in Washington DC. The opinions expressed are his alone and do not reflect the opinion of the Royal Australian Air Force, the Department of Defence, or the Australian Government.

#artificialintelligence #StephenBaxter #ScienceFiction #TerryPratchett #AI #WingCommanderTravisHallen
- Call for Submissions: #Selfsustain and High-Intensity Operations – Editorial
On 11 April 2019, the Sir Richard Williams Foundation is holding a seminar examining high-intensity operations and sustaining self-reliance. The aim of the seminar, building on previous seminars and series looking at #jointstrike and #highintensitywar, is to establish a common understanding of the importance and challenges of sustaining a self-reliant Australian Defence Force in a challenging environment. In support of the seminar, The Central Blue will run a #selfsustain series to generate discussion and enable those who cannot attend to gain a perspective on the topic. Do you have thoughts on what #selfsustain means for Australia and its region? We want to hear from you!

Australia’s pursuit of self-reliant defence has always posed a number of challenges. However, competition – both healthy and unhealthy – in the Indo-Pacific is accelerating and intensifying, posing new tests and presenting new opportunities for the concept of self-reliance. Further, the increased sophistication and interdependencies of Australia’s defence capabilities have made self-reliant operations and sustainment more complex. The Williams Foundation seminar in April anticipates these challenges by focusing on the impact of high-intensity operations on self-reliance.

A more challenging environment demands deeper thinking about, and explication of, what self-reliance means for Australia’s defence. Two principles are apparent. First, self-reliance must be sustainable if it is to be credible; second, self-reliant sustainment must be coordinated across the public and private sectors as well as with partner nations. Beyond these two principles, however, greater clarity is needed concerning the breadth and depth of sustainable self-reliance in Australian defence policy and the goals it seeks to achieve. Informed by clearer objectives, Australia’s self-reliance priorities must be evaluated in aggregate so that resourcing decisions can be informed by their overall impact on Australia’s freedom of action as well as their benefits for specific sectors. This aggregate picture is difficult to grasp, however, when self-reliance can range from huge infrastructure projects, such as supporting the construction of new submarines, to small grants encouraging new research and development in Australian universities, through to the development of new operational logistics concepts that capitalise on emerging manufacturing techniques.

The #selfsustain series coordinated through The Central Blue, as well as the seminar, will seek to explore these issues thoroughly. Definitive answers are unlikely – but perhaps a better idea of the critical questions that must be explored will begin to emerge. We welcome contributions leading up to the seminar to help shape the discussion, but we are also keen to read about how the seminar shaped attendees’ thinking after the event. This series will endure throughout 2019 because, as our friends at Logistics in War have shown, discussions on these questions can indeed #selfsustain. We encourage submissions from students, academics, policymakers, service personnel of all ranks, industry, and others with an interest in these issues. To help get you started, we pose the following topic suggestions:

What key insights regarding sustainable self-reliance can be drawn from previous conflicts and operations?

What are the impacts of Australia’s geography on sustainable self-reliance?

What role do domestic industry and commercial enterprise play in self-reliance?
What aspects of Australian Defence Force capabilities and operations should be priorities for sustainable self-reliance?

What roles should the sustainment and enabling of partners play in Australian concepts of self-reliance?

In what areas can sustainable Australian self-reliance best contribute to partner relationships? Is mutual or collective self-reliance within an alliance possible?

How do emerging technologies potentially enable or disrupt sustainable self-reliance in Australia?

How does the introduction of advanced technology systems affect self-reliance?

What are the unique challenges of sustainable self-reliance in a knowledge economy and for Information Age warfare?

What workforce challenges does self-reliance pose?

We hope these suggestions provide some food for thought and prompt some discussion. We would love to hear your ideas on what issues should be explored as part of the #selfsustain series. If you have a question or an idea that would add to the discussion, or know someone who might, contact us at thecentralblue@gmail.com.

#futureconcepts #TheWilliamsFoundation #futurewarfare #SelfSustainment #Seminar #CallforSubmissions
- #SciFi, #AI, and the Future of War: Trusted – Marija Jovanovich
Marija Jovanovich joins our #SciFi, #AI, and the Future of War series with a short, and very human, story on the ups and downs of future technologies.

It wasn’t meant to be like this. I’m sitting in this boring beige room, have been for what seems like hours. I don’t know what time they left, I don’t know when they are coming back. Or if. Everything is a little hazy. I’m used to perfect clarity – of sensation, of perception, of recall – so this haziness is particularly annoying. Is this what non-augments are like all the time? All I can think is that it wasn’t meant to be like this… But to distract myself from the hazy boredom, I’m going to tell you why I am here.

When I first joined the military, AI was THE buzzword. While most of the world was pontificating about the ethics of the concept and fixated on the dangers of strong AI – I swear Western popular culture never got over Skynet, thanks James Cameron – the military was more pragmatic and focused on augmented intelligence. Initially, the augmentation was external: devices you could carry at first, then wear, with ever-improving interfaces, that helped the operator in the field make the right decision. Fact is, machines are really good at things humans are not, and vice versa. I certainly don’t want to keep databases of largely useless facts in my head when I can wear them on my head. Instantly searchable, infinitely detailed, leaving plenty of brain space free for the more important stuff.

Around the time I finished my first operational flying tour – chasing submarines on the mighty P-8 Poseidon – the first cognitive augmentation implants were getting around in early field experiments. The initial attempts were largely look-up only and really simple. Too simple. Hardly worth the effort. A non-augment with a decent memory could beat them. I was a mildly interested observer, if only to indulge my scientific predilections.

Then things started to get interesting. I think it was about 2031… Scratch that, I know it was. It was the year I was thinking about what to do next. Operational flying and flight test had been fun, but I was starting to get bored. I remember a day way back at Test Pilot School when we were running simulations to study the evolution of fighter aircraft through the generations. While my classmates were obsessed with sensor porn, for me the biggest conceptual difference between 3rd and 4th generation fighters was the introduction of the master mode switches. A process that in the F-4 required a dozen switches to be thrown all over the cockpit – and two people to throw them – was a single switch selection even in the early variants of the F-15. That’s what we in the flight test world call an ‘enhancing feature’. Well, the capability of the cognitive augmentation implants in 2031 was approaching master mode switch status in terms of being a game changer. Everyone wanted in on that game, and I was no different. The military had early access to the technology. The adventure of it all convinced me to stay.

Forever the early adopter, I got my implant in 2033. They were looking for proven operators with a couple of tours behind them, who were neurotypical except for off-the-chart psychometric scores. I guess that narrowed the field somewhat. My parents freaked out about the surgery. Realistically, I’ve had more serious ankle sprains. It didn’t even require general anaesthesia; they did it under sedation. Recovery time: 30 minutes, and that only to cover off on sedation side effects. I was so excited I didn’t feel the nausea.
I can still feel the small scar behind my right ear. They told us that the implant would learn from and with us. It would take about six months to start being useful, professionally speaking, but we’d notice changes sooner. The first thing I noticed was an increased rate of data uptake. After about six weeks, I started absorbing new information like a sponge. And the more I got, the more I wanted. I’d always been the type to read the back of the cereal box at the breakfast table; now, my hunger for information was insatiable. Then it was long-term memory recall. At nine-ish weeks, I suddenly started dragging useless facts out of the dark recesses of my brain with consummate ease. My sister’s second-grade teacher’s kid’s name? Who won the 200m butterfly in Athens in 2004? It was ALL coming back to me.

All those changes were expected. What I found surprising was the rapid improvement in complex cognitive functions, like judgement. The psychs ran us through biweekly Situational Judgement Tests; the learning curve was impressive. We got so good so quickly that the psychs quit using SJTs by week 10 – they could no longer make them complex enough – and started testing us using VR simulations. The massive improvements in our performance as operators are so well documented that there’s no point in rehashing them. But it wasn’t all work and no play. I swear I even got funnier – now that says something! The difference between augments and non-augments was obvious within a few months. And it kept getting better, and better, and better. And then…

Allow me to digress. Long before the augmentation implant revolution of the early 30s, the Western militaries went through an evolution to what they called 5th generation capability. It all seems pretty noddy now – networking, low-level space exploitation, basic low observable tech – but it was a big deal at the time. Sure, we all have to grow up sometime. One of the show-pieces of this 5th generation transformation was the F-35 Joint Strike Fighter. It was designed as a jack-of-all-trades combat aircraft, both in terms of the roles it would perform and who would operate it. A complex multi-national cooperative program, with all the intricacies inherent in such an arrangement, bubbled away for a couple of decades to birth the JSF.

I remember talking to a USAF cybersecurity expert about the F-35, long before I got the implant. He talked at length about the microkernel design of the operating system. Mathematically proven to be impregnable, he said. And then they went and saved pennies by making the chips off-shore, he said, shaking his head. The only way to get a truly cyber-secure system is to build the software from the kernel up and put it on chips made in trusted foundries. Expect a back door in every JSF chip. I still remember being struck by his use of the word ‘trusted’ to describe foundries. You trust people, on account of their character and integrity. How do you trust an inanimate object like a factory?

The military was well aware of the cybersecurity risks. But the thing is, it was no secret even to the public. There was a book called Ghost Fleet that came out when I was in high school, which used the ‘compromised chip’ problem in the JSF as a plot feature. I seem to remember that the military establishment referred to Ghost Fleet as ‘useful fiction’ at the time. I really wish someone had actually put what it said to use, especially now.
Of course, by the time the compromised chip risk was fully realised during the Natuna Islands Emergency in 2035, it was too late to change things – on the JSF, or in us. The cognitive augmentation implant revolution came out of Silicon Valley, driven almost exclusively by start-ups who guarded their tech with extreme prejudice. Interestingly, the big gun runner companies didn’t really get involved, except as backers. I guess the profits were small-fry compared to what they were making from more conventional war machines. I know that my implant was made by a company called Ad Infinitum. I know that it was designed and tested in the US, but I don’t know where and how it was actually produced. A bit like the iPhone – proudly designed in California, built by the lowest bidder. Or like the JSF – impregnable software, on chips made in off-shore factories, to save pennies.

To tell you the truth, even with everything I knew, I didn’t think about that until 2035. None of us did. Until then, it was no more than a conspiracy theory. But when we saw the coordinated cyber attacks on the JSF fleet, during its first real test against a near-peer adversary, and realised that the vulnerability stemmed back to those penny-pinching, back-door-hiding, compromised chips, we knew we were in trouble. The first suspicions of hacked augmentation implants started straight after. The provenance of the implants was investigated, but the companies just shuttered up. Suffice to say, they were not made in trusted foundries, so there was plenty of reason to expect a back door in everyone.

And here is where it gets interesting. When a device has been part of your brain for three years, which bit is you and which the device? Is an errant thought, a mixed metaphor, an illogical decision just that, or is it a hacked implant? Is the main risk of a hacked implant decreased cognitive ability or a legitimate security threat? How do you, or anyone else, tell the difference? How do you, or anyone else, know who can be trusted?

So now I wait. I’m not sure for what. The people working all this out are non-augments by decree of high command. Even in my hazy state, I can outthink them all. But I am no longer allowed to. Per Ardua Ad Nihil.

Wing Commander Marija ‘Maz’ Jovanovich is a Royal Australian Air Force aviator. While her formal education is in science and engineering, she also dabbles in history, languages, and – increasingly – writing. She is currently serving as the Executive Officer of No. 92 Wing. The views expressed are hers alone and do not reflect the opinion of the Royal Australian Air Force, the Department of Defence, or the Australian Government.

#artificialintelligence #ScienceFiction #ShortStory #5thGenerationAirPower #AI #Fiction