
Commercializing AI: A Fireside Chat with Mark Maybury

Shalini Urs

AI: The New Epoch

Artificial Intelligence (AI) has undergone a revolution in recent years. After modest progress, coupled with some skepticism, over the last couple of decades, AI has taken center stage. Humongous datasets, deep learning algorithms, and models have powered AI's huge strides in solving real-world problems such as recognizing and generating text, voice and image recognition, and self-driving cars. Authors Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher note in their book The Age of AI: And Our Human Future that AI marks "a new epoch," since it ends the Enlightenment ethos that placed humans (displacing God) at the center of all that is knowable, putting in their stead machines with superior intelligence.

ChatGPT: Commercializing LLMs

Unless you have been living under a rock, you must have heard of ChatGPT. Touted as a game-changer, OpenAI's ChatGPT (Chat Generative Pre-trained Transformer) is the talk of the town. Launched in November 2022 and built on GPT-3.5, an upgraded version of OpenAI's GPT-3 family of large language models (LLMs), it has taken the world by storm. Free to use and easily accessible, ChatGPT is arguably the most capable AI chatbot released to date and has brought LLMs to a mass audience. Millions are using it, and the result has been an explosion of fun and sometimes frightening writing experiments that have turbocharged the growing excitement and consternation about these tools, as this Nature editorial warns us. With GPT-3.5, LLMs have made tremendous strides relative to earlier models, which, as Ricardo Baeza-Yates noted, were beset with problems such as "fisheye views" stemming from the self-attention mechanism they use, and the perils of stochastic parroting when learning patterns (Bender et al., 2021). ChatGPT is inspiring awe, fear, stunts, and attempts to circumvent its guardrails. Given that conversational AI is likely to revolutionize research practices and publishing, creating both opportunities and concerns, Van Dis et al. (2023) list five priorities for research: holding on to human verification; rules for accountability; truly open LLMs; embracing the benefits; and widening the debate.
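
To make the self-attention mechanism mentioned above concrete, here is a minimal sketch of scaled dot-product self-attention in plain NumPy. It is purely illustrative, assuming a single attention head, random toy weights, and no masking or positional encoding; it is not OpenAI's implementation.

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        """Scaled dot-product self-attention over a sequence of token vectors.
        X: (seq_len, d_model) token embeddings; Wq, Wk, Wv: (d_model, d_k) learned projections.
        Returns one context vector per token."""
        Q, K, V = X @ Wq, X @ Wk, X @ Wv                # project tokens to queries, keys, values
        scores = Q @ K.T / np.sqrt(K.shape[-1])         # every token scores every other token
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax: attention weights per token
        return weights @ V                              # weighted mix of value vectors

    # Toy example: 4 tokens, 8-dimensional embeddings, 4-dimensional projections
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4, 8))
    Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)          # (4, 4): one context vector per token

One way to read the "fisheye view" critique is that these attention weights tend to resolve nearby context sharply while representing distant context only coarsely, much as a fisheye lens does.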

ChatGPT is the finest exemplar of commercializing AI. LLMs are now mainstream and in the hands of millions of people and the marketplace. Along with hype and chatter, ChatGPT has also intensified the AI research race, highlighting the nexus and interdependencies between academia and industry in their commercialization efforts.

Mark Maybury on Commercializing AI

In this episode of InfoFire, listen to Mark Maybury, a C-suite executive and board member with a record of leadership success across the public and private sectors and expertise in executive management, innovation commercialization, venture investment, defense and intelligence, cybersecurity, and AI/ML.

Dr. Maybury is Vice President, Commercialization, Engineering & Technology for Lockheed Martin. He serves as a Special Government Employee for the Defense Science Board, providing strategy and technology advice to the Office of the Secretary of Defense. He was Stanley Black & Decker's (SBD) first Chief Technology Officer. Mark spent 27 years (1990 to 2017) at the MITRE Corporation, including as VP of Intelligence Portfolios and Director of the NIST-sponsored National Cybersecurity FFRDC (NCF) supporting the National Cybersecurity Center of Excellence (NCCoE). He also served as VP and CSO, and as CTO, of MITRE.

Mark brings unique perspectives to the table with his vast and diverse experience across the public and private sectors. Our conversation centered on a range of issues: the public vs. the private sector; science fiction and innovation; robotics and AI in manufacturing and healthcare; and extreme innovation, among others.

R&D in Public and Private Sectors: Return on Investment

A country's or company's innovative capacity is driven by its investments in R&D. Most countries and companies invest in R&D for two primary reasons: first, technological innovations boost economic growth directly; and second, they generate positive externalities in the form of knowledge creation (Hasan & Tucci, 2010).

The global investment in R&D is staggering. In 2019 alone, organizations worldwide spent $2.3 trillion on R&D—the equivalent of roughly 2 percent of global GDP—about half of which came from industry and the remainder from governments and academic institutions. Since 2000, total global R&D expenditure has more than tripled in current dollars, from $675 billion to $2.4 trillion in 2020. Global R&D performance is concentrated in a few countries, with the United States performing the most (27% of global R&D in 2019), followed by China (22%), Japan (7%), Germany (6%), and South Korea (4%). Other top-performing countries—for example, France, India, and the United Kingdom—each account for about 2% to 3%. Countries and companies hope their R&D investments yield the critical technologies needed to develop new products, services, and business models.

Responding to my question on the difference between the public and private sectors, Mark Maybury opines that while the purposes differ, the objectives are the same. At MITRE, the not-for-profit corporation where Mark spent 27 years, the purpose was the public good (for example, enhancing safety and security and saving lives), while in private corporations it is profit. He shares his experience of being directed to commercialize innovations: they licensed a whole raft of technologies and spun off five companies, one of which became a unicorn, a billion-dollar-plus cybersecurity company.

The complementary relationship between public and private science is complex and dynamic. Many studies have identified a complementary association between public and private R&D, suggesting that public or government-financed R&D (subsidies, tax credits or grants, university R&D) stimulates R&D in the private sector (Coccia, 2010). Arora et al. (2018) document large corporations' shift away from science between 1980 and 2006. Ashish Arora and others continue to investigate these complex and evolving relationships between public and private investments in R&D, innovation, and competitiveness, and their impact on productivity and economic growth. They note that the private value derived from a public good such as scientific knowledge changes as the supply of that good expands, and that how this private value differs across firms is an important but understudied topic.

—While ChatGPT has given us a peek into the future of Chatbots or conversational AI, and the conversations around the power and pitfalls of AI continue to dominate, applications of AI also continue to expand exponentially. —

What is Commercialization?

Put simply, commercialization is about putting knowledge and technology to use. It is sometimes confused or equated with sales, marketing, and business development, and is often used pejoratively. On the contrary, commercialization is the process of introducing new products, services, or production processes into the market, especially mass markets. It marks the move from the laboratory to the market—the final stage of the innovation life cycle. Most technologies begin in R&D laboratories before moving to new product development and market launch, along with the attendant advertising, marketing, and sales processes. R&D laboratories, whether in universities or corporations, strive to make a positive impact through new knowledge, and the capability to move research from the lab to the marketplace is critical to that impact. Thus, commercialization is the process of putting new knowledge to use by placing new products and services in the hands of the masses, and most public and private organizations strive to build commercialization capabilities.

Mark opines that universities are at the core of research in STEM (science, technology, engineering, and math). The critical challenge is to translate that research into valuable innovations in industry. Therefore, it is essential to establish policies that help accelerate commercialization and carry research into industry. He says it is all about creating the ecosystem—academia, industry, and government policies that catalyze collaboration among universities, nonprofits, and small and medium enterprises. These are the ways to bring innovations to market.

Routes of AI Commercialization

Acknowledging the transformational potential of AI to increase productivity and create long-term economic growth, the UK government commissioned a study to understand how the commercialization of AI R&D can be supported. First, the study identifies the most prevalent ways, or "routes," by which AI R&D is commercialized in the UK: university spinouts; startups; large firms that commercialize AI R&D; and direct hires and joint tenure arrangements. It then develops a taxonomy of these routes: direct commercialization, knowledge exchange, and formal and de facto IP and standards. Finally, the report explores the main enablers, barriers, and challenges for AI commercialization through these specific routes and across the whole commercialization process. One key finding is that the commercialization of AI R&D depends on the availability of sector-specific data. The UK Biobank and NHS genomics datasets are good examples, and the digitization of existing data can also make AI commercialization possible.

As Sandeep Pandya observes while outlining the challenges of AI commercialization from a provider's perspective, it is all about data. AI systems are fed solely on data, meaning that you need significant amounts of data, consistently, to operate at full capacity. It is an endless feedback loop of data collection, building AI models, measuring, and learning. While it is exciting to think of the opportunities AI brings to so many different industries as we enter this era of AI commercialization, understanding the unique challenges that come with it is equally essential, Pandya says.
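
That feedback loop can be sketched schematically. In the Python sketch below, the function names, the accuracy metric, and the 0.9 target are hypothetical placeholders rather than any vendor's actual pipeline; the point is the collect, train, measure, and learn cycle.

    from dataclasses import dataclass

    @dataclass
    class ModelMetrics:
        accuracy: float   # stand-in for whatever business metric the provider tracks

    def commercialization_loop(collect_data, train_model, evaluate, deploy,
                               target_accuracy=0.9, max_rounds=10):
        """Collect -> train -> measure -> learn, repeated until the model is good enough to ship.
        The caller supplies the four callables: collect_data() -> list of samples,
        train_model(dataset) -> model, evaluate(model) -> ModelMetrics, deploy(model) -> None."""
        dataset = []
        for round_number in range(max_rounds):
            dataset += collect_data()          # fresh, sector-specific data each round
            model = train_model(dataset)       # rebuild or fine-tune on everything gathered so far
            metrics = evaluate(model)          # measure against a held-out benchmark
            print(f"round {round_number}: accuracy={metrics.accuracy:.2f} on {len(dataset)} samples")
            if metrics.accuracy >= target_accuracy:
                deploy(model)                  # good enough to put in customers' hands
                return model
        return None                            # the bar was never met: keep collecting data

Each pass around the loop demands more sector-specific data, which is exactly why data availability dominates the commercialization challenges discussed above.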

Categories of AI: Weak, Strong, and Super Intelligence

Artificial intelligence is commonly categorized into three kinds or levels: Artificial Narrow Intelligence (ANI), or weak AI; Artificial General Intelligence (AGI), or strong AI; and Artificial Super Intelligence (ASI). ANI is designed to perform a single task, and any knowledge gained from completing that task will not automatically be applied to other tasks. AGI is when machines can mimic human intelligence and behavior, with the ability to learn and apply that intelligence to solve any problem. AGI can think, understand, and act in a way that is indistinguishable from a human in any given situation. While AGI seeks to mimic complex thought processes, narrow AI is designed to complete a single task without human assistance. AGI systems are expected to be able to reason, solve problems, make judgments under uncertainty, plan, learn, and integrate prior knowledge into decision-making; they are expected to be innovative, imaginative, and creative. Most researchers and experts concur that we are currently still in the ANI stage. Examples of ANI include chatbots, autonomous vehicles, Siri and Alexa, and recommendation engines. Self-driving car technology is also considered ANI, or a coordination of several narrow AIs. The current rage, ChatGPT, is also considered ANI, albeit a powerful and fanciful one.

Touted as the future of AI, Artificial Super Intelligence (ASI) is expected to perform extraordinarily and surpass humans at everything: arts, decision-making, and emotional relationships. In 1998, philosopher Nick Bostrom of the University of Oxford defined "superintelligence" as an intellect much smarter than the best human brains in practically every field, including scientific creativity, general wisdom, and social skills. In his more recent book, Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller, Bostrom, known for his work on existential risk, lucidly examines this profoundly important existential question, taking us through a fascinating terrain of topics and contemplations about the human condition, the future of intelligent life, and a reconceptualization of the essential task of our time. In his 2019 book Novacene: The Coming Age of Hyperintelligence, James Lovelock, the originator of the Gaia theory, argues that the Anthropocene, the age in which humans acquired planetary-scale technologies, has come to an end with the emergence of a new age, the Novacene. In the Novacene, new beings will emerge from existing artificial intelligence systems, think 10,000 times faster than we do, and regard us as we now regard plants.

AI Applications in Diverse Industries

Fueled by AI research, commercialization has spawned several applications. Over the last six decades, AI research has experienced several ups and downs, including some chilly AI winters in the 1990s brought on by declines in funding. Today, AI is back in the limelight as it gets closer to mimicking humans, thanks to advances in deep learning neural networks with many hidden layers and the availability of big data in real time.

While ChatGPT has given us a peek into the future of chatbots, or conversational AI, and conversations around the power and pitfalls of AI continue to dominate, applications of AI also continue to expand exponentially. Different people categorize the areas of application differently. From a commercial perspective, AI is often grouped into the following areas: AI in healthcare; AI in education; AI in the financial services industry; AI in media and eCommerce; AI in robotics; and AI in agriculture. Each of these domains has been transformed by commercializing AI. While some applications, such as self-driving cars and robots in domains from manufacturing to surgery, have caught the attention and imagination of the public, many others have seeped into our lives, knowingly or unknowingly.

Robotics and AI in Manufacturing

Today, industrialization is synonymous with the adoption of robotics and AI. AI is at the heart of industries' increased productivity and enhanced performance and of the evolution of Industry 4.0. AI and robotics are considered essential for manufacturing because they help in diverse ways, including automatic control, damage control and quick maintenance, and demand-based production. Major companies such as Bosch, Intel, Microsoft, General Electric, and Siemens use these technologies in different ways. For example, the Bosch Centre for Artificial Intelligence, established in 2017, is working on six areas built on core AI technology: AI-based dynamics modeling; rich, explainable deep learning; large-scale AI and deep learning; environment understanding and decision-making; control optimization through reinforcement learning; and dynamic multi-agent planning.

Mark Maybury believes that, contrary to the conventional view that robots rob jobs, they increase the demand for people. Quoting McKinsey's 2017 report, which found that technology has driven large employment and sector shifts but also creates new jobs, Mark referred to data showing that personal computers created more than 19.2 million jobs while destroying about 3.5 million, for a net increase of roughly 15.8 million jobs. He also cites the Centre for Economic Policy Research report (2019), whose data show that firms that adopted robots between 1990 and 1998 ("robot adopters") increased the number of jobs by more than 50% between 1998 and 2016, while firms that did not adopt robots ("non-adopters") reduced the number of jobs by more than 20% over the same period. Graetz and Michaels (2018) find that robot densification increases total factor productivity and wages and brings down output prices, and they find no significant relationship between the increased use of industrial robots and overall employment.

According to Mark, the projections for 2028 are that we will probably have about two and a half million unfilled jobs in manufacturing. That is about a $2.5 trillion loss of economic value. So there is a huge risk because we do not have enough people. What is the answer? The answer is automation. The answer is AI. "I want robotics not because I want to replace all the people, but because I want to upskill the people," says Mark.

AI in Healthcare

The enormous datasets in healthcare environments are being put to work, with AI and machine learning methods helping to improve healthcare. Along with the rise in data availability (from patient records to genome sequencing), the potential of AI in healthcare and the life sciences has been growing through R&D in academia and industry. Several types of AI are already being deployed, from diagnosis and treatment recommendations to surgical assistance. Three critical areas of AI application are AI-led drug discovery, clinical trials, and patient care. Medical AI companies develop systems that assist patients at every level, and patients' medical data is analyzed to offer insights that help improve quality of life. Stevens (2022) traces the trajectory of expert systems, beginning with a collaboration between Edward Feigenbaum and the geneticist Joshua Lederberg, Nobel Laureate in Medicine, through which AI became deeply connected to the life sciences; the computer systems and software that Feigenbaum's lab helped develop played an essential role in establishing the possibility of this kind of work. As Cindy Gordon says, AI in healthcare is making our world healthier by modernizing risk stratification, medication adherence (behavior) analytics and predictions, disease propensity predictive analytics, and intelligent medicine dispensing, among others, including surgical robots and cobots.

Mark Maybury, SBD's first Chief Technology Officer, discusses the humongous potential of commercializing AI in healthcare. SBD built an extensive ecosystem pipeline and made corporate venture capital investments in companies that could potentially realize new, innovative solutions. He cites the example of Foresite, whose patient-care and eldercare platform combines sensors with a variety of inputs, including depth-sensor technology, under-mattress pads, and motion detectors, to continually capture a range of information such as respiratory rate, bed restlessness, gait, motion, and activity. For example, based on gait analysis, the platform can predict whether someone is likely to trip or fall within a given period. "While preserving privacy, we want to encourage people to live safely at home," said Mark.
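
To give a flavor of how such sensor streams can be turned into a risk estimate, here is a toy sketch that fits a logistic-regression fall-risk classifier on synthetic gait and restlessness features. The features, data, and relationships are invented for illustration; Foresite's actual sensors and models are proprietary and certainly more sophisticated.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Synthetic features per resident: [walking speed (m/s), stride-time variability, bed restlessness index]
    rng = np.random.default_rng(42)
    n = 200
    X = np.column_stack([
        rng.normal(1.0, 0.3, n),    # walking speed
        rng.normal(0.05, 0.03, n),  # stride-time variability
        rng.normal(0.5, 0.2, n),    # bed restlessness
    ])

    # Invented ground truth for the demo: risk rises as speed drops and variability/restlessness rise
    risk = -2.0 * X[:, 0] + 20.0 * X[:, 1] + 1.0 * X[:, 2]
    y = (risk + rng.normal(0, 0.3, n) > np.median(risk)).astype(int)

    model = LogisticRegression().fit(X, y)
    new_resident = np.array([[0.7, 0.09, 0.8]])   # slow, variable gait and restless nights
    print("Estimated fall-risk probability:", round(model.predict_proba(new_resident)[0, 1], 2))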

Extreme/Radical Innovation

In her book titled Extreme Innovation: 3 Superpowers for Purpose and Profit, Sandy Carter identifies "speed, intelligence, and synergy" as the three best practices for extreme innovation to drive profit and purpose.

Mark defines extreme innovation as innovation of everything, innovation everywhere, and innovation by everyone.

In this episode of InfoFire, he elaborates further, saying that radical innovation is something first of its kind, something the world has never seen. Quoting Arthur C. Clarke's famous line, "Any sufficiently advanced technology is indistinguishable from magic," Mark pronounces that it should be magical; usually, it is patentable as well. Mark, who has a master's degree in computer speech and language processing and a doctorate in artificial intelligence from Cambridge University, talks about an invention for analyzing news that he and his team at MITRE, working with the US intelligence community, developed and patented in the mid-1990s: "Doing sentiment analysis of news (both newspapers and the Internet), we invented a first-of-a-kind technology that is yet to be commercialized. That is how advanced it was. So we created a commercial company for law enforcement and intelligence called Pixel Forensics, which uses AI to analyze speech and visual images."
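
By way of illustration only, the tiny lexicon-based scorer below shows the general idea behind scoring news text for sentiment. It is a generic sketch with an invented word list, not the patented MITRE/Pixel Forensics technology, whose details are not public.

    # A deliberately simple lexicon-based sentiment scorer for news headlines.
    POSITIVE = {"growth", "peace", "breakthrough", "recovery", "agreement", "wins"}
    NEGATIVE = {"crisis", "attack", "decline", "fraud", "conflict", "losses"}

    def headline_sentiment(headline: str) -> float:
        """Return a score in [-1, 1]: +1 if all cue words are positive, -1 if all are negative."""
        words = [w.strip(".,!?").lower() for w in headline.split()]
        positive_hits = sum(w in POSITIVE for w in words)
        negative_hits = sum(w in NEGATIVE for w in words)
        if positive_hits + negative_hits == 0:
            return 0.0                         # no cue words: treat the headline as neutral
        return (positive_hits - negative_hits) / (positive_hits + negative_hits)

    print(headline_sentiment("Trade agreement fuels growth hopes"))        # 1.0
    print(headline_sentiment("Banking crisis deepens amid fraud probe"))   # -1.0

Real systems go far beyond word counting, handling negation, context, speech, and imagery, but the basic notion of turning open-source news into a quantitative signal is the same.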

Finally, Mark Maybury shares his recent experience with extreme innovation at SBD, when he was challenged to develop technology to help deal with the COVID-19 virus. SBD collaborated with Ford and 3M to design a new powered air-purifying respirator (PAPR). This portable respirator includes a hood, a face shield, and a high-efficiency (HEPA) filter system that provides clean air for up to eight hours.

Commercializing AI: From Homo Sapiens to Homo Deus?

The roots of AI may be traced back to Aristotle and to other philosophers such as René Descartes, whose separation of the body from the mind forms the basis of the methodology of AI: mental processes have an independent existence and follow their own laws (Ray Barua, 2019). However, the origin of the modern idea of AI is traced to the 1950s, when scientists focused on machine translation and logical reasoning. AI was founded as a field in 1956 by John McCarthy, later of Stanford University, who organized the famous Dartmouth conference and believed that computer systems would evolve intelligence of human order. Expert systems were the earliest efforts to commercialize AI, and the emergence of ChatGPT demonstrates that such efforts continue to produce the field's best exemplars.

Today, AI is all about data: data collection and aggregation, algorithms, processing power, pattern recognition, machine learning, deep learning methods, models, and modeling. AI is at the threshold of going mainstream as a new growth engine for business and society, powerful enough, some argue, to address most of the challenges humanity confronts today—from poverty alleviation to world peace—and to carry us from Homo sapiens toward Homo Deus. In his best-selling book Homo Deus, Yuval Noah Harari, author of the bestselling Sapiens: A Brief History of Humankind, envisions a not-too-distant world in which we face a new set of challenges. Homo Deus explores the ideas, dreams, and dread shaping the twenty-first century—from overcoming death to creating artificial intelligence. Harari asks the fundamental questions: Where do we go from here? And how will we protect this fragile world from our own destructive powers? This, he says, is the next stage of evolution: Homo Deus.

Mark prefers the term Augmented Intelligence to Artificial Intelligence and believes that as scientists, we have a responsibility to be ethical and responsible, and it is about augmenting human intelligence with AI for a safe, secure, and better world.

References

Arora, A., Belenzon, S., & Patacconi, A. (2018). The decline of science in corporate R&D. Strategic Management Journal, 39(1), 3-32.

Arora, A., Belenzon, S., Kosenko, K., Suh, J., & Yafeh, Y. (2021). The rise of scientific research in corporate America (Working Paper No. w29260). National Bureau of Economic Research.

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021, March). On the dangers of stochastic parrots: Can language models be too big? 🦜. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies (1st ed.). Oxford University Press.

Carter, S. (2017). Extreme Innovation: 3 Superpowers for Purpose and Profit. Param Media.

Coccia, M. (2010). Public and private R&D investments as complementary inputs for productivity growth. International Journal of Technology, Policy and Management, 10(1-2), 73-91.

Graetz, G., & Michaels, G. (2018). Robots at work. Review of Economics and Statistics, 100(5), 753-768.

Harari, Y. N., Purcell, J., & Watzman, H. (2018). Sapiens: A brief history of humankind (First Harper Perennial ed.). Harper Perennial.

Harari, Y. N. (2016). Homo Deus: A brief history of tomorrow. Harvill Secker.

Hasan, I., & Tucci, C. L. (2010). The innovation–economic growth nexus: Global evidence. Research Policy, 39(10), 1264-1276.

Kissinger, H. A., Schmidt, E., & Huttenlocher, D. (2021). The age of AI: and our human future. Hachette UK.

Lovelock, J., & Appleyard, B. (2019). Novacene: The coming age of hyperintelligence. Allen Lane, an imprint of Penguin Books.

Ray Barua, S. (2019). A Strategic Perspective on the Commercialization of Artificial Intelligence: A socio-technical analysis (Doctoral dissertation, Massachusetts Institute of Technology).

Rehman, N. U., Hysa, E., & Mao, X. (2020). Does public R&D complement or crowd-out private R&D in the pre and post-economic crisis of 2008? Journal of Applied Economics, 23(1), 349-371.

Stevens, H. (2022). The business machine in biology—The commercialization of AI in the life sciences. IEEE Annals of the History of Computing, 44(1), 8-19.

Urs, S. (2022, May 4). The power and the pitfalls of large language models: A fireside chat with Ricardo Baeza-Yates. Information Matters, Vol. 2, Issue 5.

Cite this article in APA as: Urs, S. (2023, February 7). Commercializing AI: A fireside chat with Mark Maybury. Information Matters, Vol. 3, Issue 2. https://informationmatters.org/2023/02/commercializing-ai-a-fireside-chat-with-mark-maybury/

Shalini Urs

Dr. Shalini Urs is an information scientist with a 360-degree view of information and has researched issues ranging from the theoretical foundations of information sciences to Informatics. She is an institution builder whose brainchild is the MYRA School of Business (www.myra.ac.in), founded in 2012. She also founded the International School of Information Management (www.isim.ac.in), the first Information School in India, as an autonomous constituent unit of the University of Mysore in 2005 with grants from the Ford Foundation and Informatics India Limited. She is currently involved with Gooru India Foundation as a Board member (https://gooru.org/about/team) and is actively involved in implementing Gooru’s Learning Navigator platform across schools. She is professor emerita at the Department of Library and Information Science of the University of Mysore, India. She conceptualized and developed the Vidyanidhi Digital Library and eScholarship portal in 2000 with funding from the Government of India, which became a national initiative with further funding from the Ford Foundation in 2002.