How Artificial Intelligence Is Transforming the World

May 14, 2018

Most people are not very familiar with the concept of artificial intelligence (AI). As an example, when 1,500 senior executives at U.S. companies were asked about artificial intelligence in 2017, only 17% said they were familiar with it. Some of them didn't know what it was or how it would impact their specific companies. They understood that there was significant potential for business process change, but didn't know how AI could be implemented in their own organizations.

Despite this lack of familiarity, AI is a technology that is transforming every area of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to make better decisions. Through this comprehensive overview, we hope to explain AI to an audience of policymakers, opinion leaders, and interested observers, and to demonstrate how AI is already changing the world and raising important issues for society, economics, and governance.

In this article, we review emerging applications in finance, homeland security, healthcare, criminal justice, transportation, and smart cities, and address issues such as data access challenges, algorithm bias, AI ethics and transparency, and legal liability for AI decisions. We compare regulatory approaches in the United States and the European Union and conclude with a series of recommendations for maximizing the benefits of AI while protecting important human values.

To maximize the benefits of AI, we recommend nine steps to move forward:

  • Improve researchers' access to data without compromising users' privacy;
  • Invest more public funds in unclassified AI research;
  • Promote new models of digital education and AI workforce training so that workers have the skills needed in the 21st-century economy;
  • Create a federal AI advisory committee to make policy recommendations;
  • Engage with state and local officials to ensure they adopt effective policies;
  • Regulate broad AI principles rather than specific algorithms;
  • Take complaints of bias seriously so that AI does not reproduce historical injustice, unfairness, or discrimination in data or algorithms;
  • Maintain human oversight and control mechanisms; and
  • Penalize malicious AI behavior and promote cybersecurity.

Properties of Artificial Intelligence

Although there is no single agreed-upon definition, AI is generally understood to mean "machines that respond to stimulation in a manner consistent with traditional human responses, given the human capacity for thought, judgment, and intention." According to researchers Shubhendu and Vijay, these software systems "make decisions that would normally require human expertise" and help people anticipate problems or deal with them as they arise. In this way, they operate intentionally, intelligently, and adaptively.

Intentionality

Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are not like passive machines, capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of sources, analyze the material instantly, and act on the conclusions drawn from that data. With significant improvements in data storage, processing speed, and analysis techniques, they are capable of incredibly sophisticated analysis and decision-making.

Artificial Intelligence is already changing the world and posing important questions for society, economics and governance.

Intelligence

AI is typically used in conjunction with machine learning and data analytics. Machine learning uses data and looks for underlying trends. If it discovers something that is relevant to a practical problem, software developers can take that knowledge and use it to analyze specific questions. To do this, all it needs is enough reliable data to allow algorithms to identify useful patterns. Data can be in the form of digital information, satellite imagery, visual information, text, or unstructured data.
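To make that pattern-finding step concrete, here is a minimal sketch, with invented data, of an algorithm learning per-class "centroids" from labeled examples and then classifying a new data point by proximity; it is an illustration of the idea, not any production system:

```python
# Minimal sketch of machine learning as pattern-finding: learn per-class
# averages from labeled examples, then classify new points by proximity.
# All data here is invented purely for illustration.

def fit_centroids(examples):
    """examples: list of (feature_vector, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Return the label whose learned centroid is nearest."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist2(centroids[label]))

# Labeled training data: (features, label)
train = [([1.0, 2.0], "low"), ([1.2, 1.8], "low"),
         ([8.0, 9.0], "high"), ([7.5, 9.5], "high")]
model = fit_centroids(train)
print(classify(model, [7.9, 8.8]))  # -> "high"
```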

Adaptability

Artificial intelligence systems are capable of learning and adapting when making decisions. For example, in the transportation field, semi-autonomous cars are equipped with tools that allow drivers and vehicles to learn about upcoming congestion, potholes, highway construction, and other possible obstacles on the road. The vehicles can utilize the experience of other vehicles on the road without human intervention, and all of their accumulated "experience" can be immediately and completely transferred to other vehicles with similar configurations. Advanced algorithms, sensors and cameras take into account the experience of current operations, utilizing dashboards and visual displays to present real-time information so that the driver can make sense of the current traffic situation. And in the case of fully autonomous vehicles, advanced systems can fully control the car or truck and make all navigation decisions.

AI is not a futuristic vision, but something that already exists today and is being integrated and deployed in various industries. This includes areas such as finance, homeland security, healthcare, criminal justice, transportation and smart cities. There are many examples of how AI is already impacting the world and significantly empowering humans.

One reason for the growing role of artificial intelligence is the enormous opportunity it offers for economic development. PricewaterhouseCoopers estimates that "artificial intelligence technologies could increase global GDP by $15.7 trillion by 2030, an increase of as much as 14%." The projected gains include $7.0 trillion in China, $3.7 trillion in North America, $1.8 trillion in Northern Europe, $1.2 trillion in Africa and Oceania, $0.9 trillion in the rest of Asia outside of China, $0.7 trillion in Southern Europe, and $0.5 trillion in Latin America. China is making rapid strides, having set a national goal of investing $150 billion in AI and becoming a global leader in the field by 2030.

At the same time, a McKinsey Global Institute study conducted in China found that "AI-driven automation could give the Chinese economy a productivity boost that, depending on the speed of implementation, would increase GDP growth by 0.8-1.4 percentage points per year." Although, according to the study's authors, China currently lags behind the US and the UK in AI adoption, the sheer size of its AI market gives the country a huge opportunity for pilot testing and further development.

Finance

Investment in financial AI in the US tripled between 2013 and 2014 to $12.2 billion. According to observers in the sector, "loan decisions are now being made by software that can take into account a host of finely analyzed data about a borrower, not just credit scores and background checks." In addition, so-called robo-advisors "create personalized investment portfolios, eliminating the need for stockbrokers and financial advisors." These tools are designed to take emotion out of investing, base decisions on analytical considerations, and make those choices in a matter of minutes.

A prime example is stock exchanges, where high-frequency trading by machines has largely replaced human decision-making. People submit buy and sell orders, and computers execute them in the blink of an eye without human intervention. Machines can detect trading inefficiencies or market differences at a very small scale and execute profitable trades as instructed by investors. In some cases, these advanced computing tools have much greater storage capacity because they use "quantum bits," which can hold multiple values at each location rather than just a zero or a one. This greatly increases memory capacity and reduces data processing time.

Another application of AI in financial systems is fraud detection. In large organizations, it is sometimes difficult to recognize fraudulent activity, but AI can identify anomalies, outliers, or cases that require further investigation. This helps managers identify problems early before they reach dangerous levels.
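As a rough illustration of that anomaly-flagging idea (not any bank's actual system), the sketch below flags transactions that deviate sharply from an account's typical pattern using a robust median-based outlier test; the data and threshold are invented:

```python
# Sketch of anomaly detection for fraud screening: flag transactions that
# deviate sharply from an account's history. The median absolute deviation
# (MAD) is used so one huge outlier cannot mask itself by inflating the
# scale. Data and the 3.5x threshold are illustrative assumptions.
import statistics

def flag_outliers(amounts, threshold=3.5):
    """Return values far from the median, measured in MAD units."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    if mad == 0:
        return []
    return [a for a in amounts if abs(a - med) / mad > threshold]

history = [42.0, 38.5, 51.0, 45.2, 40.1, 39.9, 47.3, 44.0, 2500.0]
print(flag_outliers(history))  # -> [2500.0], a case worth investigating
```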

National security

AI plays a significant role in national defense. Under Project Maven, the U.S. military is using AI "to sift through vast amounts of surveillance data and video and then alert human analysts to patterns or anomalous or suspicious activity." According to Deputy Secretary of Defense Patrick Shanahan, the goal of new technologies in this area is to "meet the needs of our warfighters and increase the speed and flexibility of technology development and procurement."

Artificial intelligence will so accelerate the traditional process of warfare that a new term has emerged: hyperwar.

AI-related big data analytics will have a profound impact on intelligence analysis, as massive amounts of data are sifted through in near real time, if not real time, providing commanders and their staffs with a level of intelligence analysis and productivity never seen before. Command and control will be similarly affected as commanders delegate certain routine decisions, and in special circumstances key decisions, to AI platforms, significantly shortening the time for decision and follow-on action. Ultimately, warfare is a contest of time: it is usually won by the side that can decide faster and move to execution faster. Indeed, AI-based command and control systems could move the decision-support process to speeds far beyond those of traditional means of waging war. This process will be so fast, especially when coupled with automated decisions to launch artificially intelligent autonomous weapons systems capable of lethal outcomes, that a new term has been coined specifically to describe the speed of warfare: hyperwar.

While there is an ethical and legal debate in America about whether America will ever wage war with artificially intelligent autonomous kill systems, the Chinese and Russians are not as mired in this debate, and we should anticipate the need to defend against these systems operating at hyperwar speeds. The Western problem of where to position "people in the loop" in a hyperwar scenario will ultimately determine the West's ability to be competitive in this new form of conflict.

Just as artificial intelligence will have a huge impact on the speed of warfare, the proliferation of zero-day or zero-second cyber threats and polymorphic malware will challenge even the most advanced signature-based cyber defenses. This is forcing significant improvements to existing cyber defense systems. As vulnerable systems increasingly migrate, there will be a need to shift to a layered approach to cybersecurity using cloud-based platforms with cognitive artificial intelligence. This approach enables a "thinking" defense system capable of protecting networks by continuously learning from known threats. It includes DNA-level analysis of previously unknown code, with the ability to recognize and stop incoming malicious code by recognizing a string component of the file. This is how some key U.S. systems stopped the devastating "WannaCry" and "Petya" viruses.
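As a toy contrast between plain signature matching and the string-level analysis described above (all byte patterns and strings here are invented for illustration):

```python
# Toy contrast between signature-based and string-based malware screening.
# Signatures and suspicious byte patterns are invented placeholders.
KNOWN_SIGNATURES = {b"\xde\xad\xbe\xef"}  # byte patterns of known malware
SUSPICIOUS_STRINGS = [b"encrypt_all_files", b"send_to_c2"]  # code traits

def screen(file_bytes: bytes) -> str:
    # Signature matching only catches malware already seen before.
    if any(sig in file_bytes for sig in KNOWN_SIGNATURES):
        return "block: matches known-malware signature"
    # String-level analysis can catch previously unseen code by the traits
    # it carries, which is what the layered approach above aims to do.
    if any(s in file_bytes for s in SUSPICIOUS_STRINGS):
        return "block: suspicious code strings"
    return "allow"

print(screen(b"\x00\x01encrypt_all_files\x02"))  # blocked: unseen code, known trait
```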

Preparing for hyperwar and protecting critical cyber networks should be a priority as China, Russia, North Korea, and others invest significant resources in AI. In 2017, China's State Council published a plan to "create a domestic industry worth nearly $150 billion" by 2030. As an example of the possibilities, the Chinese search company Baidu has pioneered a facial recognition app to find missing people. Cities such as Shenzhen are also committing up to $1 million to support AI labs. In China, the hope is that AI will help ensure security, combat terrorism, and improve speech recognition programs. The dual-use nature of many AI algorithms means that research focused on one sector of society can be quickly modified for security applications as well.

Health care

AI tools are bringing greater computational sophistication to health care. For example, Merantix is a German company that applies deep learning to medical problems. It has developed a medical imaging application that "detects lymph nodes in the human body in computed tomography (CT) images." According to the developers, the key is to label the nodes and identify small lesions or growths that could be problematic. Humans can do this, but radiologists charge about $100 per hour and can scrutinize only four scans in that time. At that rate, 10,000 scans would cost $250,000, which is prohibitively expensive when performed by people.

In this situation, deep learning allows a computer to be trained on data sets to learn what a normal-looking lymph node is and what an irregularly shaped lymph node is. After these imaging exercises, and once labeling accuracy has been honed, radiology imaging specialists can apply this knowledge to real patients and determine the risk of cancerous lymph node involvement. Since only a few nodes actually turn out to be positive, the task is to distinguish an unhealthy node from a healthy one.
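As a minimal sketch of the workflow just described (this is not Merantix's actual pipeline; the network size, patch dimensions, and stand-in data are invented), a binary image classifier can be trained on labeled scan patches like this:

```python
# Minimal sketch: train a tiny CNN to classify labeled scan patches as
# "normal" vs. "irregular" node. Dataset, dimensions, and architecture are
# illustrative assumptions, not a real medical-imaging pipeline.
import torch
import torch.nn as nn

class NodeClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale patch
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Flatten(),
            nn.Linear(16 * 32 * 32, 2),                   # two classes
        )

    def forward(self, x):
        return self.net(x)

model = NodeClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a labeled training set of 64x64 CT patches.
images = torch.randn(8, 1, 64, 64)
labels = torch.randint(0, 2, (8,))  # 0 = normal, 1 = irregular

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```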

AI is also being used to treat congestive heart failure, a condition that affects 10% of the elderly and costs the US $35 billion annually. AI tools are useful because they "predict potential problems in advance and allocate resources for patient education, probing and proactive interventions to prevent hospitalization."

Criminal justice

AI is making inroads into the criminal justice field. The City of Chicago has developed an AI-driven "Strategic Target List" that analyzes arrestees for their risk of becoming future criminals. It rates 400,000 people on a scale of 0 to 500, using parameters such as age, criminal activity, victimization, drug arrest records and gang affiliation. After examining the data, analysts found that youth was a strong predictor of violence, being a victim of a shooting was associated with the likelihood of becoming a future criminal, gang affiliation had little predictive value, and drug arrests had no significant association with future criminal activity.
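Purely to illustrate how a 0-to-500 score of this general kind could be assembled (this is not Chicago's actual model; the factors and weights are invented, loosely echoing the findings above):

```python
# Hypothetical illustration of a 0-500 risk score built from weighted
# factors. Weights and inputs are invented; not Chicago's actual model.
import math

WEIGHTS = {
    "age_under_25": 1.5,     # youth was found to be a strong predictor
    "prior_arrests": 0.4,
    "shooting_victim": 1.2,  # victimization was associated with risk
    "gang_affiliation": 0.1, # little predictive value per the analysts
    "drug_arrests": 0.0,     # no significant association was found
}

def risk_score(person: dict) -> int:
    z = sum(WEIGHTS[k] * person.get(k, 0) for k in WEIGHTS)
    probability = 1 / (1 + math.exp(-z))  # squash to the 0..1 range
    return round(probability * 500)       # rescale to the 0-500 scale

print(risk_score({"age_under_25": 1, "prior_arrests": 3, "shooting_victim": 1}))
```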

Judicial experts say artificial intelligence programs reduce human bias in law enforcement and lead to a fairer sentencing system. R Street Institute contributor Caleb Watney writes:

Empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning, and other forms of AI. Policy simulations using machine learning have concluded that such programs can be used to reduce crime by 24.8% without changing the prison population, or to reduce the prison population by 42% without increasing crime.

However, critics fear that AI algorithms represent "a secret system to punish citizens for crimes they have not yet committed. Risk assessments have been used repeatedly to conduct large-scale roundups." The fear is that such tools unfairly target people of color and have not helped Chicago reduce the wave of murders that has plagued it in recent years.

Despite these concerns, other countries are making rapid progress in this area. In China, for example, companies already have "significant resources and access to voices, faces and other biometric data in vast quantities that will help them develop their technology." New technologies can match images and voices with other types of information and use artificial intelligence based on these combined data sets to improve law enforcement and national security. Under the Sharp Eyes program, law enforcement agencies in China are matching video images, social media activity, online purchases, travel records, and personal data in a "police cloud." This integrated database allows authorities to track criminals, potential lawbreakers and terrorists. In other words, China has become the world's leading surveillance state using artificial intelligence.

Transportation

Transportation is an area where AI and machine learning are driving significant innovation. A study by Cameron Kerry and Jack Karsten of the Brookings Institution found that more than $80 billion was invested in autonomous vehicle technologies between August 2014 and June 2017. Those investments include both autonomous driving applications and the core technologies required for this sector.

Autonomous vehicles - cars, trucks, buses and unmanned delivery systems - utilize advanced technological capabilities. These include automatic steering and braking, lane change systems, the use of cameras and sensors to avoid collisions, the use of artificial intelligence to analyze information in real time, and the use of high-performance computing and deep learning systems to adapt to new conditions using detailed maps.

Light detection and ranging (lidar) systems and artificial intelligence play a key role in navigation and collision avoidance. Lidar systems combine light and radar instruments. Mounted on top of vehicles, they use 360-degree imaging from radar and light beams to measure the speed and distance of surrounding objects. Together with sensors on the front, sides, and rear of the vehicle, these devices provide information that keeps fast-moving cars and trucks in their lane, helps them avoid other vehicles, and applies the brakes and steering when needed, all instantly, to avoid a crash.
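The core distance measurement behind lidar is simple time-of-flight arithmetic: a pulse travels to an object and back at the speed of light, so distance = c·t/2. A minimal sketch:

```python
# Time-of-flight distance: a lidar pulse travels out and back at light
# speed, so distance = (speed_of_light * round_trip_time) / 2.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_distance(round_trip_seconds: float) -> float:
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse returning after 200 nanoseconds means the object is ~30 m away.
print(f"{lidar_distance(200e-9):.1f} m")  # -> 30.0 m
```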

Because these cameras and sensors collect a huge amount of information and must process it instantly to avoid a car in a neighboring lane, autonomous vehicles require high-performance computing, advanced algorithms and deep learning systems to adapt to new scenarios. This means that software, rather than the physical car or truck itself, plays a key role. Advanced software allows vehicles to learn from the experiences of other vehicles on the road and adjust their control systems based on weather, road or other conditions.

Ride-hailing companies are showing great interest in autonomous vehicles. They see benefits in customer service and productivity. All major ride-sharing companies are exploring the use of unmanned vehicles. The rise of ride-sharing and cab companies such as Uber and Lyft in the US, Daimler's Mytaxi and Hailo in the UK, and Didi Chuxing in China demonstrate the potential of this mode of transportation. Uber recently signed an agreement to purchase 24,000 autonomous cars from Volvo for its ride-sharing service.

However, in March 2018, the company suffered a setback when one of its autonomous vehicles struck and killed a pedestrian in Arizona. Uber and several automakers immediately suspended testing and launched investigations into what went wrong and how the fatal accident could have occurred. Both the industry and consumers want assurance that the technology is safe and can deliver on its stated promises. Unless conclusive answers are found, this accident could stall the development of AI in the transportation sector.

Smart cities

Metropolitan authorities are using AI to improve the delivery of city services. For example, according to Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson:

The new analytics system recommends to the dispatcher the appropriate response to an ambulance call - whether the patient can be treated locally or needs to be transported to a hospital - based on a number of factors such as the type of call, location, weather and the availability of similar calls.

With Cincinnati receiving 80,000 requests annually, authorities are using the technology to prioritize and determine how best to respond to emergencies. They see artificial intelligence as a way to work with large amounts of data and find effective ways to respond to public inquiries. Instead of solving service problems on an ad hoc basis, authorities are trying to be proactive in providing city services.

Cincinnati is not alone. A number of metropolitan areas are implementing smart city applications that use artificial intelligence to improve service quality, environmental planning, resource management, energy use, crime prevention, etc. Fast Company magazine ranked U.S. localities as part of its Smart Cities Index and found that Seattle, Boston, San Francisco, Washington, D.C. and New York were the most successful. Seattle, for example, has embraced sustainability and is using artificial intelligence to manage energy and resources. Boston has launched a "City Hall To Go" program to make sure low-income populations get the public services they need. It has also installed "cameras and inductive loops for traffic control and acoustic sensors to detect gunshots." San Francisco has 203 buildings certified to LEED sustainability standards.

Through these and other means, major cities are leading the country in implementing AI solutions. For example, according to a report from the National League of Cities, 66% of U.S. cities are investing in smart city technologies. Among the most common applications noted in the report are "smart meters for utilities, smart traffic lights, e-governance applications, Wi-Fi kiosks, and radio frequency identification sensors in pavement."

Policy, regulatory and ethical issues

These examples from different industries demonstrate how AI is transforming many areas of human life. The increasing penetration of AI and autonomous devices into many areas of life is transforming basic operations and decision-making in organizations, improving efficiency and response times.

At the same time, these developments raise important political, regulatory, and ethical questions. For example, how should we ensure access to data? How do we guard against biased or unfair data being used in algorithms? What ethical principles are built into software programming, and how transparent should developers be about their choices? And what about legal liability when algorithms cause harm?

The increasing penetration of artificial intelligence in many areas of life is changing the decision-making process in organizations and making them more efficient. At the same time, these developments raise important political, regulatory and ethical issues.

Data access issues

The key to getting the most out of AI is to have a "friendly data ecosystem with common standards and cross-platform sharing." AI depends on data that can be analyzed in real time and used to solve specific problems. Having data that is "available for scrutiny" in the research community is a prerequisite for successful AI development.

According to a study by the McKinsey Global Institute, countries that encourage open data sources and data sharing are the most likely to see advances in AI. In this regard, the US has a significant advantage over China: in global rankings of data openness, the US ranks eighth in the world, while China ranks 93rd.

However, there is currently no coherent national data strategy in the US. There are no protocols to ensure access to research, nor platforms to generate new knowledge from proprietary data. It is not always clear who owns the data or how much of it is in the public sphere. This uncertainty limits the development of the innovation economy and hinders academic research. In the next section, we look at ways to improve researchers' access to data.

Biases in data and algorithms

In some cases, artificial intelligence systems are believed to enable discriminatory or biased practices. For example, Airbnb has been accused of allowing homeowners on its platform to discriminate against racial minorities. A Harvard Business School study found that "Airbnb users with distinctly African-American names were about 16% less likely to be accepted as guests than users with distinctly white names."

Racial issues also arise when using facial recognition software. Most such systems work by comparing a person's face to a series of faces in a large database. As Joy Buolamwini of the Algorithmic Justice League points out, "If your facial recognition database contains mostly Caucasian individuals, that's who your program will be able to recognize." If databases do not have access to diverse data, these programs will perform poorly when trying to recognize African-American or Asian-American facial features.

Many historical datasets reflect traditional values, which may or may not reflect preferences desired in the current system. As Buolamwini notes, such an approach risks repeating the inequalities of the past:

The rise of automation and the increasing reliance on algorithms to make important decisions such as whether or not to get insurance, the likelihood of defaulting on a loan or the risk of reoffending means it's something we need to pay attention to. Even school enrollment decisions are increasingly automated - what school our children will go to and what opportunities they will get. We must not carry the structural inequalities of the past into the future we create.

Ethics and transparency in AI

Algorithms embed ethical considerations and value choices in software decisions. This raises questions about the criteria used in automated decision making. Some people want to better understand how algorithms function and what decisions are made.

In the United States, many urban schools use algorithms to make enrollment decisions based on a variety of considerations, such as parental preferences, neighborhood characteristics, income levels, and demographics. According to Brookings researcher Jon Valant, the New Orleans-based Bricolage Academy "prioritizes applicants from economically disadvantaged families for up to 33% of available seats. In practice, however, most cities choose categories that prioritize siblings of current students, children of school staff, and families living in the school's broad geographic area." Depending on how such considerations are weighted, the school a student is assigned to can vary considerably.

Depending on how artificial intelligence systems are structured, they can help screen mortgage applications, help people discriminate against individuals they dislike, or help compile rosters of people based on unfair criteria. The considerations that go into programming decisions matter greatly for how the systems operate and how they affect customers.

For these reasons, the General Data Protection Regulation (GDPR) is being implemented in the EU in May 2018. The rules state that people have "the right to opt out of personally tailored advertising" and "may challenge 'legal or similarly relevant' decisions made by algorithms and call for human intervention," in the form of an explanation of how the algorithm generated a particular result. These provisions are designed to protect personal data and give people information about how the "black box" operates.

Questions arise regarding the legal liability of artificial intelligence systems. In the event of injury or infringement (or death in the case of driverless cars), the operators of the algorithm are likely to be subject to product liability rules. Case law shows that the facts and circumstances of a situation determine liability and affect the type of penalties imposed. These can range from civil fines to imprisonment for major damages. An Uber-related fatality in Arizona will be an important test for determining legal liability. The state has actively engaged Uber in testing its autonomous vehicles and has given the company considerable leeway in conducting road tests. It is not yet clear whether lawsuits will be filed in the case and who will file them: the backup human driver, the state of Arizona, the Phoenix suburb where the accident occurred, Uber, the software developers, or the automaker. Given that many people and organizations were involved in the road test, there are many legal issues to be resolved.

In areas outside of transportation, digital platforms often bear limited liability for what happens on their sites. For example, Airbnb "requires people to agree to give up the right to sue, or to join class-action lawsuits or class-action arbitration, in order to use the service." By requiring its users to give up basic rights, the company limits consumer protections and thus people's ability to fight discrimination arising from unfair algorithms. How broadly this principle of limited platform liability extends across industries remains to be seen.

To balance innovation and basic human values, we offer a number of recommendations for advancing AI. These include increasing access to data, increasing public investment in AI, promoting AI workforce development, establishing a federal advisory committee, working with state and local governments to ensure that effective policies are adopted, regulating general goals rather than specific algorithms, taking bias seriously as an AI problem, preserving human control and oversight mechanisms, punishing malicious behavior, and promoting cybersecurity.

Improving data access

The United States must develop a data strategy that promotes innovation and consumer protection. Currently, there are no uniform standards for data access, data sharing and data protection. Almost all data is private and not shared with the broad research community, limiting innovation and systems development. AI needs data to test and improve its ability to learn. Without structured and unstructured datasets, it will be nearly impossible to get the full benefits of AI.

In general, the research community needs greater access to government and commercial data, but with appropriate safeguards to ensure that researchers do not misuse the data, as Cambridge Analytica did with Facebook information. Researchers can gain access to data in a number of ways. One of them is by entering into voluntary agreements with companies that hold sensitive data. For example, Facebook recently announced a partnership with Stanford economist Raj Chetty to use social media data to study inequality. As part of the agreement, researchers had to be vetted and only access data from secure sites to ensure user privacy and security.

The US lacks uniform standards for data access, data sharing, and data protection. Almost all data is proprietary and not made available to a wide range of researchers, which limits innovation and systems development.

Google has long provided aggregated search results for researchers and the general public. On the Trends site, researchers can analyze topics such as interest in Trump, views on democracy, and the economy in general. This helps track the movement of public interest and identify topics of interest to the general public.

Twitter makes most of its tweets available to researchers through application programming interfaces, commonly called APIs. These tools help people outside the company create application software and use data from its social network. They make it possible to study social media communication patterns and see how people comment or react to current events.
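In practice, consuming such an API amounts to authenticated HTTP requests. The sketch below uses a hypothetical endpoint, token, and response shape; Twitter's real API URLs and parameters differ and change over time:

```python
# Sketch of pulling public posts through a social-media API. The endpoint,
# token, and response fields are hypothetical placeholders, not the real
# Twitter API.
import requests

API_URL = "https://api.example.com/v1/search"  # hypothetical endpoint
TOKEN = "YOUR_ACCESS_TOKEN"                    # issued by the platform

def fetch_posts(query: str, limit: int = 100) -> list:
    response = requests.get(
        API_URL,
        params={"q": query, "limit": limit},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["posts"]            # hypothetical response shape

for post in fetch_posts("breaking news"):
    print(post["text"])
```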

In some industries where there is a clear public benefit, governments can facilitate collaboration by creating an infrastructure that enables data sharing. For example, the National Cancer Institute has pioneered a data-sharing protocol whereby certified researchers can query its existing medical data using de-identified information from clinical, claims, and drug information. This allows researchers to evaluate effectiveness and efficiency and make recommendations regarding optimal medical approaches without violating the privacy of individual patients.
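A minimal sketch of the de-identification step on which such a protocol depends: direct identifiers are dropped and record keys are replaced with salted one-way hashes, so researchers can link records without learning identities (the field names and salt handling are illustrative assumptions):

```python
# Sketch of de-identifying records before research sharing: drop direct
# identifiers and replace the patient key with a salted one-way hash.
# Field names and salt management are illustrative assumptions.
import hashlib

DIRECT_IDENTIFIERS = {"name", "address", "phone", "ssn"}
SALT = b"rotate-and-store-this-secret-separately"

def deidentify(record: dict) -> dict:
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # Hashed key lets researchers join a patient's records across tables
    # without ever seeing the underlying identity.
    clean["patient_key"] = hashlib.sha256(
        SALT + record["patient_id"].encode()
    ).hexdigest()
    del clean["patient_id"]
    return clean

row = {"patient_id": "P-1047", "name": "Jane Doe",
       "diagnosis": "C81.1", "drug": "doxorubicin"}
print(deidentify(row))  # identity removed, records still linkable by hash
```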

There may be public-private partnerships that combine public and commercial data sets to improve system performance. For example, to improve transportation, cities could combine information from ride-sharing services with their own data on social service locations, bus routes, mass transit, and highway congestion. This would help cities deal with traffic congestion and assist in highway and mass transit planning.

The combination of these approaches will improve access to data for researchers, government and the business community without compromising privacy. As Ian Buck, vice president of NVIDIA, noted, "Data is the fuel that drives the AI engine. The federal government has access to vast sources of data. By opening up access to this data, we can unlock insights that will transform the U.S. economy." Through the Data.gov portal, the federal government has already put more than 230,000 datasets into the public domain, driving innovation and improving AI and data analytics technologies. The private sector also needs to make research data easier to access so that the public can reap the full benefits of artificial intelligence.

Increasing public investment in AI

According to Greg Brockman, co-founder of OpenAI, the U.S. federal government is investing just $1.1 billion in unclassified AI technologies. This is far less than China and other leading countries in this area of research. This shortcoming is noteworthy because the economic returns from AI are very large. To spur economic development and social innovation, the federal government needs to increase investment in artificial intelligence and data analytics. Increased investment is likely to pay off many times over in economic and social benefits.

Promoting digital education and workforce development

As the application of artificial intelligence accelerates across many industries, it is critical that we reimagine our educational institutions for a world in which AI will be ubiquitous and students will need very different training than what they receive now. Currently, many students are not being trained in the skills that will be needed in an AI-dominated environment. For example, there is currently a shortage of data scientists, computer scientists, engineers, coders, and platform developers. If our education system doesn't mold more people with these skills, it will limit the development of AI.

For example, in 2017, the National Science Foundation funded more than 6,500 graduate students in computer science fields and launched several new initiatives aimed at advancing computer science at all levels, from preschool to undergraduate and continuing education. The goal is to build a broader talent base in AI and data analytics so that the United States can take full advantage of the knowledge revolution.

However, the learning process itself also needs to change significantly. An AI world requires not only technical skills but also skills in critical thinking, collaboration, design, visual display of information, and independent thinking. AI will reconfigure society and the economy, and there is a need to think about the "big picture" of what this will mean for ethics, governance, and societal impact. People will need the ability to think broadly across many issues and integrate knowledge from different fields.

One example of new ways to prepare students for a digital future is the IBM Teacher Advisor program, where free online Watson tools help teachers bring the latest knowledge into the classroom. With their help, teachers can develop new lesson plans in STEM and other areas, find relevant instructional videos, and help students get the most out of their classes. In this way, they are harbingers of the new learning environments that need to be created.

Establishing a federal artificial intelligence advisory committee

Federal officials need to think about how they will work with artificial intelligence. As noted, there are many issues ranging from the need for greater access to data to addressing bias and discrimination. These and other issues need to be considered in order to take full advantage of this new technology.

To move in this direction, several members of Congress have introduced the Future of Artificial Intelligence Act, a bill to establish broad policy and legal guidelines for artificial intelligence. It proposes that the Secretary of Commerce establish a federal advisory committee on the development and deployment of artificial intelligence. The bill would create a mechanism for the federal government to receive recommendations on creating "a climate of investment and innovation to ensure U.S. global competitiveness," "optimizing the development of artificial intelligence to accommodate potential growth, restructuring, or other changes in the U.S. workforce," "supporting the unbiased development and application of artificial intelligence," and "protecting the privacy rights of individuals."

Among the specific issues the committee is to consider are: competitiveness, impact on the workforce, education, ethics training, data sharing, international cooperation, accountability, machine learning bias, impact on rural areas, government efficiency, investment climate, impact on jobs, bias, and impact on consumers. The Committee is directed to report to Congress and the Administration 540 days after enactment on any necessary legislative or administrative action on AI.

This bill is a step in the right direction, although the field is evolving so rapidly that we would recommend reducing the time frame for a report from 540 days to 180 days. Waiting nearly two years for a committee report will certainly result in missed opportunities and lack of action on important issues. Given the rapid progress in this area, shortening the timeframe for the committee's analysis would be very helpful.

Engaging with state and local officials

States and local governments are also taking action on AI. For example, the New York City Council unanimously passed a bill directing the mayor to form a task force to "monitor the fairness and validity of algorithms used by municipal agencies." City officials use algorithms to "determine whether lower bail will be set for an indigent defendant, locate fire stations, assign students to public schools, evaluate teacher performance, detect Medicaid fraud, and determine where the next crime will occur."

According to the bill's drafters, city officials want to know how these algorithms work and make sure there is sufficient transparency and accountability of AI. There are also concerns about the fairness and bias of AI algorithms, so the working group is tasked with analyzing those issues and making recommendations for their future use. By the end of 2019, it is expected to report to the Mayor on a range of policy, legal, and regulatory issues related to AI.

Some observers are already raising concerns that the task force will not go far enough in holding algorithms accountable. For example, Julia Powles of Cornell Tech and New York University notes that the bill originally required companies to make their AI source code publicly available for review and to run simulations of their decision-making using real-world data. After criticism of those provisions, however, former City Councilman James Vacca dropped the requirements in favor of a task force to study the issues. He and other city officials were concerned that publishing proprietary information about the algorithms would stall innovation and make it difficult to find AI vendors willing to work with the city. It remains to be seen how the local task force will balance innovation, privacy, and transparency.

Regulating broad goals rather than specific algorithms

The European Union has taken a restrictive stance on these issues of data collection and analysis. It has rules limiting the ability of companies to collect data on road conditions and map street views. Because many EU countries fear that personal data from people on unencrypted Wi-Fi networks could end up in public data sets, the EU has fined technology companies, demanded copies of collected data, and placed limits on the material gathered. This has made it difficult for technology companies operating there to develop the high-definition maps required for autonomous vehicles.

The GDPR being implemented in Europe places serious restrictions on the use of artificial intelligence and machine learning. According to published guidelines, "the rules prohibit any automated decisions that 'significantly affect' EU citizens." This includes techniques that evaluate a person's "job performance, economic situation, health, personal preferences, interests, reliability, behavior, location or movements." In addition, the new rules give citizens the right to review how digital services made specific algorithmic decisions affecting them.
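For simple models, one way a service could satisfy such a review or explanation requirement is to report each input's contribution to the decision. A sketch, assuming a linear scoring model with invented features and weights:

```python
# Sketch of explaining an algorithmic decision: for a linear scoring model,
# each feature's contribution is just weight * value, which can be reported
# to the affected person. Features, weights, and threshold are invented.
WEIGHTS = {"income": 0.6, "existing_debt": -1.1, "years_employed": 0.3}
THRESHOLD = 1.0

def decide_and_explain(applicant: dict):
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    decision = "approved" if score >= THRESHOLD else "declined"
    # Rank factors by absolute impact so the person sees what mattered most.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return decision, ranked

decision, reasons = decide_and_explain(
    {"income": 2.0, "existing_debt": 1.5, "years_employed": 4.0})
print(decision, reasons)  # declined; existing_debt is the dominant factor
```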

By taking a restrictive stance on data collection and analysis, the European Union puts its software manufacturers and developers at a disadvantage compared to the rest of the world.

If strictly interpreted, these rules will make it difficult for European software developers (and U.S. software developers working with European counterparts) to incorporate artificial intelligence and high-definition mapping into autonomous vehicles. Central to navigation in such vehicles is tracking location and movement. Without high-definition maps containing geocoded data and deep learning that utilizes this information, fully autonomous driving in Europe will stagnate. With these and other data protection measures, the European Union is putting its manufacturers and software developers at a disadvantage compared to the rest of the world.

It makes more sense to think about the broad goals of artificial intelligence and implement policies to achieve them, rather than trying to crack open black boxes and understand how specific algorithms work. Regulating individual algorithms will limit innovation and make it more difficult for companies to use artificial intelligence.

Taking bias seriously

Bias and discrimination are serious problems for AI. There have already been a number of cases of unfair treatment related to historical data, and measures need to be taken to ensure that this does not become a prevalent phenomenon in AI. Existing laws governing discrimination in the physical economy should be extended to digital platforms. This will help protect consumers and build trust in these systems as a whole.

Andrew Burt of Immuta states, "A key challenge facing predictive analytics is transparency. We live in a world where data science is taking on increasingly important challenges, and the only thing holding them back is how well the scientists training the models can explain exactly what their models are doing."

Maintaining human oversight and control mechanisms

Some experts argue that it should be possible to provide for human oversight and control of artificial intelligence systems. For example, Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, argues that there should be rules to regulate these systems. First, he argues that AI should be regulated by all the laws that have already been developed for human behavior, including rules regarding "cyberbullying, stock manipulation or terrorist threats," as well as "engaging humans in the commission of crimes." Second, he argues that these systems should report that they are automated systems, not humans. Third, he argues that "an A.I. system cannot store or disclose sensitive information without explicit authorization from the source of that information." His reasoning is that these tools store so much data that people should be aware of the privacy risks posed by AI.

Similarly, the IEEE Global Initiative has developed ethical guidelines for AI and autonomous systems. Its experts suggest that these models should be programmed to take into account commonly accepted human norms and rules of behavior. AI algorithms should consider the importance of these norms, how to resolve conflicting norms, and how to resolve these norms transparently. According to ethics experts, software should be programmed to be "non-deceptive" and "honest." When failures occur, there should be coping mechanisms in place. In particular, AI should be sensitive to issues such as bias, discrimination and fairness.

A group of machine learning experts argues that it is possible to automate ethical decision-making. Using the trolley problem as a moral dilemma, they ask: if an autonomous car goes out of control, should it be programmed to kill its own passengers or the pedestrians crossing the street? They devised a "voting system" that asked 1.3 million people to assess alternative scenarios, summarized the overall choices, and applied those aggregated views to a range of vehicle possibilities. This automated ethical decision-making in artificial intelligence algorithms by taking public preferences into account. The procedure does not, of course, reduce the tragedy of a fatal accident such as Uber's, but it gives AI developers a way to incorporate ethical considerations into their planning.
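The aggregation step in such a "voting system" can be as simple as tallying, per dilemma scenario, the outcome most respondents preferred. A sketch with invented votes, not the researchers' actual data or method:

```python
# Sketch of crowd-sourced ethical preference aggregation: tally votes per
# dilemma scenario and adopt the majority choice. Votes are invented data.
from collections import Counter

votes = [  # (scenario, preferred_action) pairs from survey respondents
    ("swerve_vs_stay", "protect_pedestrians"),
    ("swerve_vs_stay", "protect_pedestrians"),
    ("swerve_vs_stay", "protect_passengers"),
    ("child_vs_adult", "protect_child"),
    ("child_vs_adult", "protect_child"),
]

def aggregate(votes):
    by_scenario = {}
    for scenario, action in votes:
        by_scenario.setdefault(scenario, Counter())[action] += 1
    # The policy adopted for each scenario is the majority preference.
    return {s: c.most_common(1)[0][0] for s, c in by_scenario.items()}

print(aggregate(votes))
# -> {'swerve_vs_stay': 'protect_pedestrians', 'child_vs_adult': 'protect_child'}
```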

Punishing malicious behavior and improving cybersecurity

As with any new technology, it is important to discourage malicious manipulation intended to trick software or use it for undesirable ends. This is especially important given the dual-use nature of AI, where the same tool can serve both beneficial and malicious purposes. Malicious use of AI exposes people and organizations to unnecessary risk and undermines the merits of the emerging technology. It includes behavior such as hacking, manipulating algorithms, compromising privacy and confidentiality, and identity theft. Attempts to hijack AI in order to extract confidential information should be seriously penalized as a way of deterring such actions.

In a rapidly changing world where many organizations have advanced computing capabilities, cybersecurity must be given serious consideration. Countries must carefully protect their systems and prevent other nations from compromising their security. According to the U.S. Department of Homeland Security, one of the largest U.S. banks receives about 11 million calls per week at its service center. To protect its telephony from denial-of-service attacks, it uses "a machine learning-based policy system that blocks more than 120,000 calls per month based on voice firewall policies, including threatening calls, robocalls, and potential fraudulent calls." In this way, machine learning can help protect technology systems from malicious attacks.
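As a toy sketch of a policy-based voice firewall in this spirit (the bank's actual system is proprietary; all rules, fields, and weights here are invented):

```python
# Toy sketch of a voice-firewall policy: score each incoming call against
# simple rules and block when the score crosses a threshold. All rules,
# fields, and weights are invented for illustration.
BLOCKLIST = {"+1-555-0100"}  # numbers tied to prior threatening calls
POLICY_WEIGHTS = {"robocall_signature": 0.6, "spoofed_caller_id": 0.5,
                  "prior_fraud_reports": 0.4}
BLOCK_THRESHOLD = 0.7

def should_block(call: dict) -> bool:
    if call["number"] in BLOCKLIST:
        return True
    score = sum(w for flag, w in POLICY_WEIGHTS.items() if call.get(flag))
    return score >= BLOCK_THRESHOLD

call = {"number": "+1-555-0199", "robocall_signature": True,
        "spoofed_caller_id": True}
print(should_block(call))  # -> True (0.6 + 0.5 exceeds the threshold)
```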

To summarize, the world is on the cusp of a revolution in many industries thanks to artificial intelligence and data analytics. There are already significant advances in finance, homeland security, healthcare, criminal justice, transportation, and smart cities that have transformed decision-making, business models, risk mitigation, and system performance. These developments are generating significant economic and societal benefits.

The world is on the cusp of a revolution in many industries thanks to artificial intelligence, but the methods of developing artificial intelligence systems require a deeper understanding due to the serious implications these technologies will have on society as a whole.

However, the way in which AI systems evolve has major implications for society as a whole. It affects how political issues are addressed, ethical conflicts are overcome, legal realities are resolved, and how transparent AI and data analytics decisions must be. Human choices in software development affect how decisions are made and how they are integrated into organizational routines. There is a need to better understand exactly how these processes occur because they will have a significant impact on the population soon and for the foreseeable future. AI may well revolutionize human activity and become the most influential innovation in human history.
