Dutch artist Bas Uterwijk is using artificial intelligence and deep learning to turn statues and paintings into photorealistic human portraits.
The same applies to paintings, where the AI software adds photographic attributes such as lighting and subtle variation to make the picture convincing. The Artbreeder AI program can also recreate images from scratch, using data points that capture the features of the source photos.
The United States’ global competitiveness in artificial intelligence could be eroded by poor management of AI projects at the Department of Defense. Tracking artificial intelligence programs and encouraging data sharing are needed to keep the US a global leader in AI.
The Joint Artificial Intelligence Center is creating a standardized AI definition and developing governance policies around artificial intelligence.
Veterans face challenges claiming their benefits, and a new machine learning application, the Content Classification Predictive Service (CCPS), is bringing speed and accuracy to the handling of veteran claims. Veterans have waited a long time while staff members check claims manually, but CCPS can review the same information in a fraction of the time.
These and more insights in our Weekly AI Update
AI Creating Human-Looking Images
Artificial intelligence is helping to create human-like portraits from statues and paintings of famous faces.
Bas Uterwijk, a Dutch artist, used AI to create the photo-style portraits. He focused on well-known figures including Vincent van Gogh and Napoleon Bonaparte. The #deeplearning technology enabled him to take a photo of a statue or a painting and turn it into a more human-like face. The software uses data points to pick up on facial features and photographic qualities.
The AI is called Artbreeder and can also create human-looking images from scratch. So far, the artist has worked on 50 to 60 of the AI-generated pictures¹. He is also working on a model that could show Anne Frank at an age she never reached.
Tracking Artificial Intelligence Programs
Poor management of artificial intelligence projects in the Department of Defense could erode the United States’ competitive advantage in the emerging technology, the Defense Department’s watchdog warned in a July 1 report.
The DoD inspector general suggested the Joint Artificial Intelligence Center, established to facilitate the adoption of artificial intelligence tools across the department, take several steps to improve project management, including determining a standard definition of artificial intelligence, improving data sharing, and developing a process to accurately track artificial intelligence programs. The JAIC missed a March 2020 deadline to release a governance framework. It still plans to do so, according to the report, but the new date is redacted.
The inspector general started the audit to determine the gaps and weaknesses in the department’s enterprise-wide AI governance², the responsibility of the JAIC. After starting its audit, the DoD IG determined the organization had not yet developed a department-wide AI governance framework.
Machine Learning Transforming Veterans Benefits
Veterans deserve fast access to their disability benefits. The Department of Veterans Affairs is using a new #machinelearning tool³ to deliver these benefits to Veterans more quickly.
The tool’s name is not easy to remember — Content Classification Predictive Service (CCPS) Application Programming Interface (API) — but the results are certainly hard to ignore. VA’s Office of Information and Technology (OIT), working in partnership with the Veterans Benefits Administration (VBA), developed and implemented CCPS to reduce the average time to establish Veteran disability compensation claims by three and a half days.
CCPS is also helping VA improve service to Veterans by increasing the speed and accuracy of disability claims reviews. The tool automatically performs repetitive tasks that formerly required staff review and input.
During its first week of use, CCPS helped VA establish 3,994 out of 8,368 claims (48 percent) automatically without the need for manual intervention. Previously, VBA only processed about two percent of disability compensation claims automatically.
Visual Causal Discovery Network
Researchers at MIT, University of Washington, and the University of Toronto describe an AI system that learns the physical interactions⁴ affecting materials like fabric by watching videos. They claim the system can extrapolate to interactions it has not seen before, like those involving multiple shirts and pants, enabling it to make long-term predictions.
Causal understanding is the basis of counterfactual reasoning, or the imagining of possible alternatives to events that have already happened. For example, in an image containing a pair of balls connected to each other by a spring, counterfactual reasoning would entail predicting the ways the spring affects the balls’ interactions.
The researchers’ system — a Visual Causal Discovery Network (V-CDN) — guesses at interactions with three modules: one for visual perception, one for structure inference, and one for dynamics prediction. The perception module is trained to extract certain keypoints (areas of interest) from videos, from which the inference module identifies the variables that govern interactions between pairs of keypoints.
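To make that division of labor concrete, below is a minimal PyTorch sketch of such a three-module pipeline. The module names, layer choices, and tensor shapes are illustrative assumptions only and do not reproduce the actual V-CDN architecture described in the paper⁴.

```python
import torch
import torch.nn as nn

class PerceptionModule(nn.Module):
    """Extracts K keypoints (x, y) from each 64x64 video frame."""
    def __init__(self, num_keypoints=10):
        super().__init__()
        self.num_keypoints = num_keypoints
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2 * num_keypoints))

    def forward(self, frames):                       # frames: (T, 3, 64, 64)
        return self.encoder(frames).view(-1, self.num_keypoints, 2)

class StructureInferenceModule(nn.Module):
    """Scores a directed interaction graph over pairs of keypoints."""
    def __init__(self):
        super().__init__()
        self.edge_scorer = nn.Linear(4, 1)           # features of a (sender, receiver) pair

    def forward(self, keypoints):                    # keypoints: (T, K, 2)
        last = keypoints[-1]                         # most recent frame, K x 2
        k = last.shape[0]
        pairs = torch.cat([last.unsqueeze(1).expand(k, k, 2),
                           last.unsqueeze(0).expand(k, k, 2)], dim=-1)
        return torch.sigmoid(self.edge_scorer(pairs)).squeeze(-1)   # K x K edge weights

class DynamicsModule(nn.Module):
    """Predicts the next keypoint positions from the inferred graph."""
    def __init__(self):
        super().__init__()
        self.update = nn.Linear(4, 2)

    def forward(self, keypoints, edges):
        last = keypoints[-1]
        messages = edges @ last / (edges.sum(-1, keepdim=True) + 1e-6)  # average over neighbors
        return last + self.update(torch.cat([last, messages], dim=-1))

frames = torch.randn(8, 3, 64, 64)                   # an 8-frame toy video
kp = PerceptionModule()(frames)
edges = StructureInferenceModule()(kp)
next_kp = DynamicsModule()(kp, edges)                # long-term prediction would roll this step forward
```

Rolling the dynamics step forward repeatedly is what would let a system of this kind make the longer-term predictions the researchers describe.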
Encouraging Growth in AI Research
The National Research Cloud, which has bipartisan support in Congress, has gained the approval of several universities, including Stanford, Carnegie Mellon and Ohio State, and the participation of Big Tech companies Amazon, Google and IBM.
The project would give academics access to tech companies’ #clouddata centers and public data sets, encouraging growth in AI research⁵. Although the Trump administration has cut funding to other kinds of research, it has proposed doubling its spending on AI by 2022.
The research cloud, though only a conceptual blueprint at this stage, is a sign of a largely effective campaign by universities and tech companies to persuade the American government to increase its backing for research into #artificialintelligence, driven by the recognition that AI technology is essential to national security and economic competitiveness.
Artificial Intelligence-Assisted Robot Delivery
Refraction AI’s last-mile delivery robot⁶, the REV-1, has seen a surge in lunch delivery requests since the start of the coronavirus pandemic. Unsurprisingly, demand for this contactless delivery option has grown: Refraction AI has received three to four times more orders with the REV-1 since the pandemic began.
The company, which first launched in July 2019, built the robot specifically for last-mile deliveries between stores and customers in urban communities like Ann Arbor, Mich., where the pilot program is now taking place.
Customers in the Ann Arbor community who live within the 2.5-mile delivery radius can sign up for REV-1’s pilot lunch delivery program, which has partnered with four restaurants (three Asian and one Mexican), according to Refraction AI. More potential partners are currently on a waitlist.
AI-enabled Robotics for Waste Recycling
When China restricted the importation of recyclable waste products in 2018, many western companies turned to robotic technologies to strengthen their processing capabilities. To recycle in a cost-effective, comprehensive and safe way, goods must be broken down into their constituent commodities to be sold on, in a process that has been likened to “unscrambling an egg”.
Roboticists think that computer vision, neural networks and modular robotics can enable a more intelligent, flexible approach to recycling. AI-enabled #robotics⁷ can identify items based on visual cues such as logos, colour, shape and texture, sorting them and taking them apart.
The system can spot a Nestlé logo depicting a cow and surmise that the item is a dairy product. Such systems excel at identifying small items, such as the coffee pods used in Nespresso machines, which, while technically recyclable, are not always recycled.
The Montreal AI Ethics Institute
The Montreal AI Ethics Institute, a nonprofit research organization dedicated to defining humanity’s place in an algorithm-driven world, today published its inaugural State of AI Ethics report⁸. The 128-page multidisciplinary paper, which covers a set of areas spanning agency and responsibility, security and risk, and jobs and labor, aims to bring attention to key developments in the field of AI this past quarter.
The State of AI Ethics first addresses the problem of bias in ranking and recommendation algorithms, like those used by Amazon to match customers with products they’re likely to purchase. The authors note that while there are efforts to apply the notion of diversity to these systems, they usually consider the problem from an algorithmic perspective and strip it of cultural and contextual social meanings.
The authors advocate a solution in the form of a framework that does away with rigid, ascribed categories and instead looks at subjective ones derived from a pool of “diverse” individuals: determinantal point process (DPP). Put simply, it’s a probabilistic model of repulsion that clusters together data a person feels represents them in embedding spaces — the spaces containing representations of words, images, and other inputs from which AI models learn to make predictions.
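As a rough illustration of how a determinantal point process favors diversity, the toy sketch below uses an assumed similarity kernel over item embeddings; the data and helper function are illustrative only and are not drawn from the report⁸.

```python
import numpy as np

# Toy embeddings: items 0 and 1 are nearly identical, item 2 is very different.
embeddings = np.array([
    [1.0, 0.0],
    [0.9, 0.1],
    [0.0, 1.0],
])
L = embeddings @ embeddings.T        # similarity (Gram) kernel

def subset_weight(kernel, subset):
    """Unnormalized DPP probability of selecting exactly this subset:
    proportional to the determinant of the kernel restricted to the subset."""
    return np.linalg.det(kernel[np.ix_(subset, subset)])

print(subset_weight(L, [0, 1]))      # ~0.01: a redundant pair is unlikely
print(subset_weight(L, [0, 2]))      # 1.0: a diverse pair is far more likely
```

In a recommendation setting, the rows would be embeddings of items a person feels represent them, and the repulsion built into the determinant discourages returning near-duplicate results.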
An Ethical Eye on AI
Researchers from the University of Warwick, Imperial College London, EPFL (Lausanne) and Sciteb Ltd have found a mathematical means of helping regulators and businesses manage and police Artificial Intelligence systems’ biases towards making unethical, and potentially very costly and damaging, commercial choices — an ethical eye on AI.
Artificial intelligence is increasingly deployed in commercial situations such as using AI to set prices of insurance products⁹ to be sold to a particular customer. There are legitimate reasons for setting different prices for different people, but it may also be profitable to ‘game’ their psychology or willingness to shop around.
The AI has a vast number of potential strategies to choose from, but some are unethical and will incur not just moral cost but a significant potential economic penalty if stakeholders find that such a strategy has been used: regulators may levy fines of billions of dollars, pounds or euros, customers may boycott the company, or both.
In an environment in which decisions are increasingly made without human intervention, there is therefore a very strong incentive to know under what circumstances AI systems might adopt an unethical strategy, and to reduce that risk or eliminate it entirely if possible.
Spearheading Data Science Initiatives
Princeton University researchers will push the limits of data science by leveraging artificial intelligence and machine learning across the research spectrum in an interdisciplinary pilot project made possible through a major gift from Schmidt Futures.
The Schmidt DataX Fund will help advance the breadth and depth of data science impact on campus, accelerating discovery in three large, interdisciplinary research efforts and creating a suite of opportunities to educate, train, convene and support a broad data science community¹⁰ at the University.
The Schmidt DataX Fund will be used to enhance the extent to which data science permeates discovery across campus and infuses machine learning and artificial intelligence into a range of disciplines. Many researchers and educators are eager to bring data science to their fields but lack the expertise, experience and tools.
The funds will support a range of campus-wide data science initiatives led by the Center for Statistics and Machine Learning, including: development of graduate-level courses in data science and machine learning; creation of mini-courses and workshops to train researchers in the latest software tools, cloud platforms and public data sets.
Neutralizing COVID-19 with Robotics
MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is working to make complex spaces easier to sanitize. Working closely with Ava Robotics and the Greater Boston Food Bank (GBFB), the CSAIL team created a UV-C light fixture that disinfects surfaces and neutralizes coronavirus particles lingering in the air. Fitted atop an Ava Robotics base, the robot could be trained to navigate spaces #autonomously in the future.
The ultraviolet light works best on directly visible surfaces, but even reflected light in nooks and crannies is effective. During tests at GBFB’s warehouse, the prototype robot was teleoperated to get the lay of the land, but it’s equipped to navigate the area without supervision someday. The robot slowly moves through the 4,000 square foot warehouse, neutralizing 90 percent of coronaviruses¹¹ on surfaces within half an hour.
Deloitte AI Institute for Research and Applied Innovation
Deloitte has opened the Deloitte AI Institute for research and applied innovation. The institute will publish cutting-edge research covering focus areas such as global advancements, the future of work, AI ethics, and case studies. Its premier publications will include the bi-annual State of AI in the Enterprise study, as well as the Trustworthy AI framework for ethics¹².
The institute’s network will also bring together top industry thought leaders and academics, startups, R&D groups, entrepreneurs, investors, and innovators. To this group, Deloitte will add its applied AI knowledge and understanding of industry pain points in order to help clients transform quickly with AI.
The network’s thought leaders will also include prominent ethicists, who will work with Deloitte and top stakeholders from all parts of society to co-design effective policies for AI ethics.
Works Cited
¹AI-Generated Pictures, ²AI Governance, ³Machine-Learning Tool, ⁴Physical Interactions, ⁵Encouraging Growth in AI Research, ⁶Delivery Robot, ⁷AI-enabled Robotics, ⁸State of AI Ethics Report, ⁹Insurance Products, ¹⁰Data Science Community, ¹¹Coronaviruses, ¹²Trustworthy AI Framework for Ethics