Salesforce's adoption of artificial intelligence to deliver training and meet employees' learning needs is a step in the right direction as the future of learning takes shape. Artificial intelligence will support learning by helping students acquire knowledge in digital environments.
The Trailhead application is a good example of digital learning: it allows Salesforce employees to reskill and earn achievement badges for every course they complete.
Facial recognition technology remains controversial, with calls for legislative action to tame biased algorithms that misidentify people and infringe on privacy. The mistake made by software used by Michigan State Police highlights the need to address bias in technology.
According to that facial recognition software, Robert Williams, a Black man, had shoplifted; this turned out to be untrue. The mistake was pure algorithmic bias in facial recognition.
The first step to protecting public interests is developing technology around the principles of ethics, accountability, and transparency, aligning technology with human needs by promoting accountability in all areas of interaction. Unfortunately, facial recognition systems are often trained predominantly on white faces, with little diversity in the training data, and this leads to bias.
These and more insights in our Weekly AI Update.
Facial Recognition Technology
The ACLU on Wednesday urged policymakers to end law enforcement use of facial recognition technology¹ and filed an administrative complaint with Detroit police on behalf of Robert Williams, a Black man who was wrongfully arrested in January after software owned by Michigan State Police misidentified him as a shoplifting suspect.
The group’s call for legislative action came after almost a month of nationwide protests over police mistreatment of Black Americans, which Williams referenced Wednesday in an op-ed for the Washington Post about his experience being arrested on his front lawn while his wife and young daughters watched, then detained for nearly 30 hours.
Lawmakers have called for limiting law enforcement use of #facialrecognition technology. However, there are currently no national restrictions, despite pressure from advocacy groups including the ACLU and Fight for the Future.
Samsung’s R&D Centers and AI Centers
Samsung Electronics Co. said Wednesday it has named a renowned artificial intelligence expert as chief of its research and development hub. The South Korean tech giant said Sebastian Seung, professor at Princeton University’s Neuroscience Institute and Department of Computer Science, will lead Samsung Research, the R&D hub of Samsung’s SET (consumer electronics) business.
Seung, 54, will be responsible for managing Samsung’s 15 R&D centers and seven AI centers in 13 countries to secure future technologies, according to the company. Samsung said it expects Seung, who previously worked at the Massachusetts Institute of Technology (MIT) and Bell Labs, to boost its AI capability in the era of the #fourthindustrialrevolution² by utilizing his experience, research ability and network.
Since 2018, the Korean-American has been helping Samsung’s research and business plans in the AI sector as chief research scientist at Samsung Research, contributing to the establishment of global AI centers and the recruitment of talented scientists. Samsung, the world’s largest memory chip and smartphone maker, said reinforcing its AI capability will also enhance its competitiveness in the non-memory semiconductor sector.
Acrobatic Flight with Drones
Researchers at Intel, the University of Zurich, and ETH Zurich describe an AI system that enables autonomous drones³ to perform acrobatics like barrel rolls, loops, and flips with only onboard sensing and computation. By training entirely in simulation and leveraging demonstrations from a controller module, the system can deploy directly onto a real-world robot without fine-tuning, according to the coauthors.
Acrobatic flight with drones is extremely challenging. Human pilots often train for years to master moves like power loops and rolls, and existing autonomous systems that perform agile maneuvers require external sensing and computation. That said, the acrobatics are worth pursuing because they represent a challenge for all of a drone’s components.
Vision-based systems usually fail as a result of factors like motion blur, and the harsh requirements of fast and precise control at high speed make it difficult to tune controllers; even the tiniest mistake can result in catastrophic outcomes.
The researchers’ technique entails training the above-mentioned controller to predict actions from a series of drone sensor measurements and user-defined reference trajectories.
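The core idea, training a student policy by supervised imitation of demonstrations from a privileged controller, can be sketched in a toy form. This is a hedged illustration with invented dimensions and a linear "expert" standing in for the paper's controller module; the real system trains a deep network on simulated sensor histories and reference trajectories.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setup: a "privileged" expert controller
# (here, an unknown linear map) produces actions from sensor readings
# plus a reference trajectory; the student policy is then fit by
# supervised regression on those demonstrations. Shapes are illustrative.
n_demos, obs_dim, act_dim = 500, 8, 4
W_expert = rng.normal(size=(obs_dim, act_dim))

obs = rng.normal(size=(n_demos, obs_dim))   # sensor history + reference
actions = obs @ W_expert                    # expert demonstrations

# Student: least-squares fit (a real system would train a deep network)
W_student, *_ = np.linalg.lstsq(obs, actions, rcond=None)

# The student now reproduces the expert on held-out observations
test_obs = rng.normal(size=(10, obs_dim))
err = np.max(np.abs(test_obs @ W_student - test_obs @ W_expert))
print(f"max imitation error: {err:.2e}")
```

Because training happens entirely against simulated demonstrations, the learned policy can be deployed without the expert (or its external sensing) at flight time.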
Deep Generative Models
Novel drug design is difficult, costly and time-consuming. On average, it takes $3 billion and 12 to 14 years for a new drug to reach the market.
One third of this overall cost and time is attributed to the drug discovery phase, which requires synthesizing thousands of molecules to develop a single pre-clinical lead candidate.
IBM Research⁴ is leveraging artificial intelligence-based models to expedite this discovery phase at a significantly lower cost. Deep generative models, such as variational autoencoders and generative adversarial networks, are considered promising for the computational creation of novel molecules due to their state-of-the-art results in the virtual synthesis of images, text, speech, and image captions.
Virtual creation of new and optimal lead candidates requires exploring and performing a multi-objective optimization in a vast chemical space, as the model needs to assess and balance between critical factors such as drug activity, selectivity, toxicity, ease of synthesis, stability, etc. Such multi-objective optimization is handled using either conditional generative models or optimization methods such as #bayesianoptimization.
Mixed-Precision Training
One of the most exciting additions expected to land in PyTorch 1.6, coming soon, is support for automatic mixed-precision training.
Mixed-precision training⁵ is a set of techniques for substantially reducing neural network training time by performing as many operations as possible in half-precision floating point (fp16) instead of single-precision floating point (fp32), without causing model training to diverge. It’s a combination of three different techniques.
However, up until now, the tensor cores that accelerate fp16 math have remained difficult to use, as exploiting them required writing reduced-precision operations into your model by hand. This is where the “automatic” in automatic mixed-precision training comes in. The soon-to-be-released torch.cuda.amp API will allow you to implement mixed-precision training in your training scripts in just five lines of code!
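One of the techniques bundled into mixed precision is loss scaling, which torch.cuda.amp handles via its GradScaler: gradients smaller than fp16's tiniest representable value underflow to zero, so the loss (and hence every gradient) is multiplied by a large factor before the fp16 backward pass and divided back out in fp32. The numpy snippet below demonstrates the underflow and the rescue, independent of PyTorch.

```python
import numpy as np

# A gradient value below fp16's smallest subnormal (~6e-8) underflows:
grad = 1e-8
assert np.float16(grad) == 0.0           # information lost in fp16

# Loss scaling: multiply by a large factor before casting to fp16,
# then divide back in fp32 when updating the master weights.
scale = 65536.0                          # a typical power-of-two scale
scaled_fp16 = np.float16(grad * scale)   # now representable in fp16
recovered = np.float32(scaled_fp16) / scale

print(recovered)  # ~1e-8: the gradient survives the fp16 round-trip
```

A power-of-two scale is conventional because multiplying and dividing by it changes only the exponent bits, introducing no extra rounding error of its own.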
Artificial Intelligence Framework from Google
Google LLC today launched a new iteration of TensorFlow, its popular artificial intelligence framework, and a pair of complementary modules aimed at enabling algorithms to process user data more responsibly.
TensorFlow 2.0 focuses primarily on improving usability. The release brings a streamlined application programming interface based on Keras, an open-source tool designed to make AI development frameworks easier to use. It enables engineers to access features that were previously spread out across multiple APIs in one place and provides more options for customizing the development workflow.
Another key enhancement is the addition of support for so-called eager execution. TensorFlow 2.0⁶ fires up AI models much faster than previous versions, which lets engineers try out different model variations with shorter delays between test runs. This has the potential to save a considerable amount of time given the highly iterative nature of #machinelearning development.
AI Accelerator Applications
AI software startup Mipsology is working with Xilinx to enable FPGAs to replace GPUs in AI accelerator applications⁷ using only a single additional command. Mipsology’s “zero effort” software, Zebra, converts GPU code to run on Mipsology’s AI compute engine on an FPGA without any code changes or retraining necessary.
Xilinx announced today that it is shipping Zebra with the latest build of its Alveo U50 cards for the data center. Zebra already supports inference acceleration on other Xilinx boards, including Alveo U200 and Alveo U250.
“The level of acceleration that Zebra brings to our Alveo cards puts CPU and GPU accelerators to shame,” said Ramine Roane, Xilinx’s vice president of marketing. “Combined with Zebra, Alveo U50 meets the flexibility and performance needs of AI workloads and offers high throughput and low latency performance advantages to any deployment.”
FPGAs historically were seen as notoriously difficult to program for non-specialists, but Mipsology wants to make FPGAs into a plug-and-play solution that is as easy to use as a CPU or GPU. The idea is to make it as easy as possible to switch from other types of acceleration to FPGA.
AI-powered App: Face Depixelizer
A new AI-powered app called Face Depixelizer⁸ can turn pixelated images into high-resolution pictures, just like in sci-fi and crime movies. Created by Russian developer Denis Malimonov, the app uses StyleGAN: the AI looks for pictures that, when downscaled, resemble the original pixelated face.
The truth is that Face Depixelizer doesn’t magically depixelate a photo and reveal the actual person; rather, it generates an alternative image, finding a photo with a similar look and turning the pixelated input into a high-res, realistic one.
Once the app was released, users around the world started playing with the AI, which of course would show different results each time, even if the picture was the same. For example, Twitter users showcased how famous video game characters like Luigi and Mario would look.
However, other users noticed that the tool was not accurate when processing Black faces. For example, when given a pixelated picture of Barack Obama, Face Depixelizer turned him into a white man, and even when users tried different pixelated pictures of Obama, the result was consistently wrong. This suggests the underlying data and algorithms are trained primarily on white faces, making them racially biased.
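The search the article describes, finding an image that matches the pixelated input after downscaling, can be sketched with a toy candidate search. Assumptions are flagged in the comments: the "gallery" of 1-D vectors stands in for face images, and a real system searches StyleGAN's latent space by gradient descent rather than ranking a finite list.

```python
import numpy as np

def downscale(img, factor=2):
    """Average adjacent pixels, mimicking how a photo gets pixelated."""
    return img.reshape(-1, factor).mean(axis=1)

rng = np.random.default_rng(1)

# Toy "gallery" of high-res 1-D faces (stand-ins for generator outputs).
gallery = rng.normal(size=(100, 16))

true_face = gallery[42]
pixelated = downscale(true_face)        # what the app receives as input

# Pick the candidate whose downscaled version best matches the input.
# This recovers *a* face consistent with the pixels, not necessarily
# the actual person -- exactly the caveat noted above.
errors = [np.sum((downscale(c) - pixelated) ** 2) for c in gallery]
best = int(np.argmin(errors))
print(best)
```

Here the true face happens to be in the gallery, so the search finds it exactly; when it is not, which is the realistic case, the method returns the nearest lookalike, and a gallery (or training set) dominated by white faces will return white lookalikes.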
Algorithmic Bias
A recent study suggests that the algorithms used by the popular ride-hailing companies Uber and Lyft may discriminate against customers seeking transportation in predominantly non-white neighborhoods.
Aylin Caliskan and Akshat Pandey at George Washington University in Washington DC analyzed transportation and census data in Chicago in a paper that assessed whether there was a racial disparity in how much passengers were charged based on location. Their dataset included more than 100 million trips between November 2018 and December 2019, with 68 million of them being made by individual riders.
They found that the ride-hailing companies charged a higher price per mile for a trip if either the destination or the pick-up point had a higher percentage of non-white residents, low-income residents, or highly educated residents.
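The kind of comparison behind that finding, price per mile grouped by neighborhood demographics, can be sketched on toy data. The fares, mileages, and demographic fractions below are invented purely to show the computation; the study itself joined more than 100 million Chicago trips against census tract data and controlled for many more factors.

```python
import numpy as np

# Toy trip records: (fare_usd, miles, pct_nonwhite_of_pickup_tract).
trips = np.array([
    [12.0, 4.0, 0.20],
    [10.0, 4.0, 0.25],
    [15.0, 4.0, 0.80],
    [14.0, 4.0, 0.75],
])

fare, miles, pct_nonwhite = trips.T
price_per_mile = fare / miles

# Mean price per mile: majority-non-white vs majority-white pickup tracts
majority_nonwhite = pct_nonwhite > 0.5
gap = (price_per_mile[majority_nonwhite].mean()
       - price_per_mile[~majority_nonwhite].mean())
print(f"price-per-mile gap: ${gap:.2f}")
```

A positive gap on real data, persisting after controlling for demand, trip length, and time of day, is what would indicate the disparity the researchers reported.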
This is not the first time that Uber and Lyft have been accused of algorithmic bias (or in-person human bias, for that matter). A 2016 study found that racial and gender discrimination were pronounced among drivers for Uber, Lyft and Flywheel.
Digital Infrastructure Investments
Chinese internet giant Tencent Holdings said it plans to invest 500 billion yuan (US$70 billion) over the next five years in new digital infrastructure focusing on cloud computing, #artificialintelligence, blockchain technology and #internetofthings, as well as the infrastructure to support them like advanced servers, supercomputers, data centers and 5G mobile networks.
The move follows Shenzhen-based Tencent’s plan to raise fresh capital for general corporate purposes by issuing medium-term notes, with a maximum limit of US$20 billion, to certain professional investors, according to its filing with the Hong Kong stock exchange on Monday.
The infrastructure program includes a new countrywide network of large-scale data centers, with a million servers deployed at each site, according to the company.
That would follow the company’s construction of its largest data center complex on a 51-hectare site, which includes more than 30,000 square metres of tunneled areas inside a 100-metre-high hill, in southwest Guizhou province.
AI shaping Future of Online Learning
Ahead of its virtual TrailheaDX 2020 developer event, Salesforce announced that it will embed the company’s Einstein AI software into Trailhead, its online learning platform.
Einstein Recommendations for Trailhead will provide “tailored recommendations” to help learners pick the right skills to complete and badges to earn.
When learners log into Trailhead, whether through the desktop site or the Trailhead Go mobile app, they will see personalized, intelligent recommendations for the skills and badges to complete, based on their activity, their role, their career aspirations, and their specific upskilling needs. These highly customized recommendations are powered by Einstein.
The more data the Einstein AI algorithm gets about a learner, the more personalized these recommendations become, because Salesforce gets to know the specifics of that learner’s career choices, their role, and what kinds of badges they prefer, whether beginner, intermediate, or advanced. All of that learner activity feeds into recommendations that grow more personalized over time.
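Activity-based recommendation of this kind can be sketched as generic content-based filtering: represent a learner's activity and each badge as vectors over skill topics, then rank badges by similarity. The badge names, topic features, and scoring below are all hypothetical; Einstein's actual model is proprietary and not described in the announcement.

```python
import numpy as np

# Hypothetical topic features per badge: [admin, developer, analytics].
badges = {
    "Apex Basics":          np.array([0.1, 0.9, 0.0]),
    "Reports & Dashboards": np.array([0.2, 0.0, 0.8]),
    "User Management":      np.array([0.9, 0.1, 0.0]),
}

# Learner profile built from their activity: mostly developer work.
learner_activity = np.array([0.1, 0.8, 0.1])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(badges, key=lambda b: cosine(learner_activity, badges[b]),
                reverse=True)
print(ranked[0])  # the badge closest to the learner's activity profile
```

The "more data, more personalized" effect falls out naturally: each completed badge and logged activity updates the learner vector, so the rankings track the learner's evolving profile.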
Works Cited
¹Facial Recognition Technology, ²Fourth Industrial Revolution, ³Autonomous Drones, ⁴IBM Research, ⁵Mixed Precision Training, ⁶TensorFlow 2.0, ⁷AI Accelerator Applications, ⁸Face Depixelizer