Intel and Tencent debut AI-powered camera systems for retail

Tencent and Intel are teaming up to launch a pair of artificially intelligent (AI) products for retail, the two announced today during Tencent’s Global Partner conference in China. Both products were developed by Tencent’s YouTu Lab — its computer vision research division — and have Intel’s Movidius Myriad chips inside.

The first is DeepGaze, an AI-powered camera for brick-and-mortar stores that keeps tabs on shoppers’ movements. It can track the number of customers near a given shelf display at various times throughout the day and perform hybrid object detection — some on-device and the rest in Tencent’s Intel Xeon Scalable processor-based cloud. DeepGaze sports the Movidius Myriad 2 vision processing unit (VPU), the same chip inside Google’s Clips camera, Flir’s Firefly, and DJI’s Phantom 4 drone. It’s optimized for image signal processing and on-device inference — the point at which a trained AI model makes predictions.

“With artificial intelligence, enterprises can gain new insights about their customers to both elevate the users’ experience and drive business transformation,” said Remi El-Ouazzane, vice president and chief operating officer of Intel’s AI Products Group. He said Tencent’s new solution takes advantage “of powerful Intel … chips to enable deep neural networks to run directly on the cameras, providing real-time and actionable data for various businesses, including retail and smart buildings.”

DeepGaze complements the YouBox, also announced today. It’s an on-premises server similarly designed for retail that, with the help of onboard AI systems, can ingest real-time feeds from up to 16 cameras and derive useful insights. Store owners can use it to predict sales performance and product turnover, Intel and Tencent said, enabling them to restock shelves without the need for manual inventory management. Under the hood is the Movidius Myriad X VPU, which features a dedicated hardware accelerator for AI computations.

“Intel is the perfect partner for our flexible enterprise solutions,” said Simon Wu, general manager at Tencent’s YouTu Lab. “Based on Intel Movidius Myriad chips and VPUs, the YouTu camera and box perform inference at the edge in tandem with Intel Xeon Scalable processors in the cloud to provide cost-effective and flexible solutions for verticals including retail and construction.”
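
The hybrid split the companies describe, confident detections handled on the camera and ambiguous ones deferred to the cloud, can be illustrated with a minimal sketch. The confidence threshold, data types, and routing logic below are assumptions for illustration, not details of DeepGaze’s actual pipeline.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Illustrative edge/cloud split: confident detections are counted on-device,
# ambiguous ones are queued for a heavier cloud model. The threshold and the
# Detection type are invented for this sketch.

@dataclass
class Detection:
    label: str
    confidence: float

EDGE_CONFIDENCE_THRESHOLD = 0.80  # assumed cut-off, not a published value

def split_edge_cloud(detections: List[Detection]) -> Tuple[List[Detection], List[Detection]]:
    """Keep confident detections on-device; defer the rest to the cloud."""
    on_device = [d for d in detections if d.confidence >= EDGE_CONFIDENCE_THRESHOLD]
    for_cloud = [d for d in detections if d.confidence < EDGE_CONFIDENCE_THRESHOLD]
    return on_device, for_cloud

if __name__ == "__main__":
    frame = [Detection("person", 0.93), Detection("person", 0.61), Detection("cart", 0.88)]
    local, deferred = split_edge_cloud(frame)
    print(f"counted on-device: {len(local)}, sent to cloud: {len(deferred)}")
```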

Tracking customers in the real world

With DeepGaze and YouBox, Tencent and Intel are dipping a tentative toe into an increasingly lucrative market: AI-driven retail analytics. They’re not the first. In June, Japanese telecom company NTT East collaborated with startup Earth Eyes to create AI Guardsman, a machine learning system that attempts to prevent shoplifting by scanning live camera feeds for suspicious activity. Firms like Standard Cognition and Trigo, meanwhile, are leveraging machine learning to build cashierless, data-rich shopping experiences in physical stores. Tel Aviv-based Trigo, like AI Guardsman, taps a network of cameras to track customers through aisles, automatically tabulate their bills, and surface coupons and other engagement opportunities. Standard Cognition — which opened a location in San Francisco last month — offers its retail partners and the customers who shop with them a comparable AI-driven solution, as does Zippin, which also debuted a checkout-free store in the Bay Area recently.

Amazon’s the elephant in the room. Its Amazon Go store chain employs sensors, AI, and smartphones to streamline retail flows, and it’s reportedly bent on nationwide expansion: Bloomberg reported in September that it plans to open as many as 3,000 locations by 2021, up from the four operating today. Even Microsoft’s said to be working on cashierless store technology.

AI-first strategies

Both Tencent and Intel see AI as a key part of their respective growth strategies. Tencent’s AI funding arm is one of the largest of its kind; the company has poured more capital into startups and AI chips than its biggest Chinese rivals, Baidu and Alibaba. One of its largest single investments is in robotics startup UBTech, which aims to develop a humanoid robot capable of walking downstairs and autonomously navigating unfamiliar environments. In 2017, Tencent opened an AI research lab in Bellevue, Washington, led by Dr. Dong Yu, a former Microsoft engineer and pioneer in speech recognition tech. (The company’s other AI lab is based in Shenzhen.) And its YouTu Lab, which recently open-sourced some of its developer tools, is working with customers like China Unicom and WeBank on facial ID authentication.

For Intel’s part, partnerships with OEMs like Tencent are a step toward its ambitious goal of capturing the $200 billion AI market. In August, it bought Vertex.ai, a startup developing a platform-agnostic AI model suite, for an undisclosed amount. Meanwhile, the chipmaker’s acquisition of Altera brought field-programmable gate arrays (integrated, reconfigurable circuits) into its product lineup, and its purchases of Movidius and Nervana bolstered its real-time processing portfolio. Of note, Nervana’s neural network processor, which is expected to begin production in late 2019, can reportedly deliver up to 10 times the AI training performance of competing graphics cards.

“After 50 years, this is the biggest opportunity for the company,” Navin Shenoy, executive vice president at Intel, said at the company’s Data Centric Innovation Summit this year. “We have 20 percent of this market today … Our strategy is to drive a new era of data center technology.”

Microsoft today announced it is embracing Chromium for Edge browser development on the desktop. The news includes plenty of exciting changes, including the decoupling of Edge from Windows 10, more frequent updates, and support for Chrome extensions. But we also wanted to find out what other major browser makers think of the news. Google largely sees Microsoft’s decision as a good thing, which is not exactly a surprise given that the company created the Chromium open source project. “Chrome has been a champion of the open web since inception and we welcome Microsoft to the community of Chromium contributors,” a Google spokesperson told VentureBeat. “We look forward to working with Microsoft and the web standards community to advance the open web, support user choice, and deliver great browsing experiences.” What Google’s statement doesn’t say is the company still isn’t happy with Edge. The Microsoft Store still doesn’t allow non-EdgeHTML browsers, meaning devices running Windows 10 S Mode can’t install Chrome, Firefox, or any third-party browser. Microsoft has yet to say if that will change. Mozilla meanwhile sees Microsoft’s move as further validation that users should switch to Firefox. “This just increases the importance of Mozilla’s role as the only independent choice,” a Mozilla spokesperson told VentureBeat. “We are not going to concede that Google’s implementation of the web is the only option consumers should have. That’s why we built Firefox in the first place and why we will always fight for a truly open web.” Mozilla regularly points out it develops the only independent browser — meaning it’s not tied to a tech company that has priorities which often don’t align with the web. Apple (Safari), Google (Chrome), and Microsoft (Edge) all have their own corporate interests.

A Chromium-based Edge means a lot for the few users that actively use Edge, but much more interesting will be the impact on the broader web. Chrome dominates already — will this only cement its place or will the competition heat up? We also contacted Apple and Opera and will update this story if we hear back. Update at 12:10 p.m. Pacific: Opera thinks Microsoft is making a smart move, because it did the same thing six years ago. “We noticed that Microsoft seems very much to be following in Opera’s footsteps,” an Opera spokesperson told VentureBeat. “Switching to Chromium is part of a strategy Opera successfully adopted in 2012. This strategy has proved fruitful for Opera, allowing us to focus on bringing unique features to our products. As for the impact on the Chromium ecosystem, we are yet to see how it will turn out, but we hope this will be a positive move for the future of the web.”

Samsung to build 5G and V2X networks for autonomous car tests at South Korea’s K-City

Samsung is collaborating with the Korea Transportation Safety Authority (KOTSA) to develop mobile network infrastructure for autonomous vehicles at the recently opened K-City test facility. K-City, for the uninitiated, is one of a number of “fake cities” that have emerged as test beds for the latest smart city technologies. Google parent company Alphabet last year offered a glimpse into Castle, a key test hub for its driverless car subsidiary, Waymo. Incidentally, Waymo launched its first commercial self-driving car service in Phoenix just yesterday. And Russia opened a tech-focused “town” called Innopolis back in 2012, where Yandex recently kickstarted tests for its autonomous taxis. Against that backdrop, the South Korean government announced K-City back in May, though it only partially opened for business last month.

Largest

Covering 320,000 square meters, K-City is being touted by Korea as the world’s largest dedicated facility for testing self-driving cars, built to replicate all manner of real-world scenarios, including bus lanes, bike lanes, highways, built-up urban areas, parking bays, and more. Situated about an hour’s drive south of Seoul in the city of Hwaseong, K-City cost around $10 million to build.

5G … or not 5G

For those who haven’t followed the latest developments in 5G, the fifth generation of mobile communications, it represents much more than crazy download speeds on your smartphone — though that will be one benefit. Effectively, 5G will be the biggest enabler of artificial intelligence (AI) and many other technologies across the smart city and autonomous vehicle spectrum. Samsung is one of the technology companies at the forefront of the 5G push, and it recently set aside $22 billion to plow into a range of transformative technologies, including 5G and AI. That Korean juggernaut Samsung has been selected to help build the infrastructure underpinning autonomous car tests at K-City should perhaps come as little surprise. “The prominence of autonomous vehicles and connected cars is growing rapidly in the 5G era, and Samsung’s commitment to collaborative innovation in this area is stronger than ever,” said Samsung executive Jaeho Jeon in a press release. The Samsung/KOTSA collaboration isn’t just about 5G, however — it will cover 4G LTE, vehicle-to-everything (V2X) communication systems, and related hardware infrastructure. “By building various telecommunication networks — including 5G, 4G, and V2X — in one place, K-City will provide real-world experiences of autonomous driving for people and businesses across the industry,” added KOTSA director Byung Yoon Kwon. “This open environment is expected to be served as a unique innovation lab for industry partners that will ultimately [accelerate] the availability of the autonomous driving era.”

1 billion AR/VR ad impressions: What we’ve learned

Two years ago, Mark Zuckerberg donned a VR headset at Mobile World Congress against a backdrop that read “the next platform.” It sparked fervor and big investment in the VR space. But in the past year, many critics have questioned the viability of the industry, as headset sales underwhelm and buzzy technologies like blockchain and AI debut as the new starlets in town. And yet, as an immersive ad serving technology company working closely with brands, publishers, and producers, we have seen the demand for VR/AR marketing solutions accelerate, even amidst the supposed slump of VR. A year ago we served 100 million VR ad impressions. This year, we’ve served over 1 billion.

As we set out to show how immersive advertising beats traditional digital advertising, we learned a few lessons along the way. Here are our three key learnings from serving 1 billion VR/AR ad impressions.

Prove campaign performance

A year ago, our top goal was identifying how VR could best be applied to brands’ marketing objectives. We learned that brands investing in VR care most about 1) deep audience engagement and 2) audience reach. This still holds true today, but brands are now expecting real ROI from their investment in this medium. A year ago, it was easier to convince brands to try VR as a trendy innovation test. Now, the trial period is over, and a clear explanation of VR’s contribution to meeting (and exceeding) marketing objectives is needed for brands to continue to invest. It is important not only to sell VR, but to sell solutions that meet customers’ business objectives.

It is easy to say that VR performs, but it is an entirely different equation when it comes to proving it. Advertising technology for digital media has become a precise science over the past 20 years. As such, brands’ expectations regarding the accuracy of what is reported are extremely high (and rightfully so). It is not enough to simply create a piece of VR content. Brands need to know how many people saw, engaged with, watched, and ultimately converted as a result of their investment in this content. This is how brands and their agencies currently operate. As an industry, we will scale faster if we can fit into our clients’ existing campaign operations. This means that as a provider that serves VR ads, we not only report on viewability, completion rate, and engagement rate, but we also ensure we can reliably compare how a 360-degree VR ad performs against what our clients are currently running in 2D formats, using standard metrics. To prove the uplift of VR ads over 2D in an accurate and client-friendly way, we take the following approach:

  • Run both existing 2D creative and new 360-degree VR experiences to compare the performance across the same placement
  • Use standard ad tracking tools to measure the uplift
  • To validate the data further, install third-party tracking pixels from companies like DoubleClick, MOAT, and DoubleVerify (a minimal sketch of the uplift calculation follows this list)
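
To make the comparison concrete, here is a minimal sketch of the uplift calculation between a 2D control creative and a 360-degree test creative served on the same placement. The impression and click counts are invented for illustration; in practice they would come from the ad server and the third-party trackers listed above.

```python
# Hypothetical uplift calculation: a 2D creative (control) vs. a 360-degree
# VR creative (test) on the same placement. Counts below are made up.

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate as a fraction."""
    return clicks / impressions if impressions else 0.0

def uplift(test_ctr: float, control_ctr: float) -> float:
    """Relative uplift of the test creative over the control, in percent."""
    return (test_ctr - control_ctr) / control_ctr * 100 if control_ctr else float("inf")

flat_2d = {"impressions": 500_000, "clicks": 1_500}   # control creative
vr_360 = {"impressions": 500_000, "clicks": 2_700}    # 360-degree creative

control = ctr(flat_2d["clicks"], flat_2d["impressions"])
test = ctr(vr_360["clicks"], vr_360["impressions"])
print(f"2D CTR: {control:.2%}, 360 CTR: {test:.2%}, uplift: {uplift(test, control):.0f}%")
```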

We do this for all of our campaigns. And the results are clear: 360-degree VR ads outperform 2D. To further understand these results, we ran ads with different fields of view: 90 degrees, 180 degrees, and 360 degrees. And we noticed something interesting. As the field of view broadened, CTRs and engagement time increased. This means that the bigger the content sphere, the more engaging (and immersive) the content is. With this, we clearly and indisputably meet brand objective No. 1: deepen audience engagement.

We also tested performance across different verticals and media segments to see if we could replicate these results. We found that this uplift in performance is consistent across all industries and use cases. In all industries, the same client and campaign will see greater performance from immersive ads compared to the 2D creative they typically run. And this is why our business in particular has seen repeat business. Customers like Universal Pictures, Travel Nevada, Clorox, Cathay Pacific, The Home Depot, and Disney Broadway have launched multiple VR ad campaigns. These are companies that not only want creative innovation but also require performance results to continue investing in this medium.

We’re seeing a shift coming, but it will take some work to guarantee it. Eighty-eight billion dollars each year goes into digital advertising; 99 percent of this spend is going into 2D experiences, even when the data clearly shows that consumers are more engaged with 3D content. It’s the VR industry’s job to prove and communicate this.

Deliver experiences that scale across platforms

In addition to performance and deepening audience engagement, we can’t forget about ensuring reliability in our technology solution. Remember the days when Internet Explorer, Firefox, and Chrome each rendered the same website differently? Distributing VR content across all platforms and browsers faces a similar challenge today. It’s like a rewind back to Web 2.0, except worse. Today, the modern web era includes mobile web browsers like Safari, a variety of Android browsers, and embedded web viewers inside mobile native apps. Each environment has its own restrictions, which VR must overcome. No brand wants to deliver a broken experience to their audience because of these restrictions.

We needed to address this fragmentation to make sure we met brand objective No. 2: audience reach. If the audience needs to download an app and/or receives a broken experience when trying to access the VR content, then brands cannot maximize their content’s reach. We found that even YouTube’s 360-degree video player doesn’t work on iPhone (even with the newest Safari and iOS) or on many Android browsers. As such, we tailored our 3D graphics rendering technology to ensure high audience engagement across all browsers, web and mobile. To be successful in bringing innovation to customers, it is critical to have your product work seamlessly where audiences are today.

Show, don’t tell

In any exciting new industry, there will be a lot of new players and noise. While anyone can speak to their capabilities, the best way to convince a brand or agency to commit their dollars to your product is a live demo on their devices. Customers need to see it to believe it. We found that talking about the promise of our solution wasn’t enough. This is especially applicable in VR and AR, where most companies are very new and the customer is not yet familiar with their product. This is why every new product needs a live demo before the sales and marketing pitch begins. This may sound like common sense, but if you do a quick review of most VR and AR startups, you will see that more than 90 percent only have marketing material on their landing page, with no live demo or self-service product to experience.

Our first customer in the US, The New York Times, chose to use our platform because they could first try a live demo of our ad solution. After an engineer validated the technology, they reached out with confidence that we were the right partner for their VR advertising needs. Live demos are critical to show, not tell, the magic of immersive content, and we found we had to do this for every ad format available in the industry today. Here are a few examples:

Force Push turns you into a Jedi in VR

Ever wanted to wield the power of a Jedi inside VR? This new system from Virginia Tech researchers lets you do just that. Force Push is a new object manipulation system for VR being worked on at the institution’s College of Engineering. It uses hand-tracking (namely a Leap Motion sensor fitted to the front of the Oculus Rift) to allow users to push, pull, and rotate virtual objects from a distance, just like a Skywalker would. Run Yu, a Ph.D. candidate in the Department of Computer Science, and Professor Doug Bowman have been working on it for some time, as can be seen in the video below. The pair’s research was recently published in a new report.

As the footage shows, objects are moved simply by gesturing in the way you want them to go. Motion toward yourself to bring an item closer, flick your hand up to raise it off the ground and, of course, push your hand outward to have it shoot off into the distance. You can even raise your index finger and make a rotating motion to turn the object around. It’s a pretty cool system, though we’d like to see it working without the repeated gestures. Hand-tracking itself is some ways out from full implementation inside VR headsets, but laying groundwork such as this will help make it a more natural fit if and when it does get here.

“There is still much to learn about object translation via gesture, such as how to find the most effective gesture-to-force mapping in this one case (mapping functions, parameters, gesture features, etc.),” the pair wrote in their report. “We plan to continue searching for improved transfer functions from the gesture features to the physics simulation. Further evaluation of Force Push will focus on more ecologically valid scenarios involving full 3D manipulation.” Now if only we could use this in an actual Star Wars VR game.
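
As an illustration of what a gesture-to-force transfer function might look like, here is a toy sketch in which hand speed from a tracker is mapped to a force applied to a physics object. The gain, exponent, and time step are invented for the example; they are not the mappings the researchers evaluated.

```python
# Toy gesture-to-force transfer function: faster push gestures produce a
# stronger impulse on the virtual object. Constants are arbitrary
# illustration values, not those from the Force Push research.

GAIN = 4.0
EXPONENT = 1.5   # super-linear so deliberate flicks feel much stronger than drift

def gesture_to_force(hand_velocity_mps: float) -> float:
    """Map hand speed (m/s) to a force magnitude (N) for the physics engine."""
    return GAIN * (abs(hand_velocity_mps) ** EXPONENT)

def apply_push(mass_kg: float, hand_velocity_mps: float, dt: float = 0.016) -> float:
    """Return the object's change in speed after one 60 Hz physics step."""
    force = gesture_to_force(hand_velocity_mps)
    acceleration = force / mass_kg
    return acceleration * dt

if __name__ == "__main__":
    for speed in (0.2, 0.8, 2.0):   # slow drift, casual push, hard flick
        print(f"hand speed {speed} m/s -> delta-v {apply_push(1.0, speed):.3f} m/s")
```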

Vehicle telematics data could unlock $1.5 trillion in future revenue for automakers

Vehicle telematics, the method of monitoring a moving asset like a car, truck, heavy equipment, or ship with GPS and onboard diagnostics, produces an extraordinarily large and fast-moving stream of data that did not exist even a few years ago. And now, the vehicle telematics data hose has been turned to full blast. By 2025, there will be 116 million connected cars in the U.S. — and according to one estimate by Hitachi, each of those connected cars will upload 25 gigabytes of data to the cloud per hour. If you do the math, that’s 219 terabytes per car each year, and by 2025, it works out to roughly 25 billion terabytes of total connected car data annually. It’s a tsunami of data, and it’s about to transform the transportation industry, says Grant Halloran, chief marketing officer at OmniSci.
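
A quick back-of-the-envelope check shows how those figures fit together, under the implicit assumption that every connected car uploads continuously, around the clock.

```python
# Back-of-the-envelope check of the connected-car data figures quoted above,
# assuming each car uploads continuously, 24 hours a day.

GB_PER_HOUR = 25
HOURS_PER_YEAR = 24 * 365                     # 8,760
CARS_IN_2025 = 116_000_000

tb_per_car_per_year = GB_PER_HOUR * HOURS_PER_YEAR / 1_000      # ~219 TB per car
total_tb_per_year = tb_per_car_per_year * CARS_IN_2025          # ~25.4 billion TB

print(f"per car: ~{tb_per_car_per_year:.0f} TB/year")
print(f"all 116M cars: ~{total_tb_per_year / 1e9:.1f} billion TB/year")
```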

An entirely new transportation industry

For auto manufacturers, revenue used to come almost exclusively from one-time vehicle sales and trailing maintenance. But as populations become more urban and traffic congestion becomes a bigger problem, demand for new cars comes under downward pressure (and margins on one-time car sales shrink). “There are these irreversible trends going on in the marketplace, like ride sharing, better (and new forms of) public transport and increasing urbanization, which cause people to be less and less likely over time to buy their own car,” Halloran says. “The automakers are saying, we have this hub of data we control, but how are we going to monetize it?”

The data that connected cars and autonomous vehicles produce open up entirely new revenue streams that the automaker can control (and share with partners in other sectors). According to McKinsey, monetizing onboard services could create $1.5 trillion – or 30 percent more – in additional revenue potential by 2030, which will more than offset any decline in car sales. And this data on how a driver and vehicle interact can also give automotive manufacturers, logistics companies, fleet managers, and insurance companies valuable information on how to make transportation safer, more efficient, and more enjoyable — but only if they can handle these huge new streams of data and analyze them to extract insights.

What is vehicle telematics?

Vehicle telematics is a method of monitoring and harvesting data from any moving asset, like a car, truck, heavy equipment, or ship, by using GPS and onboard diagnostics to record movements and vehicle condition at points in time. That data is then transmitted to a central location for aggregation and analysis, typically on a digital map. Telematics can measure location, time, and velocity; safety metrics such as excessive speed, sudden braking, rapid lane changes, or stopping in an unsafe location, as well as maintenance requirements; and in-vehicle consumption of entertainment content. “For example, we have a major automaker doing analysis of driver behavior for improvements to vehicle design and potentially, value-added, in-car information services to the driver,” Halloran says.

Traditional analytics systems are unable to handle that extreme volume and velocity of telematics data, and they don’t have the ability to query and visualize it within the context of location and time data, also known as spatiotemporal data. Next-generation analytics tools like OmniSci enable analysts to visually interact with telematics data at the speed of curiosity.
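
For illustration, a single telematics record carrying the kinds of measures listed above might look like the following sketch. The field names and the speeding rule are assumptions for the example, not an industry-standard schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative telematics record with the kinds of measures described above:
# position, time, speed, and simple safety events. A sketch, not a standard.

@dataclass
class TelematicsPing:
    vehicle_id: str
    timestamp: datetime
    latitude: float
    longitude: float
    speed_kph: float
    harsh_braking: bool = False
    rapid_lane_change: bool = False

def flag_speeding(ping: TelematicsPing, limit_kph: float = 100.0) -> bool:
    """Simple safety rule: mark pings that exceed an assumed speed limit."""
    return ping.speed_kph > limit_kph

ping = TelematicsPing("truck-42", datetime(2018, 12, 7, 14, 30), 42.36, -71.06, 118.0)
print(flag_speeding(ping))   # True
```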

The challenges of extracting insights from telematics data

The insights are there; the discovery is the difficult part, as is usual in data analytics. But vehicle telematics poses some unique obstacles that industry leaders are scrambling to tackle. The data challenges are enormous. Mainstream analytics platforms can’t handle the volume of the data generated, or ingest data quickly enough for use cases like real-time driver alerts about weather and road conditions. And very few mainstream platforms can manage spatiotemporal data. Those that do slow to a crawl at a few hundred thousand records, a minuscule volume compared to what connected cars are already generating.

Data wrangling has also become a stumbling block. Automakers have already built dedicated pipelines for known data streams, primarily from in-car data generation. But this requires large footprints of hardware, and as new data sources arise, they are very difficult to ingest and join with existing data sources. IT departments spend a lot of low-value time and money just wrangling data so that they can try to analyze it.

Tackling the challenges

Because telematics data is so variable and contextual, it is essential that humans explore those big data streams, Halloran says. For vehicle telematics analysis, you need to be able to query billions of records and return results in milliseconds, and also load data far more quickly than legacy analysis tools can, particularly for streaming and high-ingest-rate scenarios. You need to tackle spatiotemporal data with hyper-speed, as you calculate distances between billions of points, lines, or polygons, or associate a vehicle’s location at a point in time with millions of geometric polygons, which could represent counties, census tracts, or building footprints.

Vehicle telematics data, like other forms of IoT data, is a valuable resource for data scientists who want to build machine learning (ML) models to improve autonomous-driving software and hardware and predict maintenance issues. Machine learning is often presented as conflicting with ad hoc data analysis by humans. Not so, says Halloran. Exploratory data analysis (or EDA) is a necessary step in the process of building ML models. Data scientists need to visually explore data to identify the best data features to train their models, or combine existing features to create new ones, in a process called feature engineering. Again, this requires new analytics technology to be done at scale.

Transparency is also essential with machine learning, especially in regulated industries like automotive and transport, Halloran adds. When models are in production, making autonomous recommendations, data scientists need to explain their black-box models to their internal business sponsors and potentially to regulators. Business leaders are reluctant to allow machine learning models to make important decisions if they can’t understand why those decisions are made. “ML models can’t be fired. Human decision-makers can,” notes Halloran. An intuitive, interactive visualization of the data in the model allows data scientists to show others what the model “sees in the data” and more easily explain its decisions, allowing decision-makers to be confident that machine-driven predictive decisions will not breach laws. “One of our automotive customers calls this ‘unmasking the black box,’” says Halloran.
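
The point-in-polygon association described above can be sketched in a few lines with the shapely library. The coordinates and tract shapes below are invented; a production system such as OmniSci would run this kind of join GPU-accelerated over billions of records rather than in a Python loop.

```python
from typing import Optional
from shapely.geometry import Point, Polygon

# Minimal sketch of associating a vehicle position with a geographic polygon
# (e.g. a census tract). Shapes and coordinates are made up for illustration.

census_tracts = {
    "tract-101": Polygon([(-71.07, 42.35), (-71.05, 42.35), (-71.05, 42.37), (-71.07, 42.37)]),
    "tract-102": Polygon([(-71.05, 42.35), (-71.03, 42.35), (-71.03, 42.37), (-71.05, 42.37)]),
}

def locate(lon: float, lat: float) -> Optional[str]:
    """Return the name of the tract containing the point, if any."""
    position = Point(lon, lat)
    for name, shape in census_tracts.items():
        if shape.contains(position):
            return name
    return None

print(locate(-71.06, 42.36))   # tract-101
```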

Point of no return: the impact on other industries

Automotive and mobility are generalizing into a much broader set of solutions that crosses a lot of traditional industry segments. It’s not just automakers now that are doing mobility. Telecommunications companies are helping transmit data or delivering infotainment into a car. Civic authorities want to look at this data to figure out which roads they should repair and how they can improve mass transit. Retailers want to advertise to people in the car or provide a high-end concierge experience as buyers travel to shopping destinations.

“For the future, if the automakers do claim ownership of the primary source of mobility data, they will build partnerships across traditional barriers that have divided industries,” Halloran says. “That provides new opportunities for cooperation, and also new opportunities for competition. One of the best ways to come out ahead in that new landscape is to understand what the data tells them, so that they can go into the relationships that are going to be the most profitable for them with that telematics data.”

StarVR puts developer program ‘on hold’ as financial woes roil Starbreeze

Less than a month after StarVR started accepting applications for its $3,200 developer kit program, the company has confirmed to UploadVR that it’s putting the process “on hold.” Last month, StarVR stated that its first production units for StarVR One were ready. Developers could apply to purchase the headset, which featured a 210-degree horizontal by 130-degree vertical field of view, dual AMOLED panels, integrated eye-tracking, and SteamVR 2.0 tracking (though no SteamVR base stations to actually track the device). Thursday, we also reported on StarVR’s claims that its headset would be the first to support the new VirtualLink standard.

But trouble was brewing surrounding the announcement. Ahead of the launch, StarVR announced that it was delisting from the Taipei Exchange Emerging Markets board, citing the current state of the VR industry as one reason. Then, earlier this week, we learned that headset creator Starbreeze, which now owns around a third of StarVR (the other two-thirds belonging to Acer), had filed for reconstruction with the Stockholm District Court. Its offices were raided this week, leading to one arrest linked to insider trading.

Today a StarVR spokesperson provided UploadVR with the following statement: “We believe it is the most responsible course of action to put the StarVR Developer Program on hold while there are uncertainties with our key overseas shareholder, and also while our company is in the process of going private, which may entail some changes to our operations.” The same message has been sent to anyone who had enrolled in the program thus far. The statement certainly seems to refer to Starbreeze’s current difficulties. It’s uncertain what this means for the future of the VR headset, which had been designed for location-based and enterprise experiences. One thing is likely: developers will have to wait at least a little longer to get their hands on the hardware, if it does indeed ever reach their doorsteps.

VR veterans found Artie augmented reality avatar company

The migration of virtual reality veterans to augmented reality continues. A new AR startup dubbed Artie is coming out of stealth mode today in Los Angeles with the aim of giving you artificial intelligence companions in your own home. Armando Kirwin and Ryan Horrigan started the company to use artificial intelligence and augmented reality to build “emotionally intelligent avatars” as virtual companions for people. Those avatars would be visible anywhere that you can take your smartphone or AR gear, Horrigan said in an interview.

The startup has backing from a variety of investors, including YouTube cofounder Chad Hurley, Founders Fund, DCG, and others. But Kirwin said the company isn’t disclosing the amount of the investment yet. The company’s software will enable content creators to bring virtual characters to life with its proprietary Wonderfriend Engine, which makes it easy to create avatar-to-consumer interactions that are lifelike and highly engaging. Kirwin said the company is working with major entertainment companies to get access to familiar characters from famous brands.

“Our ambition is to unlock the world of intellectual property you are already familiar with,” said Kirwin in an interview with VentureBeat. “You can bring them into your home and have compelling experiences with them.” The company hopes to announce some relationships in the first quarter, Kirwin said.

Once created, the avatars exist on an AR network where they can interact and converse with consumers and each other. It reminds me of Magic Leap’s Mica digital human demo, but so far Artie isn’t showing anything quite as fancy. “The avatar will use AI to figure out whether you are happy or sad, and that would guide it in terms of the response it should have,” Kirwin said. “Some developers could use this to create photoreal avatars or animated characters.” Artie is also working on Instant Avatar technology to make its avatars shareable via standard hyperlinks, allowing them to be discovered on social media and other popular content platforms (e.g., in the bio of a celebrity’s Instagram account, or in the description of a movie trailer on YouTube).

Horrigan said that the team has 10 people, and it is hiring people with skills in AI, AR, and computer vision. One of the goals is to create avatars that are more believable because they can be inserted in the real world in places like your own home. The team has been working for more than a year. “Your avatar can be ready, so you don’t have to talk to it to activate it,” Kirwin said. “It’s always on, and it’s really fast, even though it is cloud based. We can recognize seven emotional states so far, and 80 different common objects. That’s where the technology stands today.”

Horrigan was previously chief content officer of the Comcast-backed immersive entertainment startup Felix & Paul Studios, where he oversaw content and business development, strategy, and partnerships. He and his team at Felix & Paul forged numerous partnerships with Fortune 500 companies and media conglomerates including Facebook, Google, Magic Leap, Samsung, Xiaomi, Fox, and Comcast, and worked on projects with top brands and A-list talent such as NASA and Cirque du Soleil. One of Felix & Paul’s big projects was a virtual reality tour of the White House with the Obamas. That project, The People’s House, won an Emmy Award for VR, as it captured the White House as the Obama family left it behind.
Prior to Felix & Paul, Horrigan was a movie studio executive at Fox/New Regency, where he oversaw feature film projects including the Academy Award Best Picture winner 12 Years a Slave. He began his career in the motion picture department at CAA and at Paramount Pictures. Horrigan has given numerous talks, including at TED, Cannes, Facebook, Google, Sundance, SXSW, and throughout China. He holds a bachelor’s degree in film studies and lives in Los Angeles, California.

Kirwin has focused on VR and AR in both Hollywood and Silicon Valley. He has helped create more than 20 notable projects for some of the biggest companies in the world. These projects have gone on to win four Emmy nominations and seven Webby nominations. Prior to co-founding Artie, Kirwin helped create the first 4K streaming video on demand service, Odemax – which was later acquired by Red Digital Cinema. He was later recruited by Chad Hurley, cofounder and ex-CEO of YouTube, to join his private technology incubator in Silicon Valley. Prior to his career in immersive entertainment, Kirwin worked on more than 50 projects, predominantly feature films, including “The Book of Eli,” the first major motion picture shot in digital 4K. He also acted as a consultant to the vice president of physical production at Paramount Pictures.

Other investors include Cyan Banister (investing personally), The Venture Reality Fund, WndrCo, M Ventures, Metaverse Ventures, and Ubiquity6 CEO Anjney Midha. Artie has already cemented partnerships with Google and Verizon for early experiments with its technology and is beginning to onboard major media companies, celebrities, influencers, and an emerging class of avatar-based entertainment creators.

Kaggle users can now create Google Data Studio dashboards

Kaggle, a Google-owned community for AI researchers and developers that offers tools which help to find, build, and publish datasets and models, is integrating with Google’s Data Studio. The Mountain View company announced the news in a blog post timed to coincide with the NeurIPS 2018 conference in Montreal this week. Starting this week, users can connect to and visualize Kaggle datasets directly from Data Studio using Kaggle’s Community Connector tool. It’s as simple as browsing for a dataset within Kaggle, picking a file, launching Data Studio with the selected file, and creating an interactive dashboard with Data Studio’s built-in tools. From that point, the dashboard can be published and embedded in a website or blog. Google is also making available the connector code for the integration in open source in the Data Studio Open Source Repository, which it says will help Data Studio developers and Kaggle users to build “newer and better solutions.” “[With] this new integration, users can analyze these datasets in Kaggle; and then visualize findings and publish their data stories using Data Studio,” Minhaz Kazi, a developer advocate at Google, and Megan Risdal, product lead at Kaggle Datasets, wrote in a blog post. “Since there is no cost to use Data Studio and the infrastructure is handled by Google, users don’t have to worry about scalability, even if millions of people view the dashboard … The hassle-free publishing process means everyone can tell engaging stories, open up dashboards for others to interact with, and make better-informed decisions.” The integration comes a little over a year after Google’s acquisition of Kaggle, which was announced in March at the Cloud Next 2017 conference in San Francisco. Google claims that it’s the world’s largest online community of data scientists, with over two million users (up from 1 million in June 2017) and over 10,000 public datasets. Users compete against each other in competitions, testing techniques on real-world tasks for prize pools.

Google Cloud says Security Command Center beta is live with expanded risk-monitoring tools

Google Cloud today announced availability of its Cloud Security Command Center beta — with a series of new features designed to more quickly identify vulnerabilities and limit damage from threats or attacks. Cloud SCC offers a centralized view that gives users a clear picture of all their cloud assets, according to a blog post by Andy Chang, senior product manager for Google Cloud. “If you’re building applications or deploying infrastructure in the cloud, you need a central place to unify asset, vulnerability, and threat data in their business context to help understand your security posture and act on changes,” he wrote. The Cloud SCC was released in alpha last March with the goal of giving more users across an organization a clear view of security issues. The beta version adds:

  • Expanded coverage across GCP services such as Cloud Datastore, Cloud DNS, Cloud Load Balancing, Cloud Spanner, Container Registry, Kubernetes Engine, and Virtual Private Cloud
  • Expanded administrator roles
  • A wider range of notifications
  • Better searching of current and historic assets
  • More client libraries

Deep learning Slack bot Meeshkan wins Slush 100 startup competition

Meeshkan, a company whose Slack bot helps engineers monitor and train machine learning models without leaving the team chat app, has been named winner of the Slush 100 startup competition. Meeshkan works with popular frameworks like PyTorch and TensorFlow and is optimized for deep learning workflows. The competition took place today in Helsinki at Slush, one of the largest annual tech conferences held in northern Europe.

“With our interactive machine learning product, out of the box and for free you get monitoring of all your machine learning jobs on Slack,” Meeshkan CEO Mike Solomon said during his pitch. “On top of that, you’re able to schedule as many jobs as you want right from Slack, pause long setting jobs that are executing, tweak parameters for the job that’s executing, fork a job just like you fork a repo on GitHub, and under the hood it will automatically spin up, provision a server, and send the job off and running.”

Meeshkan competed in the semifinals of Slush 100 against Aerones, a heavy-lift drone company that cleans wind turbines and wants to fight fires with drones, and Lifemote Networks, a SaaS service for internet service providers that uses AI for predictive Wi-Fi troubleshooting. More than 1,000 applications from 60 countries around the world were received for the startup competition, according to organizers. By supplying a tool that lets engineers and data scientists train models without the expertise a machine learning engineer would need to train and deploy a model from scratch, Meeshkan intends to help companies address the widespread shortage of data scientists. In a PricewaterhouseCoopers study released earlier this year, only 4 percent of business executives said their company has successfully implemented AI in their products or services, but that’s expected to change in the years ahead.

The Slush tech conference was attended by 20,000 people. Among them: more than 1,000 investors and more than 3,000 startups. By category, the largest group of startups in attendance self-identified as AI, big data, or machine learning companies.
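
Meeshkan hasn’t published the code behind its bot, but the general pattern of reporting a training job’s status into Slack can be sketched with an incoming webhook, as below. The webhook URL, job name, and metric values are placeholders for illustration, not Meeshkan’s implementation.

```python
import json
import urllib.request

# Sketch of posting training progress to Slack via an incoming webhook.
# Illustrative only: the webhook URL is a placeholder and metrics are invented.

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify_slack(job_name: str, epoch: int, loss: float) -> None:
    """Post a one-line status update for a training job to a Slack channel."""
    payload = {"text": f"{job_name}: epoch {epoch} finished, loss={loss:.4f}"}
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)   # fire-and-forget; add error handling in practice

# Example: call from a training loop after each epoch.
# notify_slack("resnet50-finetune", epoch=3, loss=0.2174)
```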

Hire by Google’s candidate discovery tool exits beta

Hire by Google, the hiring dashboard that’s part of Google’s enterprise-focused G Suite platform, launched a little over a year ago in June. Since then, it’s gained a feature — candidate discovery — that surfaces appropriate candidates for new gigs at a company, along with a veritable suite of AI-powered calendar scheduling, resume review, and phone call tools. Today, candidate discovery, which rolled out to select customers in beta earlier this year, is becoming generally available to all G Suite customers who pay for Hire.

Coinciding with candidate discovery’s wider launch, Google is debuting a new capability that product manager Omar Fernandez says was informed by Hire’s beta testers. It allows customers to screen resumes with smart keyword highlighting based on their search criteria, and to re-engage qualified candidates in bulk. “Throughout the beta period, we listened to customer feedback, and as a result [we introduced this] new feature in candidate discovery,” he wrote in a blog post. “Since the … release of candidate discovery … we’ve heard from many customers how it’s helped them quickly fill open roles at their companies … [One company] was able to fill one of its roles in 24 hours (the average time to hire is four weeks).”

Two of those customers are OpenLogix, a global technology service firm, and Titmouse, an animation studio. OpenLogix uses candidate discovery to search a database of 30,000 prior candidates and create a prioritized list based on how well each candidate’s profile matches the title, job description, and location. Meanwhile, Titmouse taps it to manage thousands of applications submitted through the company’s careers page. Fernandez noted that candidate discovery is powered by Google’s Cloud Talent Solution (formerly Cloud Job Discovery), a development platform for job search workloads that factors in desired commute time, mode of transit, and other preferences in matching employers with job seekers. It also drives automated job alerts and saved search alerts. According to Google, CareerBuilder, which uses Cloud Talent Solution, saw a 15 percent lift in users who view jobs sent through alerts and a 41 percent increase in “expression of interest” actions from those users.

Hire by Google, for the uninitiated, is a full-stack recruitment tool that lets hiring managers sift through job listings, interview and screen candidates, solicit applications, and more. It natively integrates with Gmail, Google Calendar, and Sheets, automatically filling in details such as contact information in invites and recording data captured across interviews. Moreover, thanks to artificial intelligence (AI), it’s able to recommend appropriate time slots for meetings and interviews, analyze key terms in job descriptions, and highlight candidates’ phone numbers and log calls. News of candidate discovery’s general availability follows on the heels of Google’s job search feature for military veterans, which launched in August. It aims to make it easier for service members to find civilian jobs that align with their occupation, in part by finding jobs in their area that require skills similar to those used in their military role. Companies that use Cloud Talent Solution can implement the job search feature on their own career sites.

6 things a first-time CEO needs to know

CEO turnover is on the rise across corporate America. The number of top executives who have left their jobs in 2018 has reached a 10-year high, according to outplacement firm Challenger, Gray & Christmas.

Reasons for the trend vary, the firm says, from natural movement in a tight labor market, to economic uncertainty, to a desire by companies, in light of #MeToo, “to let go of leaders that do not fit their culture or otherwise act unethically.”

Whatever the causes, the high turnover has an interesting side effect: more opportunities for executives seeking their very first CEO gigs. These vice presidents, general managers, chief financial officers, et al, typically have spent years working toward a shot at the top spot. The current direction toward fresh blood in the corner office means more can get there.

And yet most have no idea about the challenges they’ll encounter.

It’s a subject close to my heart. After several years in leadership roles at Salesforce and Oracle, I became a first-time CEO at a smaller tech company in fall 2015. I left at the beginning of 2018 as part of a corporate restructuring that would have required me and my family to relocate. Five months later, I landed my second CEO post, at another tech firm.

The first realization that smacked me in the face as a rookie: It’s a really hard job. I had naively thought being CEO would be only incrementally more difficult than other positions I’d held. Running a business unit within a company carries a lot of responsibility, right?

Yes, but it’s not even close to the same. As CEO, the buck stops here on company success (or lack thereof), culture, brand satisfaction, product quality, funding, communication with the board of directors, and a host of other priorities that don’t really sink in until you’re in the big chair. It’s all a huge thrill, but it also requires a massive adjustment in how the freshly minted CEO thinks and acts.

I reflect frequently on the lessons I learned as a first-time CEO and how I can apply them at my new company. Here are six of the most important ones, in no particular order.

1. You’re under a microscope. Get used to it. On my third day as a CEO, I visited one of our offices. A manager in one of the groups had emailed me that he’d love to get together over coffee and share some ideas. He just happened to get to me before anyone else. I replied, “Sure, let’s do it.”

Within an hour, everyone in the office of 130 people knew I had gone out for coffee with this guy as one of my first acts as CEO. It was seen as a huge stamp of approval for him and his ideas. I thought I was just meeting a new employee.

The lesson: As a new CEO, everything you do or say, big or small, is magnified. Everyone in the company notices everything. A successful CEO recognizes this and uses their bully pulpit wisely, deliberately, and selectively.

2. It can be lonely. For the first time in their career, a new CEO doesn’t get to commiserate with other people in the company. Because of the microscope effect, if the CEO says something to one person, they must assume they’re saying it to everyone. They can’t, for instance, grab a beer with the head of products and complain about the head of sales. Inevitably, the comments will get out, and the sales leader will never recover. This reality is a hard adjustment for some new CEOs.

The lesson: A CEO will often feel like Tony Soprano. To paraphrase the mob boss from the hit HBO show: “You’ve got no idea what it’s like to be Number One. Every decision you make affects every facet of every other thing … And in the end you’re completely alone with it all.”

3. You have to walk the walk, not just talk the talk on transparency. My first CEO gig was with a company experiencing business challenges. I kept repeating the message to employees that we were going to turn this thing around. I’m not sure, however, that I was transparent enough about the facts of the situation and specifically why I felt some areas were broken. I could have built more trust by sharing more.

At my new company, I put up a dashboard at our monthly all-hands meeting with all 260 employees. It’s the same one I use with the executive team and the board. It has all the numbers and facts that show how the company is performing and how we need to perform.

The lesson: The first step in a transparent culture is, well, transparency. It’s the CEO’s job to make sure employees are given sufficient information so they’re not just along for the ride but truly are part of the ride.

4. You’re going to be busier than you’ve ever been. Deal with it. The amount of time CEOs spend watching over the business, meeting with investors and potential ones, talking with the media, etc., can be staggering. A first-time CEO must learn quickly that they can’t allow the calendar crush to steal their approachability. “I’m so busy” are words that can never slip from a CEO’s lips.

Instead, they find ways to maximize their time. For example, I hold all-hands meetings with an open Q&A. I’ll spend a morning with, say, the customer support team as they work, to see their challenges first hand and to listen. I’ve even been known to sit in the break room with my laptop, chatting up everyone who walks in. I also send a weekly email highlighting what I feel are top accomplishments, activities, and priorities.

The lesson: Most CEOs are outgoing and approachable by nature, but if they’re not careful, the demands of being CEO can make them significantly less approachable. They can’t allow themselves to fall into that trap.

5. Balance strategy and execution. While it’s the CEO’s job to set a strategic direction for the company, he or she can’t lose sight of execution. One of the first things I did in my second CEO gig was establish a team of 30 people – not just my direct reports but other leaders responsible for execution – and we meet twice a month to examine business operations and make sure execution is aligned with strategy.

The lesson: CEOs must consistently insert themselves into the execution conversation.

6. Set the right example for work-life balance. At 4 PM every Tuesday, I leave work to coach my kids’ youth hockey team. It’s an inconvenient time for a CEO to step away, but my family is my No. 1 priority. I explain to employees why I do this, how technology allows me to time shift my work to later at night, and that they are entitled to the same flexibility.

The lesson isn’t “work hard, play hard,” which sounds like a fraternity-type culture. It’s reinforcing the message that employees don’t need to be sitting on email from 4 to 5 if their kid has soccer practice. Go be with your kids and catch up on work later in the day. What’s good for the CEO is good for everyone else.

According to LinkedIn, 12,000 of its members share the title of CEO at companies with more than 50 employees. CEO is an exclusive club, and it can be daunting for those entering it for the first time. But by knowing what they’re getting into, and exercising the strong judgment and careful thought that got them to the top spot in the first place, rookie CEOs can knock it out of the park.

AI is moving B2B tech from consumerization to humanization

You make a call to a telecom operator over a billing-related query. Or you log on to a shopping site to check the status of your shipment. You are likely to be instantly greeted by an automated voice that will guide you, while a friendly chat box pops up on the shopping site to provide instant solutions. There’s nothing new about this; we’ve been hearing automated voice responses and seeing chatbots on websites and apps for quite some time now. So, what’s the difference? If the automated message on your screen was once “Error 856. Wrong Input,” it has now evolved to something relatable, conversational, and polite, like, “I’m sorry. Could you please repeat your issue?” AI technology is getting groomed, suiting up, and presenting itself with the best, most human foot forward. Technology is becoming increasingly human — and here’s why.

Consumerization of technology: a revolution in customer experience

Gone are the days when B2B software was clunky, frustrating and hard to use. Competition in the B2B space and the demand for instant solutions and quick fixes paved the way for the consumerization of B2B technology — bringing it close on the heels of B2C innovations. The impact of this has been so rapid and distinct that the lines have blurred between business and personal technology. Today, tech innovation in the B2B industry is driven by the determination to give every customer the best possible customer experience — delivering “wow” moments. The result? Customer expectations have changed: People now want to have their problems solved, but they’re not interested in being aware of the tech involved in the solution. They don’t want an overload of potentially irrelevant information tailored for the masses. They want personalized, intuitive, and accessible solutions for their needs. This has naturally led to an increased emphasis on AI. AI meets three critical needs in the customer service landscape today: personalization, contextual intelligence, and immediate responses. Today, AI has become a mainstay in the B2B industry — much like in the B2C space — and is bringing disruption like never before. Business software is addressing changing customer expectations by becoming seamlessly integrated with human customer service — so much so that the technology appears human in behavior.

Humanization and AI: The merger

Humanization is the next logical step toward a better user experience, and AI is rightly riding this wave, with solutions becoming increasingly like human interactions. What does the future look like? The ideal customer experience will seamlessly combine the ingenuity of AI with the empathy of human engagement. This is the Holy Grail that customer experience and product team engineers are working toward. For our part at Freshworks, we recently launched a friendly canine partner, Freddy — our conversational AI omnibot. Freddy learns from the Freshworks records of customer interactions across marketing, sales, and support, and automatically replies to common queries in email, chat, voice calls, and even social media with the appropriate content. This enables sales and support teams to focus on more complex, high-value inquiries. As a SaaS firm, we are increasingly looking to understand and serve not only the immediate and direct customers of our products, but also the end user of each business that engages with our software. Today, our lens has expanded to include the needs of the direct customer as well as the impact our software has on the customer’s customer. And omnibots like Freddy are possibly still only the beginning.

The future of AI: What’s to come?

AI is making life easier for us and our customers, with its gamut of features such as enhanced security, predictive analytics, automation, assistance in deploying code, and more. However, the innovations to look for are the advances in chatbots, voice recognition, and natural language processing in AI, which are gaining momentum due to the increased need for technology to be more human. AI is here to stay, but this does not mean that humans are replaceable. It merely means that an integration is underway. And when this integration becomes seamless, technology will become closer to becoming a part of our identity. We’ll welcome that with open arms. STS Prasad, the Senior VP of Engineering at Freshworks, is a technology entrepreneur, also known for scaling technology organizations and platforms for business growth.

Uber confidentially files for IPO a day after Lyft

(Reuters) — Uber Technologies has filed paperwork for an initial public offering, according to three people with knowledge of the matter, taking a step closer to a key milestone for one of the most closely watched and controversial companies in Silicon Valley. The ride-hailing company filed the confidential paperwork on Thursday, one of the sources said, in lockstep with its smaller U.S. rival, Lyft, which also announced on Thursday it had filed for an IPO. The simultaneous filings extend the protracted battle between Uber and Lyft, which as fierce rivals have often rolled out identical services and matched each other’s prices. Uber is eager to beat Lyft to Wall Street, according to sources familiar with the matter, a sign of the company’s entrenched competitiveness.

Its filing sets the stage for one of the biggest technology listings ever. Uber’s valuation in its most recent private financing was $76 billion, and it could be worth $120 billion in an IPO. Its listing next year would be the largest in what is expected to be a string of public debuts by highly valued Silicon Valley companies, including apartment-renting company Airbnb and workplace messaging firm Slack. Ongoing market volatility, however, could alter companies’ plans.

The IPO will be a test of public market investor tolerance for Uber’s legal and workplace controversies, which embroiled the company for most of last year, and of Chief Executive Dara Khosrowshahi’s progress in turning around the company. Khosrowshahi took over just over a year ago and has repeatedly stated publicly that he would take Uber public in 2019. In August, he hired the company’s first chief financial officer in more than three years.

Together, Uber and Lyft will test public market investor appetite for the ride-hailing business, which emerged less than a decade ago and has proven wildly popular, but also unprofitable. Uber in the third quarter lost $1.07 billion and is struggling with slowing growth, although its gross bookings, at $12.7 billion, reflect the company’s enormous scale. Its revenue for the quarter was $2.95 billion, a 5 percent boost from the previous quarter, while its bookings grew just 6 percent for the quarter.

Uber has raised about $18 billion from an array of investors since 2010, and it now faces a deadline to go public. An investment by SoftBank that closed in January, which gave the Japanese investor a 15 percent stake in Uber, included a provision that requires Uber to file for an IPO by Sept. 30 of next year or risk allowing restrictions on shareholder stock transfers to expire. Uber has not formally chosen underwriting banks, although Morgan Stanley and Goldman Sachs are likely to get the lead roles, sources told Reuters. Lyft hired JPMorgan Chase & Co, Credit Suisse, and Jefferies as underwriters. The Wall Street Journal reported Uber’s filing earlier on Friday.

History of Scandal

Becoming a public company will bring heightened investor scrutiny and exposure to Uber, which suffered a string of scandals while the company was led by co-founder and former CEO Travis Kalanick, who resigned last year. The controversies included allegations of sexual harassment, the obtaining of medical records of a woman raped by an Uber driver in India, a massive data breach, and federal investigations into issues including possible bribes paid to officials and illicit software used to evade regulators. Khosrowshahi and his leadership team have worked to reset the workplace culture and clean up the messes, including settlements with U.S. states over the data breach and with Alphabet’s self-driving car unit, Waymo, which had sued Uber for trade-secrets theft. Uber today is a different company from the one its founders pitched to early investors, a vision that helped it become the most highly valued venture-backed company in the United States. After concessions in China, Russia, and Southeast Asia, where Uber sold its operations to local competitors, and the prospect of another merger in the Middle East, Uber is far from the dominant global ride-hailing service it set out to be. Still, Uber operates in more than 70 countries, while Lyft serves only the U.S. and Canada, although the smaller company is plotting a global expansion. Uber has also added a number of other businesses, which are growing but have yet to show sustainable profits, in a bid to become a one-stop mobility app. Those include freight hauling, food delivery, and electric bike and scooter rentals. Meanwhile, its self-driving car unit is costing the company about $200 million a quarter, according to investors, and the program has retrenched since one of its autonomous cars killed a pedestrian in March.

Google’s Gradient Ventures leads $7 million investment in Wise Systems to automate routing for shippers

Wise Systems, which develops autonomous routing and dispatch software for delivery fleets, has raised $7 million in a series A round of funding led by Gradient Ventures — Google’s AI-focused investment fund. Additional participants in the round include Neoteny, E14 Fund, Trucks Venture Capital, and Fontinalis Partners. Founded out of Cambridge, Massachusetts, in 2014, Wise Systems targets retailers, distributors, shippers, and couriers with software that automates many of the processes involved in getting goods from A to B. This includes scheduling and monitoring routes, with a built-in mechanism that rearranges stops and adjusts routes in real time based on anticipated delays caused by traffic and other factors. It leans heavily on machine learning smarts to monitor data points and improve delivery fleets’ performance over time. Feeding into this is a mobile app for drivers, which lets them log arrival and departure times, capture signatures, and record notes manually, while on the customer-facing end a hub tracks delivery status and real-time arrival schedules. Wise Systems had previously raised around $1 million, and with another $7 million in the bank it plans to double down on its operational growth and invest in R&D for AI-powered delivery management. “Autonomous dispatch and routing is the next-generation technology that logistics professionals need to meet the increasingly complex requirements of the rapidly evolving economy,” noted Wise Systems CEO Chazz Sims. “Our technology positions fleets to meet today’s and tomorrow’s needs, reshaping delivery in the $10 trillion logistics and transportation industry.” Google parent company Alphabet has operated a number of investment funds for some time already, but last July Google itself announced a new venture fund targeting early-stage AI startups. Since its inception, Gradient Ventures has invested in fewer than a dozen startups, including biomedical startup BenchSci.
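
Wise Systems hasn’t published its algorithms, but the behavior described above, re-sequencing stops as delay forecasts change, can be illustrated with a toy dispatcher. The greedy least-slack heuristic and the delay figures below are assumptions for illustration, not the company’s actual approach.

```python
from dataclasses import dataclass
from typing import List

# Toy illustration only; not Wise Systems' routing algorithm.
@dataclass
class Stop:
    name: str
    deadline_min: int         # minutes from now the delivery is due
    base_travel_min: int      # nominal drive time from the current position
    predicted_delay_min: int  # delay forecast from traffic or other signals

def resequence(stops: List[Stop]) -> List[Stop]:
    """Greedy re-sequencing: visit first the stops whose slack
    (deadline minus delay-adjusted travel time) is smallest."""
    def slack(s: Stop) -> int:
        return s.deadline_min - (s.base_travel_min + s.predicted_delay_min)
    return sorted(stops, key=slack)

if __name__ == "__main__":
    route = [
        Stop("Warehouse pickup", deadline_min=120, base_travel_min=20, predicted_delay_min=0),
        Stop("Downtown cafe", deadline_min=45, base_travel_min=15, predicted_delay_min=25),
        Stop("Suburban grocer", deadline_min=90, base_travel_min=30, predicted_delay_min=5),
    ]
    for stop in resequence(route):
        print(stop.name)
```

In a production system the delay forecasts would come from learned models and the re-sequencing would respect vehicle capacities and time windows; the point here is only that a route can be recomputed whenever the predictions change.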

Big money

The delivery and logistics industry is ripe for investment, with a number of big technology companies pushing resources into the sphere. Walmart recently launched Spark Delivery, a crowdsourced pilot delivery program similar to Amazon Flex, while Amazon announced that it is looking to create a network of independent delivery fleets to bolster its existing network of third-party providers. Last year, Uber launched its Uber Freight trucking business as it sought to expand beyond its consumer ride-hailing and food delivery businesses. As it happens, Uber revealed last week that it will begin using machine learning to provide shippers with accurate rates up to two weeks in advance. Meanwhile, Google has also invested in the delivery and logistics realm — in October it joined a $40 million investment in same-day delivery platform Deliv. Given activity elsewhere in the technology industry, it should come as little surprise that Google’s AI-focused venture fund would seek to invest in a company like Wise Systems. “With their applications of machine learning already delivering bottom-line savings and top-line expansion features to customers, we are excited to support Wise Systems’ continued growth,” added Gradient Ventures managing partner Anna Patterson. “Wise’s large customers are redefining their abilities and re-setting expectations of last mile delivery for the on-demand age.”

iOS 12 passes 70% adoption in 77 days while Android Pie remains a mystery

Uptake of Apple’s iOS 12 operating system for iPhones, iPads, and iPod touches continues to grow, a new Apple update revealed today, as a full 70 percent of all iOS devices are now on the latest major release. The milestone was reached December 3, or 77 days after the operating system’s September 17 debut. Once again, Apple has provided two sets of statistics — one for all devices, and a second, higher number of 72 percent adoption for devices “sold in the last four years.” In both cases, the gains came largely at the expense of iOS 11, which lost around 10 percent of its users to iOS 12, while earlier versions of iOS continue to hold onto a less than 10 percent share — a shrinking 9 percent of all devices, versus a stable 7 percent of devices sold in the last four years. Unlike Apple, which has provided continued updates showing the growth of iOS 12, Google’s Android developers page hasn’t been updated since late October, when Android 9 Pie apparently represented under 0.1 percent of the Android userbase. Google released Pie in early August 2018, and most recently put out an updated version just yesterday, though relatively few Android devices appear to be running the release. Android 9’s numbers could go up over the next month or two, however, as new devices are sold through the holidays with the latest OS preinstalled.

Microsoft unveils autoscaling for Azure Kubernetes Service and GPU support in Container Instances

At Microsoft Connect(); 2018 today, Microsoft launched a bevy of updates across its cloud, artificial intelligence (AI), and development product suites. That’s no exaggeration — the company made its Azure Machine Learning service generally available, enhanced several of its core Azure IoT Edge offerings, and debuted Cloud Native Application Bundles alongside the open source ONNX Runtime inferencing engine. And that only scratched the surface of today’s small mountain of announcements. Microsoft also took the wraps off improvements to virtual nodes in Azure Kubernetes Service (AKS) and autoscaling for AKS. And it made available JavaScript and Python support in Azure Functions, its serverless compute service that allows customers to run code on demand without having to explicitly provision or manage infrastructure, in addition to a new Azure API Management Consumption tier. First off, virtual nodes for AKS, which let developers elastically provision quick-starting pods inside Azure Container Instances (ACI), are available in public preview starting today. Customers can switch them on within the Azure portal, where they cohabitate in the virtual network with other resources. Also launching today: the aforementioned cluster autoscaling for AKS. It’s based on the upstream Kubernetes cluster autoscaler project and automatically adds and removes nodes to meet workload needs (subject to minimum and maximum thresholds, of course). It works in conjunction with horizontal pod autoscaling and enters public preview this week. Dovetailing with those announcements, Microsoft debuted graphics processing unit (GPU) support in ACI, which lets developers run demanding jobs — such as AI model training — containerized in accelerated virtual machines. And the Redmond, Washington-based company detailed the Azure Serverless Community Library, an open source set of prebuilt components based on common use cases, such as resizing images in Blob Storage, reading license plate numbers with OpenALPR, and conducting raffles, all of which can be deployed on Azure subscriptions with minimal configuration. GPU support in ACI launches in preview today, and the Azure Serverless Community Library is available on GitHub and the Serverless Library website. Rounding out today’s news, Microsoft took the wraps off a new consumption plan for Linux-based Azure Functions, which was teased at Microsoft Ignite earlier this year. Now it’s possible to deploy Functions built on top of Linux using the pay-per-execution model, enabling serverless architectures for developers with code assets or prebuilt containers. Finally, Microsoft launched Azure API Management (APIM) — a turnkey solution for publishing APIs to external and internal customers — in public preview in a consumption-based usage plan (Consumption Tier). Effectively, it allows APIM to be used in a serverless fashion, with instant provisioning, automated scaling, and pay-per-play pricing. Along with the new APIM plan is the general availability of Python support (specifically Python 3.6) on Azure Functions runtime 2.0, and JavaScript support for Durable Functions, an extension of Azure Functions and Azure WebJobs that manages state, checkpoints, and restarts in serverless environments.
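
For a sense of what the Python support looks like in practice, below is a minimal HTTP-triggered function written against the azure.functions programming model used by the 2.0 runtime. The greeting logic and the name parameter are illustrative; in a real project the function sits alongside a function.json file that declares the HTTP trigger binding.

```python
import logging

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    """Minimal HTTP-triggered Azure Function (Python 3.6, Functions runtime 2.0)."""
    logging.info("Python HTTP trigger function processed a request.")

    # Read an optional 'name' query parameter, falling back to the JSON body.
    name = req.params.get("name")
    if not name:
        try:
            body = req.get_json()
        except ValueError:
            body = {}
        name = body.get("name")

    if name:
        return func.HttpResponse(f"Hello, {name}!")
    return func.HttpResponse(
        "Pass a 'name' in the query string or request body.", status_code=400
    )
```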

Cambridge Consultants’ DeepRay uses AI to reconstruct frames from damaged or obscured footage

Rain. Smoke. Dirt. Debris-produced distortion would normally spell doom for a videographer, but researchers at global research and development firm Cambridge Consultants say they’ve harnessed artificial intelligence (AI) to reconstruct footage from damaged or obscured frames in real time. In one test on airfields and aviation stock video, it was able to accurately reproduce aircraft on a runway.

The AI system, dubbed DeepRay, will be fully detailed at the upcoming 2019 Consumer Electronics Show in January. It calls to mind Adobe’s distortion-correcting system for front-facing smartphone cameras, and an Nvidia technique that can “fix” corrupt images containing holes. But unlike most previous AI, DeepRay handles live video.

“Never before has a new technology enabled machines to interpret real-world scenes the way humans can — and DeepRay can potentially outperform the human eye,” Tim Ensor, commercial director for artificial intelligence at Cambridge Consultants, told VentureBeat in a phone interview. “The ability to construct a clear view of the world from … video, in the presence of continually changing distortion such as rain, mist, or smoke, is transformational.”

DeepRay — a product of Cambridge Consultants’ Digital Greenhouse internal incubator — leverages a machine learning architecture called a generative adversarial network (GAN) to effectively invent video scenes as it attempts to remove distortion.

Broadly speaking, GANs are two-part neural networks consisting of generators that produce samples and discriminators that attempt to distinguish between the generated samples and real-world samples. In DeepRay’s case, a total of six networks — a team of generators and discriminators — compete against each other.

Research in GANs has advanced by leaps and bounds in recent years, particularly in the realm of machine vision. Google’s DeepMind subsidiary in October unveiled a GAN-based system that can create convincing photos of food, landscapes, portraits, and animals out of whole cloth. In September, Nvidia researchers developed an AI model that produces synthetic scans of brain cancer, and in August, a team at Carnegie Mellon demonstrated AI that could transfer a person’s recorded motion and facial expressions to a target subject in another photo or video. More recently, scientists at the University of Edinburgh’s Institute for Perception and Institute for Astronomy designed a GAN that can hallucinate galaxies — or high-resolution images of them, at least.

Ensor contends that only in the past two years has it been possible to train multi-network GANs at scale, in large part thanks to advances in purpose-built AI chips such as Google’s Tensor Processing Units (TPUs).

“We’re excited to be at the leading edge of developments in AI. DeepRay shows us making the leap from the art of the possible, to delivering breakthrough innovation with significant impact on our client’s businesses,” Ensor said. “This takes us into a new era of image sensing and will give flight to applications in many industries, including automotive, agritech and healthcare.”

At CES, the DeepRay team will demonstrate a neural network trained on Nvidia’s DGX-1 platform that, running on a standard gaming laptop, can remove distortion introduced by an opaque pane of glass. The dataset consists of 100,000 still images, but Ensor said that the team hasn’t characterized the system’s performance with larger sample sizes. “As with all [AI models], it continues to improve with training,” he explained, “and it will degrade gracefully.”
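
Cambridge Consultants hasn’t released DeepRay’s architecture or training data, but the underlying adversarial setup, in which generators try to produce samples a discriminator cannot tell from real ones, can be sketched generically. The toy PyTorch loop below trains a single generator and discriminator on stand-in data; DeepRay’s six-network, video-conditioned design is considerably more involved.

```python
import torch
import torch.nn as nn

# Toy generator: maps noise vectors to flattened 28x28 "images".
G = nn.Sequential(
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 784), nn.Tanh(),
)

# Toy discriminator: predicts whether an image is real or generated.
D = nn.Sequential(
    nn.Linear(784, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real from generated samples.
    noise = torch.randn(batch, 64)
    fake_images = G(noise).detach()
    loss_d = bce(D(real_images), real_labels) + bce(D(fake_images), fake_labels)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch, 64)
    loss_g = bce(D(G(noise)), real_labels)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

# Example call with random stand-in data scaled to the generator's output range.
train_step(torch.rand(32, 784) * 2 - 1)
```
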
Kubernetes 1.13 arrives with simplified cluster management, CSI, and CoreDNS as default DNS

In a press release today, the Linux Foundation’s Cloud Native Computing Foundation (CNCF), which oversees Kubernetes, said the latest release includes simplified cluster management with kubeadm, a container storage interface (CSI), and CoreDNS as the default DNS. At just 10 weeks since the last release, CNCF says this is one of the project’s shortest development cycles.

Read more: How developers want to use Kubernetes and microservices to make the web faster, stable, and more open

Kubernetes is a foundational part of an effort by CNCF to change the way developers write applications for internet-based services. Cloud-based computing calls for breaking out separate features, or “microservices,” and placing them in containers that have all the necessary pieces for an application to run in one package. The development philosophy holds that breaking applications into smaller, self-contained units can significantly reduce costs and the time needed to write, deploy, and manage each one.

Kubernetes, originally developed by Google but donated to the Linux Foundation, is used to manage the deployment of microservices. In the press release, CNCF notes of the three main features added to Kubernetes 1.13: “These stable graduations are an important milestone for users and operators in terms of setting support expectations.”

How Apple’s HomePod uses AI and 6 mics to hear users through ambient noise

Apple’s HomePod hasn’t won much praise for the capabilities of its integrated digital assistant Siri, but it does have one undeniably impressive feature: the ability to accurately hear a user’s commands from across the room, despite interference from loud music, conversations, or television audio. As the company’s Machine Learning Journal explains today, HomePod is leveraging AI to constantly monitor an array of six microphones, processing their differential inputs using knowledge gained from deep learning models.

One of the biggest challenges in discerning a user’s commands over ambient noise is overcoming the HomePod itself: Apple’s speaker can perform at very high volumes, and its microphones are immediately adjacent to the noise sources. Consequently, the company explains, there’s no way to completely remove the HomePod’s own audio from the microphones — only part of it.

Instead, Apple used actual echo recordings to train a deep neural network on HomePod-specific speaker and vibration echoes, creating a residual echo suppression system that’s uniquely capable of cancelling out HomePod’s own sounds. It also applies a reverberation removal model specific to the room’s characteristics, as measured continuously by the speaker.

Another interesting trick uses beamforming to determine where the speaking user is located, focus the microphones on that person, and apply sonic masking to filter out noises from other sources. Apple built a system that makes judgments about local speech and noise statistics based solely on the microphones’ current and past signals, focusing on the speech while trying to cancel out interference. It then trained the neural network using a variety of common noises that ranged from diffuse to directional, speech to noise, so that the filtering could apply to a large number of interference sources.
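
Apple hasn’t published its beamformer, but the core idea is to align the microphone signals toward an estimated talker direction and sum them, so speech adds constructively while off-axis noise tends to cancel. The delay-and-sum sketch below uses a hypothetical six-mic circular array; the geometry, sample rate, and steering angle are assumptions for illustration, not HomePod’s actual parameters.

```python
import numpy as np

SAMPLE_RATE = 16_000      # Hz, assumed
SPEED_OF_SOUND = 343.0    # m/s
NUM_MICS = 6
RADIUS = 0.05             # a hypothetical 5 cm circular array

# Microphone positions (x, y) in meters, evenly spaced on a circle.
angles = 2 * np.pi * np.arange(NUM_MICS) / NUM_MICS
mic_positions = RADIUS * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def delay_and_sum(signals: np.ndarray, steer_angle_rad: float) -> np.ndarray:
    """signals: (NUM_MICS, num_samples) array of simultaneous mic captures.
    Returns a single channel steered toward steer_angle_rad."""
    direction = np.array([np.cos(steer_angle_rad), np.sin(steer_angle_rad)])
    # Per-mic arrival-time offsets (in samples) for sound from the target direction.
    delays_sec = mic_positions @ direction / SPEED_OF_SOUND
    delay_samples = np.round(delays_sec * SAMPLE_RATE).astype(int)

    out = np.zeros(signals.shape[1])
    for mic, shift in zip(signals, delay_samples):
        out += np.roll(mic, -shift)  # shift each channel to align the target source
    return out / NUM_MICS

# Example: steer toward a talker at 30 degrees using random stand-in audio.
mock_audio = np.random.randn(NUM_MICS, SAMPLE_RATE)
steered = delay_and_sum(mock_audio, np.deg2rad(30))
```

Apple’s system goes well beyond this fixed summation, using learned speech and noise statistics to adapt the filtering continuously, but the sketch shows why a multi-microphone array helps at all.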

HomePod’s other impressive capability is determining which of multiple speaking people is the correct target for commands, to steer the beamforming mics and isolate noise. One trick is using the required “Hey Siri” trigger phrase to determine who and where commands are coming from, but Apple also developed techniques to separate competing talkers into individual audio streams, then use deep learning to guess which talker is issuing commands, sending only the stream focused on that talker for processing.

The Machine Learning Journal’s entry does a great job of spotlighting how AI-assisted voice processing technologies are necessary but not sufficient to guarantee a great experience with far-field digital assistants. While all of the techniques above do indeed yield quick, reliable, and accurate Siri triggering, HomePod’s limited ability to actually respond fully to requests was a frequent target of complaints in reviews. If there’s any good news, it’s that the issues appear to be in Siri’s cloud-based brain rather than in the HomePod’s hardware or locally run services, so server-side patches could dramatically improve the unit’s functionality without requiring users to buy new hardware.

Intel open-sources HE-Transformer, a tool that allows AI models to operate on encrypted data

As any data scientist will tell you, datasets are the lifeblood of artificial intelligence (AI). That poses an inherent challenge to industries dealing in personally identifiable information (e.g., health care), but encouraging progress has been made toward an anonymized, encrypted approach to model training.

Today at the NeurIPS 2018 conference in Montreal, Canada, Intel announced that it has open-sourced HE-Transformer, a tool that allows AI systems to operate on sensitive data. It’s a backend for nGraph, Intel’s neural network compiler, and based on the Simple Encrypted Arithmetic Library (SEAL), an encryption library Microsoft Research also released in open source this week.

The two companies characterized HE-Transformer as an example of “privacy-preserving” machine learning.

“HE allows computation on encrypted data. This capability, when applied to machine learning, allows data owners to gain valuable insights without exposing the underlying data; alternatively, it can enable model owners to protect their models by deploying them in encrypted form,” Fabian Boemer, a research scientist at Intel, and Casimir Wierzynski, Intel’s senior director of research, wrote in a blog post.

The “HE” in HE-Transformer is short for homomorphic encryption, a form of cryptography that enables computation on ciphertexts — plaintext (file contents) encrypted using an algorithm. It generates an encrypted result that, when decrypted, exactly matches the result of operations that would have been performed on unencrypted text.
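
HE-Transformer builds on the CKKS scheme, which supports approximate arithmetic on encrypted real numbers and is far too involved to reproduce here. As a toy illustration of the homomorphic property itself (operating on ciphertexts yields the encryption of the corresponding plaintext result), the sketch below implements a miniature Paillier-style additively homomorphic scheme with tiny, insecure hardcoded primes. It is not how SEAL, CKKS, or nGraph work; it only shows the concept.

```python
import math
import random

# Toy Paillier-style setup with tiny, insecure primes (illustration only).
p, q = 293, 433
n = p * q
n_sq = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p - 1, q - 1)
mu = pow(lam, -1, n)  # modular inverse; valid because g = n + 1 (Python 3.8+)

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    l = (pow(c, lam, n_sq) - 1) // n  # the L(x) = (x - 1) / n step
    return (l * mu) % n

a, b = 17, 25
ca, cb = encrypt(a), encrypt(b)

# Multiplying ciphertexts corresponds to adding the underlying plaintexts.
assert decrypt((ca * cb) % n_sq) == a + b
# Raising a ciphertext to a constant corresponds to multiplying the plaintext.
assert decrypt(pow(ca, 3, n_sq)) == 3 * a
print("homomorphic addition and scalar multiplication verified")
```

Schemes like CKKS extend this idea to vectors of approximate real numbers with both addition and multiplication, which is what makes encrypted neural network inference practical.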

HE is a relatively new technique — IBM researcher Craig Gentry developed the first fully homomorphic encryption scheme in 2009. And as Boemer and Wierzynski note, designing AI models that use it requires expertise not only in machine learning but also in encryption and software engineering.

HE-Transformer aids in the development process by providing an abstraction layer that can be applied to neural networks on open source frameworks such as Google’s TensorFlow, Facebook’s PyTorch, and MXNet. It effectively eliminates the need to manually integrate models into HE cryptographic libraries.

HE-Transformer incorporates the Cheon-Kim-Kim-Song (CKKS) encryption scheme and addition and multiplication operations, such as add, broadcast, constant, convolution, dot, multiply, negate, pad, reshape, result, slice, and subtract. Additionally, it supports HE-specific techniques, like plaintext value bypass, SIMD packing, OpenMP parallelization, and plaintext operations.

Thanks to those and other optimizations, Intel claims that HE-Transformer delivers state-of-the-art performance on cryptonets — learned neural networks that can be applied to encrypted data — using a floating-point model trained in TensorFlow.

“We are excited to work with Intel to help bring homomorphic encryption to a wider audience of data scientists and developers of privacy-protecting machine learning systems,” said Kristin Lauter, principal researcher and research manager of cryptography at Microsoft Research.

Currently, HE-Transformer directly integrates with the nGraph compiler and runtime for TensorFlow, with support for PyTorch forthcoming. Deep learning frameworks that are able to export neural networks to ONNX — such as PyTorch, CNTK, and MXNet — can be used by importing models into nGraph via ONNX and exporting them in a serialized format.
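
As a concrete example of that path, a PyTorch model can be serialized with the standard torch.onnx exporter and the resulting file imported into nGraph. The two-layer model and the "model.onnx" filename below are placeholders; the export call is the standard PyTorch API, while the nGraph import step is left as a comment because its exact Python entry point varies by nGraph release.

```python
import torch
import torch.nn as nn

# Placeholder network standing in for a model you'd like to run under HE-Transformer.
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)
model.eval()

# torch.onnx.export traces the model with a dummy input of the expected shape
# and writes a serialized ONNX graph to disk.
dummy_input = torch.randn(1, 784)
torch.onnx.export(model, dummy_input, "model.onnx")

# The resulting model.onnx file can then be imported into nGraph and serialized
# for the HE-Transformer backend, per Intel's nGraph ONNX tooling.
```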

Boemer and Wierzynski said that future versions of HE-Transformer will support a wider variety of neural network models.

“Recent advances in the field have now made HE viable for deep learning,” they wrote. “Researchers can leverage TensorFlow to rapidly develop new HE-friendly deep learning topologies.”

Verizon plans Samsung 5G phone with Qualcomm modem for first half of 2019

Just days after helping three cellular carriers initiate 5G service in South Korea, Samsung made an interesting announcement in partnership with U.S. carrier Verizon: The companies plan to release “one of the first commercial 5G smartphones” in the first half of 2019, apparently powered not by Samsung’s own chipset, but instead by Qualcomm’s Snapdragon platform, including its X50 5G modem.

Today’s announcement is significant because Samsung has been working on its own 5G technologies, including mobile chips, and was a major contributor to the international 5G standard that was finalized late last year and earlier this year. Yet Verizon indicates that Samsung will be showing a “proof of concept” device this week, using Qualcomm’s upcoming flagship mobile platform, antenna modules, and related components.

The use of Qualcomm parts simultaneously reflects the San Diego company’s apparent strength in the emerging 5G chip market and the continued struggles of even its largest rivals to miniaturize the high-speed mobile chips. Facing thermal and battery issues with its first 5G chips, chief rival Intel recently pushed up development of a second-generation 5G chipset that is expected to be used in iPhones and other devices in 2020. Chinese developer Huawei has shown giant heatsinks to work around its own 5G modem’s thermal issues. By comparison, numerous companies have signed up to use Qualcomm’s parts.

Verizon says that its mobile 5G service “will go live in early 2019 and expand rapidly.” The company is already offering fixed 5G home broadband service in Houston, Indianapolis, Los Angeles, and Sacramento, with reported typical speeds up to twice the company’s promised 300Mbps norm, approaching its 1Gbps peak.

Samsung’s 5G phone is unlikely to be the first 5G mobile device on Verizon’s network, however. Verizon previously announced that Motorola will release the 5G Moto Mod, a Snapdragon X50-equipped 5G backpack, for the Moto Z3 in “early 2019.” The Moto Z3 is already available, making the addition of 5G capabilities as easy as snapping the accessory onto the back of the existing phone.

How Nvidia researchers are making virtual worlds with real-world videos

The team at Nvidia was initially inspired to take this approach by the work of Alexei Efros and other researchers at the University of California, Berkeley, and by their creation of the Pix2Pix system. Nvidia created Pix2PixHD in response, working in tandem with AI practitioners from UC Berkeley. Earlier this year, UC Berkeley researchers also produced models that were able to dance, do flips, and perform 20 other acrobatic moves.

“I think this is the first interactive AI rendering, and we’re really proud of the progress we’ve made. But it is early stage, and I think there’s going to be a lot of progress to make the output higher quality and more general so that we can handle more kinds of scenes. And so I’m really excited about where this is going to go in the future,” Catanzaro said.
