In October, the Fujitsu Laboratories Advanced Technology Symposium 2019 (FLATS) gathered experts, entrepreneurs and government officials from around the world in Santa Clara, Calif. to share their insights on the latest issues lying at the intersection of advanced technology and business. The symposium, titled, "Driving a Trusted Future: Enriching Experiences while Protecting Data," was the 13th annual event of its kind. Organizer Fujitsu Laboratories also shared results of its research during the event.
"Data, as valuable as oil"
Underpinning the conference theme is the widely shared belief that, as one speaker put it, "data is the oil of the 21st century."
In this new era, companies face key questions. Among them: how can businesses analyze user behavior and operational data to generate new ideas and new business processes? There are also social challenges around how companies benefit from their data: frequent data breaches are causing anxiety among the general public, and excessive use of data by companies is fueling widespread mistrust.
In May 2018, the European Union implemented the General Data Protection Regulation (GDPR) to protect personal data and address growing concerns about how it is used. The same fears are spreading around the world; in California, for example, lawmakers adopted the California Consumer Privacy Act (CCPA), scheduled to take effect in January 2020.
Companies undertaking digital transformation (DX), which treats data as a critical asset, must ensure the security of their information: data protection is now an essential requirement of DX. What rules should companies establish, and what socio-economic factors should they consider, in meeting this requirement? Any company that pursues DX is obligated to confront these issues. Fujitsu, as one of the companies driving digital transformation, takes its responsibility to protect data seriously. That’s why Fujitsu Labs focused on the theme of data protection at its latest FLATS event.
Three key data protection technologies
Hirotaka Hara, CEO and representative director of Fujitsu Labs, began his keynote speech by observing that "data is valuable just like oil, but it’s also explosive like oil if you don’t handle it carefully." As much as 175 zettabytes (ZB) of data is expected to be generated by 2025. Hara cited a study showing that the public has concerns over how this information is being used, and emphasized the need for companies to regain trust when it comes to handling user data.
Keynote speech by Hirotaka Hara, CEO and representative director, Fujitsu Laboratories Ltd.
To accomplish this, Hara emphasized, companies need strong safeguards for data used across a wide range of environments – from cyberspace to the physical world – because today’s stakeholders are all interconnected. He then gave examples of three enabling technologies for this task: XAI (explainable AI), IDYX (identity exchange) and privacy risk assessment.
These technologies, respectively, promote accountability for AI, give users more control over their personal data and help companies assess risks that come with implementations of new technology.
XAI: Explainable AI
XAI helps to track the decision-making process behind actions taken by AI systems. To do this, XAI combines three components: Wide Learning, which derives useful hypotheses from a list of combinations of all data items; Deep Tensor, which analyzes graph structure data; and Knowledge Graph, which collects specialized information, such as academic papers.
IDYX: Identity Exchange
IDYX allows users to verify the authenticity of personal information distributed online and gives them control over their digital identity. Fujitsu Laboratories has started a joint research project on digital identity with the Japan-based payments network JCB.
Privacy risk assessment
Fujitsu Laboratories is also developing technology to assess the risks of improper use and exposure of personal data. Data must be protected by ensuring that users can remain anonymous as their data is processed, even though complete anonymity could limit the usefulness of the data. What is clear is that the damage to a company if sensitive, identifiable data is leaked could be enormous. Although it’s very difficult for non-experts to choose the appropriate level of anonymity for processing user data, privacy risk assessment technology makes it possible for them to select the optimal level for particular data sets. This technology helps companies to visualize how privacy risks could be reduced and how much damage would be caused if the data were to be leaked.
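Fujitsu's assessment technology itself is proprietary, but the core idea of quantifying re-identification risk can be illustrated with the well-known k-anonymity metric: a data set is k-anonymous if every record is indistinguishable from at least k-1 others on its quasi-identifying columns. The records and column names below are hypothetical examples, not Fujitsu's actual method or data.

```python
# Minimal sketch: measuring k-anonymity over quasi-identifier columns.
# A higher k means each individual hides in a larger crowd, at the cost
# of coarser (less useful) data.
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Return the size of the smallest equivalence class, i.e. the k
    for which every record matches at least k-1 others on the given
    quasi-identifier columns."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(groups.values())

# Hypothetical, already-generalized records (age bands instead of exact ages).
records = [
    {"zip": "94301", "age_band": "30-39", "diagnosis": "flu"},
    {"zip": "94301", "age_band": "30-39", "diagnosis": "cold"},
    {"zip": "94305", "age_band": "40-49", "diagnosis": "flu"},
    {"zip": "94305", "age_band": "40-49", "diagnosis": "asthma"},
]

print(k_anonymity(records, ["zip", "age_band"]))  # each (zip, age_band) pair occurs twice, so k = 2
```

A risk-assessment tool of the kind described above would compute metrics like this across candidate anonymization levels, letting a non-expert see the trade-off between re-identification risk and data utility before releasing a data set.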
Achieving proactive security
Currently, Fujitsu Laboratories is focusing on building technology to enable proactive security. "Powerful security technologies allow you to process data in ways you never thought possible before," said Hiroshi Tsuda, head of the Security Research Laboratory & Blockchain Research Center at Fujitsu Laboratories, during the symposium. "It’s more than just protecting the data."
Hiroshi Tsuda, Head of the Security Research Laboratory & Blockchain Research Center, Fujitsu Laboratories Ltd.
In the past, data was protected from cyberattacks by encryption. However, this can limit the value of the data when it’s processed and used, he said. If companies employ anonymization and confidentiality when processing data, and handle the data correctly and transparently, it will bring benefits not only to companies but also to individual users. The challenge is, "how to strike a balance between use and protection," said Tsuda. That’s why Fujitsu Laboratories is focusing on encryption, anonymization and blockchain technology.
Technology fails to keep up with policies
Professor Daniel Weitzner, director of the Internet Policy Research Initiative at MIT and principal research scientist at the school’s Computer Science and Artificial Intelligence Lab, spoke about technology in relation to policies. During his keynote address, he said
"It’s often said that policies are not keeping up with technology, but in reality, the opposite is true. Technology has not caught up. So, in a sense, there's a huge opportunity ahead."
Professor Daniel Weitzner, Director of the MIT Internet Policy Research Initiative
Weitzner, who also served as deputy CTO in the White House during the Obama administration, spoke on the theme of "Data Governance Challenges," focusing on cybersecurity, privacy and reliable AI. He gave a number of examples illustrating issues of data governance. These included the inability to assess the risk of attacks because of fragmented security measures across companies, problems with encryption and government surveillance, and growing skepticism among stakeholders about very large technology platforms – such as GAFA (Google, Apple, Facebook and Amazon). Another key issue was the misuse of personal data – doubtless due to the widely reported Cambridge Analytica scandal, in which the personal data of millions of Facebook users was harvested for political ads without consent.
Weitzner said a particularly pressing concern is having the ability to track the context and purpose of data usage. "We believe that it is essential to be able to confirm that personal data has been used only as originally intended and has not been diverted," he said.
Still, data governance is a difficult task. He pointed out that "legislation on data governance often does not formally set requirements," so from the start it is difficult to translate such requirements into computer programs. "Also, privacy and security issues are evolving, and companies and other responsible parties don’t have experience in dealing with them. Even if technologies become available, it still takes time for developers to recognize what society expects," Weitzner said.
He stressed that most companies do not intend to deceive users. On the contrary, they want to maintain a good relationship with them. So they are on the lookout for policies and procedures for the safe handling of data. In the future, as technology matures, users may be able to keep their personal data in a sort of trusted framework, just as they deposit money in their bank accounts today. Then they will be able to clearly see how their personal data was used, just as they can see transactions in their account books.
The MIT Internet Policy Research Initiative is currently working with policy makers and technology experts to establish an Internet Bill of Rights that would work as a legal framework for data governance, Weitzner said. It is also studying user interactions from a more in-depth perspective to understand what is most beneficial to users.
Will users ever really control their personal data?
Experts weighed in on this and related topics during a panel discussion, which focused on both the technology and business sides of the issues.
Panelists from NTT Laboratories, Google, and Stanford University shared their thoughts and ideas on such topics as data processing with encryption, privacy, and security for machine learning.
A panel weighed in on the theme of "Data Protection Technologies, Enabling New Innovative Services."
On the business side, panelists discussed such market implementations as Disney’s "MagicBand" bracelet, which collects and analyzes user data to improve the customer experience at the company’s theme parks, and Coca-Cola’s flavor-mixing machines, which gather customers’ preference data for use by the company in developing new products.
The founder of LunaDNA, which established a new business model in the medical field whereby the bio-data of shareholder DNA donors is used for research, also joined the panel discussion.
Following the discussion, Doc Searls, an advocate for a user-centric personal data management system called Vendor Relationship Management (VRM), gave the final keynote of the conference.
He noted that in the digital age, consumers have increasing exposure to marketing companies and control of their personal data is being stripped away. But Searls observed that the relationship between consumers and marketers is changing. In the past, companies were able to establish relationships with a large number of consumers. But now, individual users are establishing relationships with a large number of companies. VRM enables users to take control of how they interact with enterprises, so they are in charge of how their personal data is to be used.
Doc Searls, advocate of Vendor Relationship Management, VRM
He pointed to the example of Pico Labs, which built a "My Internet of Things" service. By tagging users’ belongings with their own QR codes and creating cloud-based accounts, users can receive user manuals from manufacturers, keep track of item repairs and even send a thank you note to the person who happened to find a user’s lost item. Searls predicted this will change the relationship between individuals and goods, and between individuals and brands.
The event also offered a technology showcase to demonstrate 12 technologies related to the conference theme. These included an IDYX platform that allows users to safely conduct a cross-industry exchange of personal data; a secret search technology that can enhance privacy-oriented businesses, such as medical services using encrypted data processing; and a privacy risk management program that can safeguard the privacy of active user data.
Exhibition hall at FLATS
One of the main conclusions of the event was that personal data must be protected while at the same time being used to benefit both the individual and the larger society. Moreover, as technologies evolve, the perception of personal data itself will change.
Freelance editor and journalist
Takiguchi lives in Silicon Valley. She writes extensively about technology, business, politics, international relations, design, and architecture. A special focus of her work has been trends in the use of robotics in Silicon Valley and elsewhere in the United States. She publishes news about robotics through her own website, robonews.net. Her publications include Why is Garbage Not Sorted in Silicon Valley? (President Inc.), Activism: Rem Koolhaas Document (TOTO Publishing), and Japanese Architect: Observing Toyo Ito (Chikumashobo Ltd.). She has also translated such works as Is Artificial Intelligence an Enemy or a Friend? (Nikkei BP) and Software Masters (Pearson Education). She graduated from Japan’s Sophia University in German Studies. From 1996 to 1998, she attended the journalists program in the Computer Science Department of Stanford University as a visiting researcher on a Fulbright scholarship.
Cleantech Institute, Nikkei BP Intelligence Group
Hayashi graduated in 1985 from the School of Engineering, Tohoku University, and is employed by Nikkei Business Publications Inc. He served as assistant editor-in-chief and reporter in the fields of ICT technologies, product development and standardization for several Nikkei publications. Starting in 2002, he served as editor-in-chief of Nikkei Byte, and starting in 2007, as editor-in-chief of Nikkei Communication. After serving as publisher of ITpro, Nikkei Systems, Tech-On!, Nikkei Electronics, Nikkei Monozukuri and Nikkei Automotive, he became chief of foreign operations of Nikkei BP in January 2014. Since September 2015 he has served in his current position as chief researcher at Nikkei BP's Cleantech Institute. Since August 2016, he has authored a column titled "The Future of Self-Driving" for The Nikkei Online Edition. He published a Comprehensive List of Global Self-Driving Projects in December 2016, a Comprehensive List of Global Self-Driving/Connected Car Developments in December 2017 and a Complete Guide for Automated Driving in Q&A Style in June 2018. He has also served as a judge for the CEATEC award committee since 2011.