
The Theme - New Media and New Digital Economy Ventures

(San Francisco, California, U.S.A. - Jeffrey M. Wang)


"The Rise of the New Digital Economy: Trends, Opportunities and Challenges"




 


1. Leading Digital Transformation and Innovation

 

"Developing new economic and business models that are digitally driven, creating sustainable value for an inclusive economy."

-- [World Economic Forum]


- New Media - An Increasingly Virtual World

On October 29th, 1969, a message was sent from a room at UCLA in Southern California to a Stanford Research Institute computer console in Menlo Park, California. It read simply “Lo,” though it was supposed to say “Login”; the system crashed before completing the task. This was the world’s first message sent via the interconnected computer network known as ARPANET. On that unassuming fall day, the modern Internet as we now know it was conceived. In the five decades since, the Internet has transformed human existence. From how we wage war to how we make each other laugh, it is hard to fathom how much the Internet has shaped life in that short amount of time.

Today, many compare the Internet to electricity as it becomes an omnipresent utility: something we expect to always be available and around us, intertwined in our daily lives. The world in front of us will be a mix of the real and the virtual and, at times, we will be unable to decipher which is which. In the near future, people may wear augmented reality (AR) glasses and use them to interact with their environment. Information will be displayed floating in the air; the web will appear in the real world, not just on glass screens. Real-time information will be present for everything - and everyone - you cross paths with. Strangers could be identified, with increasingly detailed information about them presented. People will subscribe to different augmentations, much as we now subscribe to magazines.

New Media is a 21st Century catchall term used to define all that is related to the Internet and the interplay between technology, images and sound. In fact, the definition of new media changes daily, and will continue to do so. New media evolves and morphs continuously. What it will be tomorrow is virtually unpredictable for most of us, but we do know that it will continue to evolve in fast and furious ways.  

Digital technologies underpin innovation and competitiveness across private and public sectors and enable scientific progress in all disciplines. ICT and Digital Media are now integrated into almost every technology, industry and job. New media are forms of media that are native to computers, computational and relying on computers for distribution. Currently, some examples of new media are websites, mobile apps, virtual worlds, multimedia, computer games, human-computer (brain) interface, computer animation and interactive computer installations.  

Our world has become smaller thanks to the digital age, whether through the spoken or the written word. In the near future, typed messages may give way to voice-based communication, along the lines of what Apple’s Siri and Amazon’s Alexa have already introduced.


- The Evolution of the New Economy: The New Digital Economy 

The world is changing fast. Technology has permeated every aspect of modern life, both personal and professional. We are at the dawn of the Fourth Industrial Revolution, which will bring together digital, biological and physical technologies in new and powerful combinations. Rapid developments in technology and science are changing the way we live, work and do business. These changes come with challenges for our industries, work places and communities. Digital technologies have immense potential to drive competition, innovation and productivity.

Innovation in the business world is accelerating exponentially, with new, disruptive technologies and trends emerging that are fundamentally changing how businesses and the global economy operate. For example, emerging technologies are disrupting the communications industry by enabling new business models and bringing new customer experiences. The digital ecosystem encourages cross-industry collaboration and facilitates bundled digital product offerings in a hyper-connected world, accelerating the digital journey of telecom companies. Today, most business professionals spend almost every waking second of their day either interacting with a computer or carrying one on their person - and many people even sleep tethered to a computer via wearable devices that track activity and sleep patterns.

Today, more than 5 billion consumers interact with data every day - by 2025, that number will be 6 billion, or 75% of the world's population. In 2025, each connected person will have at least one data interaction every 18 seconds. Many of these interactions are because of the billions of IoT devices connected across the globe, which are expected to create over 90ZB of data in 2025.
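The projections above can be sanity-checked with a little arithmetic; the sketch below simply restates the cited figures (6 billion connected people, one interaction every 18 seconds) and multiplies them out:

```python
# Sanity check of the cited 2025 projections: 6 billion connected people,
# one data interaction every 18 seconds.
SECONDS_PER_DAY = 24 * 60 * 60
interval_s = 18
interactions_per_person_per_day = SECONDS_PER_DAY // interval_s
connected_people = 6_000_000_000
total_daily_interactions = connected_people * interactions_per_person_per_day

print(interactions_per_person_per_day)    # 4800 interactions per person per day
print(total_daily_interactions)           # 28800000000000 (~2.9e13) per day worldwide
```

That works out to roughly 29 trillion data interactions per day worldwide, which gives a sense of where the 90ZB figure comes from.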

Today’s new digital economy is forcing organizations to think faster, more openly and more flexibly. By understanding emerging digital ecosystems, businesses can implement strategies that differentiate the customer experience and drive competitive advantage. The strategies for succeeding in the knowledge economy are predicated on the notions that knowledge and information are costly to generate and can be protected. It makes sense to build your enterprise’s competitive differentiation around its knowledge capital only if that information is unique to the firm. 

 

- What is the "New" Digital Economy (NDE)?

The New Digital Economy (NDE) is emerging from a combination of technologies, mainly from the ICT (Information and Communications Technology) space, that are becoming pervasive across mechanical systems, communications, infrastructure, and the built environment, and thus playing an increasingly important role, not only in social and political life, but in research, manufacturing, services, transportation, and even agriculture.

The technologies underpinning the NDE, most importantly, include: advanced robotics and factory automation (sometimes referred to as advanced manufacturing); new sources of data from mobile and ubiquitous Internet connectivity (sometimes referred to as the Internet of Things); cloud computing; big data analytics and artificial intelligence (AI). 

"The main driver of the NDE is the continued exponential improvement in the cost-performance of information and communications technology (ICT), mainly microelectronics, following Moore’s Law. This is not new. The digitization of design, advanced manufacturing, robotics, communications, and distributed computer networking (e.g. the Internet) have been altering innovation processes, the content of tasks, and the possibilities for the relocation of work for decades. However, three features of the NDE are relatively novel. First, new sources of data, from smart phones to factory sensors, are sending vast quantities of data into the “cloud,” where they can be analysed to generate new insights, products, and services. Second, new business models based on technology and product platforms - platform innovation, platform ownership, and platform complementing - are significantly altering the organization of industries and the terms of competition in a range of leading-edge industries and product categories. Third, the performance of ICT hardware and software has advanced to the point where artificial intelligence and machine learning applications are proliferating. What these novel features share is reliance on very advanced and nearly ubiquitous ICT, embedded in a growing platform ecosystem characterized by high levels of interoperability and modularity." - [United Nations, UNCTAD]

The rise of new digital industrial technology, known as Industry 4.0, is a transformation that makes it possible to gather and analyze data across machines, enabling faster, more flexible, and more efficient processes to produce higher-quality goods at reduced costs. This manufacturing revolution will increase productivity, shift economics, foster industrial growth, and modify the profile of the workforce—ultimately changing the competitiveness of companies and regions. 

  

(Eiffel Tower, Paris, France - Ching-Fuh Lin)

2. Technologies for Digitizing and Transforming Industry and Services


- The Age of New Materials 

Throughout history, materials and advances in material technology have influenced humankind. Now, we just might be on the verge of the next shift in this type of technology, enabling products and functions we never believed possible. 

Demands from industry are requiring that materials be lighter, tougher, thinner, denser, and more flexible or rigid, as well as to be heat- and wear-resistant. At the same time, researchers are pushing the boundaries of what we imagine is possible, seeking to improve and enhance existing materials, and at the same time, come up with completely new materials that, while years away from day-to-day use, take us down entirely new, technological pathways.  


- Future Compute - Beyond Moore's Law and The Future of Microelectronics 

Nearly 60 years on, Moore's Law still stands strong for many in the computing world. But the surge of artificial intelligence and machine learning has coincided with the breakdown of Moore’s Law, and for many thought leaders what lies beyond is not entirely clear.

The future of technology is uncertain as Moore’s Law comes to an end. However, while most experts agree that silicon transistors will stop shrinking around 2021, this doesn’t mean Moore’s Law is dead in spirit - even though, technically, it might be. Chip makers have to find other ways to increase computing power. For example, there are germanium and III-V compound semiconductor technologies - and, at some point, carbon nanotubes - that provide new ways of increasing performance. There are also gate-all-around transistor designs, extreme-ultraviolet lithography, self-directed assembly techniques, and so on.

Progress in technologies such as photonics, micro- and nanoelectronics, smart systems and robotics is changing the way we design, produce, commercialize and generate value from products and related services. 


- Cyber-Physical Systems (CPS) 

As digital computing and communication become faster, cheaper, and available in packages that are smaller and use less power, these capabilities are increasingly embedded in many objects and structures in the physical environment. Cyber-physical systems (CPS) are physical and engineered systems whose operations are monitored, coordinated, controlled, and integrated by computing and communication. Broad CPS deployment is transforming how we interact with the physical world as profoundly as the world wide web transformed how we interact with one another, and further harnessing their capabilities holds the possibility of enormous societal and economic impact. 

Cyber-Physical Systems (CPS) are usually composed of a set of networked agents, including sensors, actuators, control processing units, and communication devices. While some forms of CPS are already in use, the widespread growth of wireless embedded sensors and actuators is creating several new applications in areas such as medical devices, autonomous vehicles, and smart infrastructure, and is increasing the role that the information infrastructure plays in existing control systems such as in the process control industry or the power grid. 
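The sense-compute-actuate loop that ties these networked agents together can be sketched in a few lines. Everything here is illustrative: a hypothetical thermostat with made-up plant dynamics, not a real CPS stack.

```python
# Illustrative CPS loop: sensor reading -> control computation -> actuation.
# The "plant" is a hypothetical room that drifts towards 15 degrees C and is
# warmed by a heater; the controller drives it towards a 21 degree setpoint.

def control_step(temperature, setpoint, gain=0.5):
    """Cyber side: a proportional controller computes the actuator command."""
    error = setpoint - temperature
    return max(0.0, gain * error)          # heater power, never negative

def simulate(setpoint=21.0, temperature=15.0, steps=50):
    """Physical side: apply the command, then feed the new state back."""
    for _ in range(steps):
        power = control_step(temperature, setpoint)
        temperature += 0.2 * power - 0.05 * (temperature - 15.0)
    return temperature

print(round(simulate(), 1))   # settles near 19.0 (proportional control leaves an offset)
```

The residual offset from the setpoint is a classic property of pure proportional control; real controllers add integral terms, and real CPS add networking, timing, and fault-tolerance concerns on top of this loop.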

Many CPS applications are safety-critical: their failure can cause irreparable harm to the physical system under control, and to the people who depend, use or operate it. In particular, critical cyber-physical infrastructures such as electric power generation, transmission and distribution grids, oil and natural gas systems, water and waste-water treatment plants, and transportation networks play a fundamental and large-scale role in our society. Their disruption can have a significant impact on individuals, and nations at large. Securing these CPS infrastructures is, therefore, vitally important. 

Similarly, because many CPS collect sensor data non-intrusively, users of these systems are often unaware of their exposure. Therefore, in addition to security, CPS must be designed with privacy considerations in mind.

Advances in CPS will enable capability, adaptability, scalability, resiliency, safety, security, and usability that will far exceed the simple embedded systems of today. CPS technology will transform the way people interact with engineered systems - just as the Internet has transformed the way people interact with information. New smart CPS will drive innovation and competition in sectors such as agriculture, energy, transportation, building design and automation, healthcare, and manufacturing. Moreover, the integration of artificial intelligence with CPS creates new research opportunities with major societal implications. 

CPS has provided an outstanding foundation for building advanced industrial systems and applications by integrating innovative functionalities through the Internet of Things (IoT) and the Web of Things (WoT), connecting the operations of physical reality with computing and communication infrastructures. A wide range of industrial CPS-based applications have been developed and deployed in Industry 4.0.

Today's world is a network of interconnected, embedded computer systems with components ranging in size and complexity. Researchers and hackers have shown that networked embedded systems are vulnerable to remote attack. Technology for the construction of safe and secure cyber-physical systems is badly needed. 


- Flexible and Wearable Electronics 

Flexible and wearable electronics combines new and traditional materials with large area processes to fabricate lightweight, flexible, printed and multi-functional electronic products. 

[Stanford E-Wear]: "Wearable electronics has emerged as a new form of electronics that combines sensors and wireless communication to allow monitoring of vital information autonomously. Unlike typical sensor networks, wearable electronics need to form conformal and intimate contact with objects to be monitored. Furthermore, they have to be comfortable to wear while providing accurate information." 

Wearable electronics are smart electronic devices that can be connected to the Internet and worn on the body as accessories. These devices are a key segment of IoT devices, and they can exchange data over the Internet with the user and other connected devices. Applications for wearable electronics include health monitoring, disease detection, robotic surgery, implantable electronics, driverless cars, structural monitoring, and virtual and augmented reality.
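The health-monitoring idea above can be sketched as a simple anomaly check over a stream of sensor readings. The thresholds and samples are illustrative only, not medical guidance:

```python
# Minimal wearable-style monitor: flag heart-rate samples outside a safe band.
def flag_anomalies(samples_bpm, low=50, high=120):
    """Return (index, value) pairs for readings outside [low, high]."""
    return [(i, bpm) for i, bpm in enumerate(samples_bpm)
            if not (low <= bpm <= high)]

stream = [72, 75, 71, 130, 74, 48]       # simulated sensor readings
print(flag_anomalies(stream))            # [(3, 130), (5, 48)]
```

A real device would run this kind of check continuously on-device and push only the flagged events over its wireless link, which is exactly the autonomous-monitoring pattern the Stanford quote describes.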

Wearable devices offer benefits like optimized decision-making, ease of handling emergencies, cost cutting, enhanced quality of living, remote control access, healthy lifestyle, time management, commercial benefit, and better safety.  

Despite the explosion of interest in wearable electronics in recent years, numerous challenges remain before they become a truly commercializable technology. One major challenge is the highly interdisciplinary nature of the field, which mandates the convergence of many disciplines, notably materials, devices, system integration, software and application verification. The impact is far beyond health care: it will improve everything from the environment to defense, the economy, and energy production.


- Photonics - An Enabling Technology 

Photonics is the technology of generating and harnessing light and other forms of radiant energy whose quantum unit is the photon. Photonics involves cutting-edge uses of lasers, optics, fiber-optics, and electro-optical devices in numerous and diverse fields of technology - alternate energy, manufacturing, health care, telecommunication, environmental monitoring, homeland security, aerospace, solid state lighting, and many others. 

Lasers and other light beams are the “preferred carriers” of energy and information for many applications. For example: Lasers are used for welding, drilling, and cutting of metals, fabrics, human tissue, and other materials; Coherent light beams (lasers) have a high bandwidth and can carry far more information than radio frequency and microwave signals; Fiber optics allow light to be “piped” through cables. 

Research in photonics ranges in scope from fundamentally new tools, such as small-footprint, high-throughput multiphoton microscopes, through exceptionally high-power semiconductor lasers, to components and systems for next-generation optical networks for both the Internet and data centers, and into consumer equipment like 3-D displays. New areas are constantly explored by research teams worldwide, as photonics becomes more pervasive in our lives. Communications, displays, medicine, manufacturing and imaging are just a few applications. 


- Unconventional Nanoelectronics 

Shrinking transistors have powered 50 years of advances in computing - but now other ways must be found to make computers more capable. Mobile apps, video games, spreadsheets, and accurate weather forecasts: that’s just a sampling of the life-changing things made possible by the reliable, exponential growth in the power of computer chips over the past five decades. The continual cramming of more silicon transistors onto chips has been the feedstock of exuberant innovation in computing. But transistor miniaturization is now approaching fundamental physical and economic limits, and that could stymie future advances in electronics unless new architectures and designs allow progress in chip performance to continue. There are also worries about the rising cost of designing integrated circuits. Future generations of electronics will be based on new devices and circuit architectures, operating on physical principles that cannot be exploited by conventional transistors. Research scientists worldwide seek the next device that will propel computing beyond the limitations of current technology. 

[Nanoelectronics for 2020 and Beyond]: "The semiconductor industry is a major driver of the modern U.S. economy and has accounted for a large portion of the productivity gains that have characterized the global economy since the 1990s. Recent advances in this area have been fueled by what is known as Moore’s Law scaling, which has successfully predicted the exponential increase in the performance of computing devices for the last 40 years. This gain has been achieved due to ever-increasing miniaturization of semiconductor processing and memory devices (smaller and faster switches and transistors). Continuing to shrink the dimensions of electronic devices is important in order to further increase processor speed, reduce device switching energy, increase system functionality, and reduce manufacturing cost per bit. However, as the dimensions of critical elements of devices approach atomic size, quantum tunneling and other quantum effects degrade and ultimately prohibit the operations of conventional devices. Researchers are therefore pursuing more radical approaches to overcome these fundamental physics limitations.

Candidate approaches include different types of logic using cellular automata or quantum entanglement and superposition; 3D spatial architectures; and information-carrying variables other than electron charge, such as photon polarization, electron spin, and position and states of atoms and molecules. Approaches based on nanoscale science, engineering, and technology are most promising for realizing these radical changes and are expected to change the very nature of electronics and the essence of how electronic devices are manufactured. Rapidly reinforcing domestic R&D successes in these arenas could establish a U.S. domestic manufacturing base that will dominate 21st-century electronics commerce. The goal of this initiative is to accelerate the discovery and use of novel nanoscale fabrication processes and innovative concepts to produce revolutionary materials, devices, systems, and architectures to advance the field of nanoelectronics."  


- Electronic Smart Systems (ESS) 

The technology area Electronic Smart Systems (ESS) focuses on the challenges introduced by the ongoing digitization of society through the deep penetration of embedded sensing, acting and communicating electronics into our environment. Things become smart and connected; sensor systems and smart things provide the sensing and interacting edges that are bringing the entire world online. Embedded electronics are becoming more pervasive and open up the opportunity for a disruptive wave of innovation in our daily living. 

Smart systems incorporate functions of sensing, actuation, and control in order to describe and analyze a situation, and make decisions based on the available data in a predictive or adaptive manner, thereby performing smart actions. In most cases the “smartness” of the system can be attributed to autonomous operation based on closed-loop control, energy efficiency, and networking capabilities. Many smart systems evolved from microsystems. They combine technologies and components from microsystems technology (miniaturized electric, mechanical, optical, and fluidic devices) with other disciplines like biology, chemistry, nanoscience, or cognitive sciences.

Electronic smart systems identify a broad class of intelligent and miniaturized devices that are usually energy-autonomous and ubiquitously connected. In order to support these functions like sensing, actuation, and control, electronic smart systems must include sophisticated and heterogeneous components and subsystems, such as digital signal processing devices, analog devices for RF and wireless communication, discrete elements, application-specific sensors and actuators, energy sources, and energy storage devices. These systems take advantage of the progress achieved in miniaturization of electronic systems, and are highly energy-efficient and increasingly often energy-autonomous, and can communicate with their environment. 

Thanks to their heterogeneous nature, smart embedded and cyber-physical applications are able to deliver a wide range of services, and their application can help address grand social, economic, and environmental challenges: environmental and pollution control, energy efficiency at various scales, aging populations and demographic change, the risk of industrial decline, security from the micro to the macro level, safety in transportation, the increasing need for mobility of people and goods, and health and lifestyle improvements, to name the most relevant. 

The goal is to develop and validate a new generation of cost-effective ESS technologies integrating hardware technologies across multiple fields. This massive integration of electronics everywhere introduces challenges such as integration, miniaturization, building practice, new sensors, low energy consumption, electromagnetic interference (EMI), architectures for high-performance computing, resource-efficient communication and affordable components.


- Security and Resilience for Collaborative Manufacturing Environments 

The widespread adoption of ICT by manufacturing industry around the world is now paving the way for disruptive approaches to development, production and the entire logistics chain (i.e., Industry 4.0 - the digitization of industrial manufacturing). This is increasingly blurring the boundaries between the real world and the virtual world in what are known as cyber-physical production systems (CPPSs). At the same time, a new operational risk appears for connected, smart manufacturers and digital supply networks: cyber risk. The interconnected nature of Industry 4.0-driven operations and the pace of digital transformation mean that cyberattacks can have far more extensive effects than ever before. 

The technological developments at the base of Industry 4.0 raise, at the same time, a vast number of associated security concerns. Unfortunately, intruders will not stop trying to find new ways of breaking into business networks. Attacks specifically designed to penetrate industrial control systems present a threat to production facilities. Infected computers can be controlled remotely and their data stolen. When malware exploits unknown security holes, firewalls and network monitoring software are unable to detect it. 

Cyber risks in the age of Industry 4.0 extend beyond the supply network and manufacturing, however, to the product itself. As products are increasingly connected – both to each other and, at times, even back to the manufacturer and supply network – cyber risk no longer ends once a product has been sold. Connected objects carry a risk level of their own, because IoT devices often present significant cyber risks. IoT devices that perform some of the most critical and sensitive tasks in industry are often the most vulnerable devices found on a network. Therefore, an integrated approach to protecting devices must be taken. 

The nature of cyber risks in Industry 4.0 thus depends largely on the particular industrial portfolio and therefore requires adequate action from the industrial decision makers concerned. However, given that industrial production is governed by a number of regulations, industrial cyber risks should also be a concern for regulators.


- Artificial Intelligence (AI), Robotics, and Application Areas 

Artificial Intelligence (AI) is advancing at breakneck speed. Technical advances are making it possible for non-experts to apply AI in their work, accelerating the pace at which new AI solutions are deployed. The definition of AI is constantly evolving, and the term often gets mangled. 

What is AI, exactly? The question may seem basic, but the answer is kind of complicated. In the broadest sense, AI refers to machines that can learn, reason, and act for themselves. They can make their own decisions when faced with new situations, in the same way that humans and animals can. As it currently stands, the vast majority of the AI advancements and applications you hear about refer to a category of algorithms known as machine learning. 
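The machine-learning category mentioned above can be illustrated with a toy example: instead of being programmed with the rule y = 2x + 1, the program learns the parameters from labelled examples via gradient descent. This is a minimal sketch of the idea, not any particular library's API.

```python
# Toy machine learning: learn w and b in y = w*x + b from examples alone.
data = [(x, 2 * x + 1) for x in range(10)]    # labelled examples of y = 2x + 1

w, b, lr = 0.0, 0.0, 0.01
for _ in range(2000):                          # gradient descent on squared error
    for x, y in data:
        err = (w * x + b) - y                  # prediction error on this example
        w -= lr * err * x                      # nudge parameters against the error
        b -= lr * err
print(round(w, 2), round(b, 2))                # converges to roughly 2.0 and 1.0
```

Deep learning scales this same pattern up to millions of parameters and nonlinear models, but the core loop of predicting, measuring error, and adjusting is unchanged.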

The pace of automation that this technology is fueling will reach every corner of the global economy. While robots originated in large-scale mass manufacturing, they are now spreading to more and more application areas. In these new settings, robots are often faced with new technical and non-technical challenges. Through interdisciplinary research across technological and sector-specific fields, research scientists drive innovation and new discoveries across the robotics spectrum - from large-scale automation and autonomous vehicles to personalized robotic learning and engagement applications or systems. Intelligence is moving towards edge devices. Increased computing power and sensor data along with improved AI algorithms are driving the trend towards machine learning being run on the end device, such as a smartphone or automobile, rather than in the cloud.

Robotic process automation, alongside blockchain, AI, cognitive computing and the Internet of Things (IoT), is one of the new and emerging technologies expected to profoundly impact and transform the workforce of the future across the financial services sector. Robotic Process Automation (RPA) is quickly becoming the go-to solution for financial institutions that want to improve digital speed to market and reduce costs.

  • Chatbots. These artificial intelligence (AI) programs simulate interactive human conversation using key pre-calculated user phrases and auditory or text-based signals. Chatbots have recently started to use self-created sentences in lieu of pre-calculated user phrases, providing better results. Chatbots are frequently used for basic customer service on social networking hubs and are often included in operating systems as intelligent virtual assistants. 
  • AI and robotics are transforming healthcare. AI is getting increasingly sophisticated at doing what humans do, but more efficiently, more quickly and at a lower cost. The potential for both AI and robotics in healthcare is vast. Just like in our every-day lives, AI and robotics are increasingly a part of our healthcare eco-system.
  • The food industry is being revolutionized by robotics and automation. There are real problems in modern agriculture: traditional farming methods struggle to keep up with the efficiencies required by the market, and farmers in developed countries are suffering from labor shortages. The rise of automated farming is an attempt to solve these problems using robotics and advanced sensing. Here are five ways robotics is changing the food industry: nursery automation, autonomous precision seeding, crop monitoring and analysis, fertilizing and irrigation, and crop weeding and spraying. 
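The chatbot pattern described in the first bullet above can be sketched in a few lines. This is a minimal, illustrative rule-based bot; the key phrases and responses are invented for the example, and production chatbots layer NLP and, increasingly, self-generated sentences on top of this matching step.

```python
# Minimal rule-based chatbot: match user input against pre-calculated phrases.
RESPONSES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
    "human": "Transferring you to a human agent now.",
}
FALLBACK = "Sorry, I didn't understand. Could you rephrase?"

def reply(message):
    """Return the response for the first key phrase found in the message."""
    text = message.lower()
    for key, answer in RESPONSES.items():
        if key in text:                      # key-phrase match
            return answer
    return FALLBACK

print(reply("What are your opening hours?"))
```

Even this tiny matcher shows why chatbots work well for basic customer service: most routine queries hit a small set of pre-calculated intents, and only the rest need escalation.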
(Toronto, Canada - Wei-Jiun Su)


3. Data Infrastructure: HPC, Big Data and Cloud Technologies

 

- HPC, Big Data and Cloud Computing: the way forward to the future of mankind  

Progress for both science and mankind will depend more and more on “supercomputer brains” that can process large amounts of data in real time, give it meaning and, subsequently, turn it into actionable knowledge.

The Internet of Things and the convergence of HPC, big data and cloud computing technologies are enabling the emergence of a wide range of innovations. Building industrial large-scale application test-beds that integrate such technologies and that make best use of currently available HPC and data infrastructures will accelerate the pace of digitization and the innovation potential in key industry sectors (for example, healthcare, manufacturing, energy, finance & insurance, agri-food, space and security).  


- High Performance and Super Computing 

In the Age of Internet Computing, billions of people use the Internet every day. As a result, supercomputer sites and large data centers must provide high-performance computing services to huge numbers of Internet users concurrently. We have to upgrade data centers using fast servers, storage systems, and high-bandwidth networks. The purpose is to advance network-based computing and web services with the emerging new technologies. 

The general computing trend is to leverage shared web resources and massive amounts of data over the Internet. The evolution continues towards parallel, distributed, and cloud computing with clusters, MPPs (massively parallel processors), P2P (peer-to-peer) networks, grids, clouds, web services, and the Internet of Things.

"Supercomputer" is a general term for computing systems capable of sustaining high-performance computing applications that require a large number of processors, shared or distributed memory, and multiple disks. Supercomputers are primarily designed to be used in enterprises and organizations that require massive computing power. A supercomputer incorporates architectural and operational principles from parallel and grid processing, where a process is simultaneously executed on thousands of processors or is distributed among them. 

The performance of a supercomputer is measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS). As of today, there are supercomputers that can perform nearly a hundred quadrillion FLOPS, i.e., on the order of a hundred petaFLOPS (PFLOPS). As of today, all of the world's 500 fastest supercomputers run Linux-based operating systems.
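The FLOPS metric above can be illustrated by timing a computation with a known operation count. This is a rough, pure-Python sketch (the matrix size and naive algorithm are chosen for clarity, not speed); it manages only a few MFLOPS on a typical machine, which puts the petaFLOPS figures of real supercomputers in perspective.

```python
import random
import time

def matmul_flops(n=64):
    """Time a naive n x n matrix multiply, which performs about 2*n**3
    floating-point operations (one multiply and one add per term)."""
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [[random.random() for _ in range(n)] for _ in range(n)]
    start = time.perf_counter()
    c = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    elapsed = time.perf_counter() - start
    assert len(c) == n                      # keep the result alive
    return (2 * n ** 3) / elapsed           # floating-point operations per second

print(f"{matmul_flops():.3e} FLOPS")        # pure Python: a few MFLOPS at best
```

Real benchmarks such as LINPACK use the same principle (known operation count divided by elapsed time) but run highly optimized dense linear algebra across thousands of nodes.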


- Turning Big Data into Smart Data 

Big data refers to extremely large datasets that are difficult to analyze with traditional tools. It is often boiled down to a few varieties of data generated by machines, people, and organizations. Big data is being generated by everything around us at all times. Every digital process and social media exchange produces it. Systems, sensors and mobile devices transmit it. Big data can be either structured, semi-structured, or unstructured. IDC estimates that 90 percent of big data is unstructured data. 

Big data is arriving from multiple sources at an alarming velocity, volume and variety. To extract meaningful value from big data, you need optimal processing power, analytics capabilities and skills. In most business use cases, any single source of data on its own is not useful. Real value often comes from combining these streams of big data sources with each other and analyzing them to generate new insights. 

Analyzing large data sets, so-called big data, will become a key basis of competition, underpinning new waves of productivity growth, innovation, and consumer surplus. Big data must pass through a series of steps before it generates value: data access, storage, cleaning, and analysis.  
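
The steps above can be sketched in miniature; the records, field names, and cleaning rule below are invented for illustration only:

```python
# A minimal sketch of a big-data value pipeline: ingest raw records,
# clean out malformed rows, then aggregate to produce an insight.
raw_records = [
    {"user": "a", "clicks": "12"},
    {"user": "b", "clicks": ""},      # malformed: empty value
    {"user": "a", "clicks": "3"},
    {"user": "c", "clicks": "7"},
]

def clean(records):
    """Keep only records whose 'clicks' field parses as an integer."""
    out = []
    for r in records:
        try:
            out.append({"user": r["user"], "clicks": int(r["clicks"])})
        except ValueError:
            pass  # drop unparseable rows instead of failing the whole batch
    return out

def analyze(records):
    """Aggregate clicks per user - the insight-generating step."""
    totals = {}
    for r in records:
        totals[r["user"]] = totals.get(r["user"], 0) + r["clicks"]
    return totals

print(analyze(clean(raw_records)))  # → {'a': 15, 'c': 7}
```

Real pipelines distribute these same steps across clusters, but the access → clean → analyze shape is the same.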

 

- Future Cloud and Edge Computing

Cloud computing is the delivery of computing services - servers, storage, databases, networking, software, analytics, and more - over the Internet ("the cloud"). Companies offering these computing services are called cloud providers and typically charge for cloud computing services based on usage, similar to how you're billed for water or electricity at home. 

Most cloud computing services fall into three broad categories: infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). These are sometimes called the cloud computing stack, because they build on top of one another. There are three different ways to deploy cloud computing resources: public cloud, private cloud, and hybrid cloud. Knowing what they are and how they're different makes it easier to accomplish your business goals. 

Cloud computing provides a simple way to access servers, storage, databases and a broad set of application services over the Internet. A Cloud services platform such as Amazon Web Services owns and maintains the network-connected hardware required for these application services, while you provision and use what you need via a web application.  
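
The utility-style billing analogy can be made concrete with a small sketch; the resource types and rates below are invented for illustration and do not reflect any provider's actual price list:

```python
# Illustrative sketch of usage-based ("metered") cloud billing, analogous
# to a water or electricity bill. Rates are made up for the example.
RATES = {"compute_hour": 0.05, "storage_gb": 0.02, "egress_gb": 0.09}

def monthly_bill(compute_hours: float, storage_gb: float,
                 egress_gb: float) -> float:
    """Sum metered usage across resource types, rounded to cents."""
    return round(compute_hours * RATES["compute_hour"]
                 + storage_gb * RATES["storage_gb"]
                 + egress_gb * RATES["egress_gb"], 2)

# One server running all month (720 h), 100 GB stored, 50 GB transferred out.
print(monthly_bill(720, 100, 50))  # → 42.5
```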

 

- A Health Data Revolution

The very beginning of the bio (big) data revolution is already upon us with the emergence of wearable, constantly connected tech that collects information (data) about our health. There is an overall belief that this could be a great thing for society: the ability to have reams of data that can be applied to create better health care practices. We'll have a lot more information about how people really eat, exercise, and conduct their daily lives, which will allow doctors and researchers to better tailor programs to serve our needs and help us become healthier.


4. Artificial Intelligence, Machine Learning, and Neural Networks

 

- Artificial Intelligence (AI)

Artificial Intelligence (AI) is the broader concept of machines being able to carry out tasks in a way that we would consider “smart”. Machine Learning (ML) is a current application of AI based around the idea that we should really just be able to give machines access to data and let them learn for themselves.

AI has exploded over the past few years, especially since 2015. Much of that has to do with the wide availability of GPUs that make parallel processing ever faster, cheaper, and more powerful. It also has to do with the simultaneous one-two punch of practically infinite storage and a flood of data of every stripe (the whole Big Data movement) - images, text, transactions, mapping data, you name it. The hype around AI has become tangible.

The most important thing to understand about AI is that it is not a static formula to solve. It’s a constantly evolving system designed to identify, sort, and present the data that is most likely to meet the needs of users at that specific time, based on a multitude of variables that go far beyond just a simple keyword phrase. 

AI is trained on known data - content, links, user behavior, trust, citations, patterns - and then analyzes that data using user-experience signals, big data, and machine learning to develop new ranking factors capable of producing the results most likely to meet user needs.

The goal of Artificial Intelligence (AI) is to understand intelligence by constructing computational models of intelligent behavior. This entails developing and testing falsifiable algorithmic theories of (aspects of) intelligent behavior, including sensing, representation, reasoning, learning, decision-making, communication, coordination, action, and interaction. AI is also concerned with the engineering of systems that exhibit intelligence. AI fuels analytics, which fuels actionable intelligence, which fuels business growth.


- Machine Learning

Machine learning is widely used in many modern AI applications. Machine-learning algorithms use statistics to find patterns in massive amounts of data. And data encompasses a lot of things - numbers, words, images, clicks, what have you. If it can be digitally stored, it can be fed into a machine-learning algorithm.

Machine learning is the process that powers many of the services we use today - recommendation systems like Netflix; search engines like Google; social-media feeds like Facebook; voice assistants like Siri; and so on. In all of these instances, each platform is collecting as much data about you as possible - what genres you like watching, what links you are clicking, which statuses you are reacting to - and using machine learning to make a highly educated guess about what you might want next. Or, in the case of a voice assistant, about which words match best with the funny sounds coming out of your mouth. Frankly, this process is quite basic: find the pattern, apply the pattern. But it pretty much runs the world. 
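
"Find the pattern, apply the pattern" can be shown in a few lines; this toy co-occurrence recommender (users, genres, and logic all invented for illustration) is a drastic simplification of what real recommendation systems do:

```python
# Toy sketch of pattern-based recommendation: suggest the genre most
# often watched by users whose tastes overlap with yours.
watch_history = {
    "ann": {"sci-fi", "thriller"},
    "bob": {"sci-fi", "thriller", "drama"},
    "cal": {"thriller", "drama"},
    "eve": {"sci-fi", "comedy"},
}

def recommend(user: str) -> str:
    """Find the pattern (co-watched genres), then apply it."""
    mine = watch_history[user]
    counts = {}
    for other, genres in watch_history.items():
        if other == user or not (mine & genres):
            continue  # only learn from users with overlapping taste
        for g in genres - mine:
            counts[g] = counts.get(g, 0) + 1
    return max(counts, key=counts.get)

print(recommend("ann"))  # → drama (two overlapping users watched it)
```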


- Deep Learning

Deep learning is machine learning on steroids: it uses a technique that gives machines an enhanced ability to find - and amplify - even the smallest patterns. This technique is called a deep neural network - deep because it has many, many layers of simple computational nodes that work together to munch through data and deliver a final result in the form of the prediction. Machine (and deep) learning comes in three flavors: supervised, unsupervised, and reinforcement. 

Deep learning is increasingly powered by accelerators such as GPUs, FPGAs, and, more recently, TPUs. More companies have been announcing plans to design their own accelerators, which are widely used in data centers. There is also an opportunity to deploy them at the edge - initially for inference, with limited training over time - including accelerators for very low-power devices. The development of these technologies will allow machine learning (and smart devices) to be used in many IoT devices and appliances.


- Neural Networks

Neural networks were vaguely inspired by the inner workings of the human brain. The nodes are sort of like neurons, and the network is sort of like the brain itself. 

Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text or time series, must be translated. A neural network is a computer system designed to work by classifying information in a way analogous to a human brain. It can be taught to recognize, for example, images, and classify them according to the elements they contain. The development of neural networks has been key to teaching computers to think and understand the world in the way we do, while retaining the innate advantages they hold over us such as speed, accuracy and lack of bias. 
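
As a minimal sketch of these ideas - layers of simple nodes, numerical vector inputs, and learning by gradient descent - here is a tiny two-layer network trained on the classic XOR toy task. Sizes, seed, and hyperparameters are all illustrative, not from the text:

```python
# A two-layer neural network in plain NumPy: hidden nodes apply a
# weighted sum plus nonlinearity; backpropagation nudges the weights.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])          # XOR: not linearly separable

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)  # 8 hidden "nodes"
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)        # hidden layer of simple nodes
    return h, sigmoid(h @ W2 + b2)  # output prediction

losses, lr = [], 0.5
for _ in range(2000):
    h, p = forward(X)
    losses.append(float(np.mean((p - y) ** 2)))  # mean-squared error
    # Backpropagation: apply the chain rule layer by layer.
    dp = 2 * (p - y) / len(X) * p * (1 - p)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(round(losses[0], 3), "->", round(losses[-1], 3))  # loss falls as patterns are found
```

Deep learning is the same recipe with many more layers and far more data.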

Kerry_Park_Seattle_WA_012115
(Kerry Park, Seattle, U.S.A. - Jeffrey M. Wang)


5. 5G and Beyond Mobile Wireless Technology


- 5G - A Whole New Revolution of Telecom Network Infrastructure

Mobile is the largest technology platform in human history. The incredible demand for wireless data bandwidth shows no sign of slowing down in the foreseeable future. At the same time, the mobile data experience for users continues to expand and develop, putting an increasing strain on networks' use of available wireless spectrum. 5G is about more than fast data rates and greater capacity. It's about the seamless, real-time interaction between humans and billions of intelligent devices. 5G wireless technology will make ultra-fast, reliable, and ubiquitous networks possible. This degree of ubiquity and speed will fuel the explosion of new connected IoT (Internet of Things) devices. 

4G turned mobile phones into movie-streaming platforms, but 5G promises more than speedy downloads. One day, intelligence will be integrated into infrastructure. Cars will drive themselves, houses will be self-cleaning, agriculture will work sustainably, electrical grids will respond automatically to fluxes in energy demands. It could pave the way for surgeons operating remotely on patients, and events that can be vividly experienced from thousands of miles away. Simply put, we’ll live in a smarter, more connected world.

A big game-changer for 5G is the parallel emergence of virtualization trends such as SDN, NFV, Distributed Cloud, and Network Slicing. With this powerful leverage, the entire wireless network infrastructure will need to change from the core to the edge and from proprietary hardware/software components to virtual network functions. 


- The 5G mmWave Spectrum

The high-frequency bands in the spectrum above 24 GHz were targeted as having the potential to support large bandwidths and high data rates, ideal for increasing the capacity of wireless networks. These high-frequency bands are often referred to as "mmWave" due to their short wavelengths, which can be measured in millimeters. Although the mmWave bands extend all the way up to 300 GHz, it is the bands from 24 GHz up to 100 GHz that are expected to be used for 5G. The mmWave bands up to 100 GHz are capable of supporting bandwidths up to 2 GHz, without the need to aggregate bands together for higher data throughput.
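
The "mmWave" label follows directly from wavelength = c / frequency; a quick check that the 24-100 GHz bands really do land in the millimeter range:

```python
# Wavelength of a radio signal: lambda = c / f, reported in millimeters.
C = 299_792_458  # speed of light in vacuum, m/s

def wavelength_mm(freq_hz: float) -> float:
    return C / freq_hz * 1000.0

for f_ghz in (24, 28, 60, 100):
    print(f"{f_ghz} GHz -> {wavelength_mm(f_ghz * 1e9):.1f} mm")
# 24 GHz gives roughly 12.5 mm; 100 GHz gives roughly 3.0 mm.
```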

Underlying the basic 5G mmWave technology is a new air interface based on time-division duplexing and robust orthogonal frequency division multiplexing (OFDM) methods similar to those used in LTE and Wi-Fi networks. With peak throughput speeds of 10 Gbps or more and the ability to support a huge number of devices, 5G mmWave has performance targets that will deliver a transformation in how wireless communications are utilized.


- 5G Standards Are Not Yet Finalized

5G has set a new standard for wireless, opening up the spectrum above 6 GHz that has been previously unusable by cellular services. The new mobile network technology has already begun using the current architecture of LTE to support non-standalone 5G, on the way to full standalone infrastructure that does not rely on 4G. The radio access technology (RAT) developed for 5G by 3GPP includes two frequency ranges: FR1, which operates below 6 GHz, and FR2, which includes bands above 24 GHz and into the extremely high frequency range above 50 GHz. 3GPP has dubbed 5G’s new air interface 5G NR (New Radio). Like LTE (long term evolution), the term describes a group of technologies that enable a range of speeds and capacities. The first 5G NR specifications were part of 3GPP’s RAN Evolution of LTE documented in Release 14, begun in June 2016.

5G standards are not yet finalized and the most advanced services are still in the pre-commercial phase. 5G needs spectrum within three key frequency ranges to deliver widespread coverage and support all use cases: sub-1 GHz, 1-6 GHz, and above 6 GHz, the last of which is needed to meet the ultra-high broadband speeds envisioned for 5G. Players in the U.S. national wireless industry (AT&T, Verizon, and others) are developing their 5G networks and working to acquire spectrum. AT&T is gearing up to launch the first standards-based 5G services in multiple U.S. markets by the end of 2018.

5G will achieve speeds of 20 gigabits per second, fast enough to download an entire Hollywood movie in a few seconds. It also will reduce latency - the measure of how long it takes a packet of data to be transmitted between two points - by a factor of 15. 5G networks will combine numerous wireless technologies, such as 4G LTE, Wi-Fi, and millimeter wave, and will also leverage cloud infrastructure, intelligent edge services, and a virtualized network core. 
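
The "movie in a few seconds" claim is easy to sanity-check with back-of-the-envelope arithmetic; the 10 GB file size below is an assumption for illustration:

```python
# Download time = file size (converted to bits) divided by link rate.
def download_seconds(size_gigabytes: float, rate_gbps: float) -> float:
    return size_gigabytes * 8 / rate_gbps  # 1 byte = 8 bits

print(download_seconds(10, 20))   # a 10 GB movie at 20 Gbit/s → 4.0 s
print(download_seconds(10, 0.1))  # the same movie at 100 Mbit/s → 800.0 s
```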


- Network Slicing - a Key Technology for 5G 

With network slicing, a single 5G network can be sectioned into multiple virtual networks that allow specific, optimized support for distinct use cases. The networks developed in network slicing are formed on demand and can perform different functions across verticals, enterprises and application types. What this means is that whereas existing networks serve all functions and customers - for example, utilities, manufacturing and automotive - as one, network slicing allows operators to select smarter customizations based on throughput, latency, data speeds and more to fit the needs of each customer and make the network as a whole more efficient. 

5G network slicing will allow telecommunication operators to split a single physical network into multiple virtual networks, giving the operator the ability to offer optimal support for different types of services across different customer segments. Network slicing can support customized connectivity designed to benefit many industries by offering a smarter way to segment the network to support particular services or business segments. With this technology, slices can be optimized by myriad characteristics, including latency or bandwidth requirements.  

The key benefit of network slicing technology is that it enables operators to provide networks on an as-a-service basis, which enhances operational efficiency while reducing time-to-market for new services. For example, network slicing technology can provide connectivity for smart meters with a network slice that connects Internet of Things (IoT) devices via a high-availability, high-reliability, data-only service with a given latency, data rate and security level. At the same time, the technology can provide another network slice with very high throughput, high data speeds and low latency for an augmented reality (AR) service. Each use case receives a unique set of optimized resources and network topology - covering SLA-specified factors such as connectivity, speed, and capacity - that suits the needs of that application. 

The new era of 5G connectivity will be characterized by its wide diversity of use cases and their varied requirements in terms of power, bandwidth, and speed. The greater elasticity brought about by network slicing will help to address the cost, efficiency, and flexibility requirements imposed by these future use cases.  


- 5G Will Drive Edge Intelligence 

With the 5G network infrastructure creating a completely new layer of “fog,” 5G will allow companies to feel more secure within their own private networks. In the same way that cloud virtualization transformed existing business systems over the past decade, the combination of network performance and edge compute capabilities will result in an “edge virtualization” that will change the way IoT operates.

5G and edge compute are tightly intertwined. Edge compute is a system function that provides the ability to perform digital computation in devices that exist at the edge of the IoT stack. Three key attributes will drive the transformation in edge intelligence: ultra-dense network node deployment, increased bandwidth from the higher-frequency spectrum, and private instances enabling interoperability and choice between public and private networks. As the 5G network must deploy many network nodes - as many as one every 50 m² - massive new amounts of edge compute are going to emerge near the edge of IoT.  


6. The Next Generation Internet (NGI) and Quantum Computing


- IPv6 - The Next Generation Internet 

Internet Protocol version 6 (IPv6) is the latest revision of the Internet Protocol (IP), the communications protocol that provides an identification and location system for computers on networks and routes traffic across the Internet. Every device on the Internet must be assigned an IP address in order to communicate with other devices. IPv6 contains addressing and control information to route packets for the Next Generation Internet (NGI). 

IPv6 addresses the main problem of IPv4: the exhaustion of addresses for connecting hosts in a packet-switched network. IPv6 has a very large address space, using 128-bit addresses compared with 32 bits in IPv4. It can therefore support 2^128 (about 3.4 × 10^38) unique IP addresses - a substantial increase in the number of computers that can be addressed. In addition, this addressing scheme eliminates the need for NAT (network address translation), which causes several networking problems in the end-to-end nature of the Internet by hiding multiple hosts behind a pool of IP addresses. 
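
Python's standard ipaddress module makes the 128-bit address space easy to explore directly:

```python
# Exploring IPv6 addresses with Python's stdlib ipaddress module.
import ipaddress

addr = ipaddress.IPv6Address("2001:db8::1")   # documentation-range address
print(addr.exploded)   # full form: 2001:0db8:0000:0000:0000:0000:0000:0001
print(2 ** 128)        # total address space, about 3.4e38

# Even a single /32 allocation contains 2^96 addresses.
net = ipaddress.IPv6Network("2001:db8::/32")
print(net.num_addresses == 2 ** 96)  # → True
```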

IPv6 addresses include a scope field that identifies the type of application suitable for the address. IPv6 does not support broadcast addresses, but instead uses multicast addresses for broadcast. In addition, IPv6 defines a new type of address called anycast. The IPv6 protocol can handle packets more efficiently, improve performance and increase security. It enables internet service providers to reduce the size of their routing tables by making them more hierarchical. 

IPv6 builds upon the functionality and structure of IPv4 in the following ways: Provides a simplified and enhanced packet header to allow for more efficient routing; Improves support for mobile phones and other mobile computing devices; Enforces increased, mandatory data security through IPsec (which was originally designed for it); Provides more extensive quality-of-service (QoS) support. 

IPv6 brings the quality of service (QoS) required by several new applications such as IP telephony, video/audio, interactive games and e-commerce. Whereas IPv4 is a best-effort service, IPv6 can ensure QoS - a set of service requirements that deliver performance guarantees while transporting traffic over the network, where quality refers to data loss, latency (jitter) or bandwidth. To implement QoS marking, the IPv6 header provides an 8-bit traffic-class field and a 20-bit flow label. Mobile IPv6, in turn, allows a computer or host to remain reachable regardless of its location in an IPv6 network, which in effect ensures transport-layer connection survivability: even though a mobile node changes locations and addresses, the existing connections through which it is communicating are maintained. To accomplish this, connections to mobile nodes are made via a specific address that is always assigned to the mobile node, and through which the mobile node is always reachable. 
Other important features of IPv6 include stateless autoconfiguration of hosts, which allows an IPv6 host to configure itself automatically when connected to a routed IPv6 network, and network-layer security: IPv6 implements network-layer encryption and authentication via IPsec. 
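
The traffic-class and flow-label fields mentioned above occupy, together with the 4-bit version field, the first 32 bits of the IPv6 header; a small bit-packing sketch (helper names invented for illustration):

```python
# First 32 bits of the IPv6 header: version (4 bits) | traffic class
# (8 bits, used for QoS marking) | flow label (20 bits).
def pack_first_word(traffic_class: int, flow_label: int) -> int:
    assert 0 <= traffic_class < 2 ** 8 and 0 <= flow_label < 2 ** 20
    return (6 << 28) | (traffic_class << 20) | flow_label  # version is 6

def unpack_first_word(word: int):
    """Return (version, traffic_class, flow_label)."""
    return word >> 28, (word >> 20) & 0xFF, word & 0xFFFFF

w = pack_first_word(traffic_class=46, flow_label=0xABCDE)
print(unpack_first_word(w))  # → (6, 46, 703710)
```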

Considering all these advantages of IPv6, the industry is nevertheless taking a long time to migrate from IPv4. Part of the reason is that network address translation (NAT) helped delay the transition: NAT makes it possible to direct traffic to thousands and thousands of individual IP addresses on private networks through NAT gateways that each use up just one public IP address. 

Most of the world “ran out” of new IPv4 addresses between 2011 and 2018 – but we won’t completely be out of them as IPv4 addresses get sold and re-used, and any leftover addresses will be used for IPv6 transitions. There’s no official switch-off date, so people shouldn’t be worried that their internet access will suddenly go away one day. As more networks transition, more content sites support IPv6 and more end users upgrade their equipment for IPv6 capabilities, the world will slowly move away from IPv4.  


- Quantum Communication Network 

Quantum computers are on the cusp of commercialization. What is quantum computing, and what will it be capable of? 

[World Economic Forum]: As China moves closer to building a working quantum communications network, the possibility of a quantum Internet becomes more and more real. In the simplest of terms, a quantum Internet would be one that uses quantum signals instead of radio waves to send information. The Internet as we know it uses radio frequencies to connect various computers through a global web in which electronic signals are sent back and forth. In a quantum internet, signals would be sent through a quantum network using entangled quantum particles. 

Researchers have recently made significant progress in building this quantum communication network. China launched the world's first quantum communication satellite in 2016, and they've since been busy testing and extending the limits of sending entangled photons from space to ground stations on Earth and back again. They've also managed to store information using quantum memory. By the end of August 2017, the nation planned to have a working quantum communication network to boost the Beijing-Shanghai internet. Leading these efforts is Jian-Wei Pan of the University of Science and Technology of China, and he expects that a global quantum network could exist by 2030. That means a quantum internet is just 13 years away, if all goes well. 


- What is a Quantum Computer? 

[BBC]: "A quantum computer is a machine that is able to crack very tough computation problems with incredible speed - beyond that of today's "classical" computers. In conventional computers, the unit of information is called a "bit" and can have a value of either 1 or 0. But its equivalent in a quantum system - the qubit (quantum bit) - can be both 1 and 0 at the same time. This phenomenon opens the door for multiple calculations to be performed simultaneously. However, qubits need to be synchronised using a quantum effect known as entanglement, which Albert Einstein termed "spooky action at a distance". There are four types of quantum computers currently being developed, which use: Light particles; Trapped ions; Superconducting qubits; Nitrogen. vacancy centres in diamonds. 

Quantum computers will enable a multitude of useful applications, such as being able to model many variations of a chemical reaction to discover new medications; developing new imaging technologies for healthcare to better detect problems in the body; or to speed up how we design batteries, new materials and flexible electronics." 
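
The "both 1 and 0 at the same time" behaviour can be sketched with a two-element state vector: a Hadamard gate puts |0⟩ into an equal superposition, and applying it again interferes the amplitudes back to a definite 0. This is a toy classical simulation, not a real quantum device:

```python
# Toy state-vector simulation of a single qubit using NumPy.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
ket0 = np.array([1.0, 0.0])                   # the classical bit 0

state = H @ ket0                 # superposition: amplitudes (1/√2, 1/√2)
probs = np.abs(state) ** 2       # Born rule: probabilities of measuring 0 or 1
print(probs)                     # → [0.5 0.5]: equal odds until measured

# A second Hadamard interferes the amplitudes back to a definite 0.
print(np.abs(H @ state) ** 2)    # → [1. 0.]
```

The interference in the last step, impossible for classical probabilities, is what quantum algorithms exploit.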


- How do you build the next-generation Internet? 

[BBC]: It's not easy to develop technology for a device that hasn't technically been invented yet, but quantum communications is an attractive field of research because the technology will enable us to send messages that are much more secure. 

There are several problems that will need to be solved in order to make a quantum Internet possible: getting quantum computers to talk to each other; making communications secure from hacking; transmitting messages over long distances without losing parts of the message; and routing messages across a quantum network.   

Hong Kong_4
(Hong Kong)


7. Digital Disruption and Building the Digital Platforms of the Future


- Digital Disruption and the Global Software Revolution

The new digital technologies have led to widespread use of cloud computing, recognition of the potential of big data analytics and artificial intelligence, and significant progress in aspects of the Internet of Things such as home automation, smart cities and grids, and digital manufacturing. 

Disruption in commerce means a radical break from the existing processes in an industry. In the digital age, disruption usually comes from new Internet-enabled business models that are shaking up established industry structures.

The growth of accessible cloud infrastructure, SaaS and open-source software solutions, and mobile computing has significantly lowered barriers to innovation, distribution, and adoption of ICT. This ubiquitous access to advanced technology is making software and communications technologies key differentiators in the way organizations of all sizes now compete. More and more major businesses and industries are now run on software and delivered as online services - drawing inspiration from Silicon Valley-style entrepreneurial technology companies that are rapidly disrupting established industry structures.

Businesses, government agencies and even Non-governmental Organizations (NGOs) are being forced to adopt these new operating practices, or face going out of business. Digital disruption is causing chaos and opportunity in every industry. The risks are real and dramatic. 


- Building a Next-Gen Digital Platform to Lead in the New Digital Economy

The enterprise world is changing faster than ever. To compete, it is now necessary to do business at an almost unprecedented size and scale. In order to achieve this scale, winning companies are establishing digital platforms that extend their organizational boundaries. With the Internet as the platform for innovation and the emergence of the information-fueled economy, technology is both a strategic requirement and a strategic advantage. 

While the term “digital platforms” covers everything from search engines (such as Google) and social platforms (such as Facebook) to IaaS and PaaS providers (such as AWS and Azure), digitalized business technology is becoming increasingly refined. 

Digital platforms are virtualized, containerized, and treated like malleable, reusable resources, with workloads remaining independent from the operating environment. Systems are loosely coupled and embedded with policies, controls, and automation. Likewise, on-premises, private cloud, or public cloud capabilities can be employed dynamically to deliver any given workload at an effective price and performance point.


- Digital Platforms in Banking and Financial Services 

Advances in digital technology have expanded awareness of the benefits of conducting financial transactions online or with mobile devices. At the same time, digital advances have provided access to financial services for billions of previously unserved and underserved consumers worldwide, especially in less developed economies. 

Based on current trends, digital platforms will become the preferred and dominant business model for banks and financial institutions in the future. Digital platforms offer consumers and small businesses the ability to connect to financial and other service providers through an online or mobile channel as an integrated part of their day-to-day activities. 

In emerging markets, where billions of people lack access to traditional financial services, FinTech could lead to a revolution in financial inclusion and membership in the new global digital economy. It can give individuals and businesses access to useful and affordable financial products and services that meet their needs - transactions, payments, savings, credit and insurance - delivered in a responsible and sustainable way. Financial inclusion is a key enabler to reducing poverty and boosting prosperity. 

With lower distribution costs and simplified engagement, the movement from paper to digital is picking up speed and increasing consumer expectations. This provides traditional financial institutions the opportunity to transform legacy delivery options, while also challenging the business case for existing physical infrastructures. 

The digitization of financial services will also improve identity management through enhanced biometrics. This will expand access to banking services in underserved markets and improve traditional payments and global money movement. 


- Digital Manufacturing Platforms for Connected Smart Factories and Industry 4.0 

Digital manufacturing platforms will be fundamental for the development of industry 4.0 and connected smart factories. They play an increasing role in dealing with competitive pressures and incorporating new technologies, applications and services. Advances are needed in digital manufacturing platforms that integrate different technologies, make data from the shop floor and the supply network easily accessible, and allow for complementary applications. The challenge is to fully exploit new concepts and technologies that allow manufacturing companies, especially mid-caps, small and medium-sized enterprises (SMEs), to fulfill the demands from changing supply and value networks. 

Smart manufacturing is the use of real-time data and information and communications technology to advance manufacturing intelligence, and to significantly improve productivity, performance, technology adoption, as well as addressing sustainability issues. More specifically, this will require the research, development, and transition to industry of advanced sensing and instrumentation; process monitoring, control, and optimization; advanced hardware and advanced software platforms; and real-time and predictive modeling and simulation technologies. 

Smart industry is a synonym for Industry 4.0, or industrial transformation in the fourth industrial revolution, within which smart manufacturing de facto fits. Sensors and data analytics underpin smart manufacturing initiatives. Industry 4.0 is the name given to the German strategic initiative to establish Germany as a lead market and provider of advanced manufacturing solutions. It represents a paradigm shift from “centralized” to “decentralized” smart manufacturing and production. The fourth industrial revolution is powered by robotics, artificial intelligence, the Internet of Things (IoT), drones, 3D printing, augmented reality, and cloud technologies, all of which will use wireless 5G technology to allow machine-to-machine communication. This will become the backbone of manufacturing and related services in the future. 

In this hypercompetitive world, the need for businesses to deliver more for less whilst meeting an increase in consumer expectations is ever more challenging. Logistics 4.0 and Smart Supply Chain Management are at the heart of addressing these challenges through a combination of cross-industrial methodologies. Organizations will need to become more innovative if they are to manage the complexity of today’s environment and succeed in creating more value for their stakeholders. 

The following nine pillars of technological advancement underpin Industry 4.0: Big Data and Analytics, Autonomous Robots, Simulation, Horizontal and Vertical System Integration, the Industrial Internet of Things, Cybersecurity, the Cloud, Additive Manufacturing, and Augmented Reality. Many of these advances are already used in manufacturing, but with Industry 4.0 they will transform production: isolated, optimized cells will come together as a fully integrated, automated, and optimized production flow, leading to greater efficiencies and changing traditional production relationships among suppliers, producers, and customers - as well as between humans and machines. 


- Agricultural Digital Integration Platforms 

Digital agriculture is the use of new and advanced technologies, integrated into one system, to enable farmers and other stakeholders within the agriculture value chain to improve food production. The rise of digital agriculture and its related technologies has opened a wealth of new data opportunities. Remote sensors, satellites, and drones can gather information 24 hours per day over an entire field. These can monitor plant health, soil condition, temperature, humidity, etc. The amount of data these sensors can generate is overwhelming, and the significance of the numbers is hidden in the avalanche of that data. Companies are leveraging computer vision and deep-learning algorithms to process data captured by drones and/or software-based technology to monitor crop and soil health. Machine learning models are being developed to track and predict various environmental impacts on crop yield such as weather changes. 
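
As a toy illustration of turning one of these raw sensor streams into an actionable decision - the readings, smoothing window, and irrigation threshold below are all invented for the example:

```python
# Sketch: smooth a noisy soil-moisture stream, then raise an alert
# when the smoothed level drops below a set point.
def moving_average(readings, window=3):
    """Simple trailing moving average over the reading stream."""
    return [sum(readings[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(readings))]

def needs_irrigation(readings, threshold=30.0):
    """Alert when the smoothed moisture level dips below the threshold."""
    smoothed = moving_average(readings)
    return smoothed[-1] < threshold

field = [42, 40, 38, 35, 31, 29, 27]   # % volumetric moisture, hourly
print(needs_irrigation(field))  # → True: smoothed tail is 29.0, below 30
```

Production systems would fuse many such streams (satellite, drone, weather) in the kind of integrating platform described below, but the sense → smooth → decide shape is the same.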

Needed first and foremost in this digital ecosystem is an integrating digital platform. Much as Apple opened the iPhone to independent application providers, a standardized digital platform would provide a hub where agtech providers can sell their wares while their data is captured and integrated into the platform. This integrated platform gives farmers the ability to track their operations from several different angles, from soil moisture sensing to satellite imagery to weather data, so they can better predict and decide how their operations are faring. The integrating platform enables and protects stakeholder access and information; automates the development and analysis of massive bodies of data; and develops, reveals, and manages the potential costs and revenues of these decisions. Decisions can then be implemented quickly and accurately through robotics and advanced machinery, and farmers can get real-time feedback on the impact of their actions. 
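As a rough sketch of the hub idea, such a platform can be reduced to a registry of provider data feeds exposed through one merged view. The provider names and field keys below are invented for illustration:

```python
# Illustrative sketch of an integrating agtech platform: providers
# register data feeds behind one common interface, and the platform
# merges their latest readings into a single view for the farmer.

class Platform:
    def __init__(self):
        self.providers = {}

    def register(self, name, feed):
        """feed is any callable returning a dict of current readings."""
        self.providers[name] = feed

    def snapshot(self):
        """Merge the latest readings from every registered provider."""
        view = {}
        for name, feed in self.providers.items():
            for key, value in feed().items():
                view[f"{name}.{key}"] = value
        return view

platform = Platform()
platform.register("soil", lambda: {"moisture_pct": 31.5})
platform.register("weather", lambda: {"rain_mm_24h": 4.2})
snapshot = platform.snapshot()
```

The design point is the common interface: each provider only has to expose a feed, and the platform, not the farmer, does the integration.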

Digital agriculture has the potential to transform the way we produce the world’s food, but the approach is still very new: costs are high and evidence of the long-term benefits is scarce. Securing widespread adoption will therefore require collaboration and consensus across the value chain on how to overcome these challenges.  


- Digital Service Platforms for Rural Economies 

The term ‘Digital Entrepreneurship’ most commonly refers to the process of creating a new, Internet-enabled or Internet-delivered business, product or service. The definition covers both startups bringing a new digital product or service to market and the digital transformation of an existing business activity inside a firm or the public sector.
In the developed world, the emergence of utility-based cloud computing is shifting attention from technical barriers to the business-environment challenges facing digital entrepreneurs. This shift reinforces the growing importance of policies that foster the best climate for digital service incubation, growth and development. In many rural areas and developing countries, however, even basic infrastructure remains a challenge, from hardware, networks and content to the wider ICT ecosystem and to skills on both the consumer and business sides. 

AT&T, for example, has recently been rolling out broadband connectivity across rural and underserved locations in the United States. Its Fixed Wireless Internet service offers download speeds of at least 10 Mbps and upload speeds of at least 1 Mbps, delivered from a wireless tower to a fixed antenna mounted on the customer’s home. This cost-effective connection is arguably one of the best ways to deliver faster, high-quality broadband to customers in underserved rural areas.  


- Digital Platform for Cultural Heritage 

Cultural heritage takes on new life through digital technologies and the Internet. Information and Communications Technology (ICT) changes the way cultural digital resources are created, disseminated, preserved and (re)used, and it empowers many types of users to engage with them. People now have unprecedented opportunities to access cultural material, while institutions can reach broader audiences, engage new users and develop creative, accessible content for leisure and education. New technologies bring cultural heritage sites back to life, for example through web discovery interfaces that present a wealth of information from collections (archives, scientific collections, museums, art galleries, the visual arts, etc.) and enable their re-use and re-purposing according to users' needs and inputs. 

Technology is rapidly transforming the operations of museums and nonprofits. Now more than ever, organizations must keep abreast of the technologies irrevocably changing the way they interact with visitors and administer services. Technology is turning museums into a booming industry. 

A virtual museum (VM) is a digital entity that draws on the characteristics of a museum in order to complement, enhance, or augment the museum experience through personalization, interactivity, user experience and richness of content. A VM is not a real museum transposed to the web, nor an archive or database of virtual digital assets, but a provider of information on top of being an exhibition room. It gives people access to digital content before, during and after a visit in a range of digital ‘encounters’. VMs are technologically demanding, especially in terms of virtual and augmented reality and storytelling authoring tools, which must cover various types of digital creations, including virtual reality and 3D experiences, located online, in museums or on heritage sites. The challenge will be to place further emphasis on improving access, establishing meaningful narratives for collections and displays, and developing story-led interpretation through VMs. It will also require addressing the fundamental issues that make this possible, e.g. image rights, licensing and the ability of museums to support new ICT. Virtual museums offer visitors the possibility to see artworks residing in different places in context and to experience objects or sites inaccessible to the public. 

Cultural and creative industries are the economic activities of artists, arts enterprises, and cultural entrepreneurs in the production, distribution and consumption of film, literature, theatre, dance, visual arts, broadcasting, and fashion. New digital and information and communication technologies have revolutionized the industry's production process, distribution channels, and consumption modes.  


- Digital Platforms for Interoperable and Smart Homes, Smart Buildings, Smart Environments, and Smart Grids 

Modern society depends on a reliable, abundant supply of energy, and as populations and cities grow, demand will only increase. We need smart grid technology because electricity demand is rising even as we must cut consumption to fight global warming. New sources of power generation will undoubtedly be needed to meet skyrocketing world energy demand. We will need a scalable, innovative, and clean energy portfolio that meets the world’s need for reliable energy while considering the economic, environmental, health and climate effects of energy generation. In the meantime, the smart grid will be implemented incrementally over the next two decades as technology, pricing, policy, and regulation change. 

As energy production becomes decentralised and ICT becomes increasingly present in homes, the integration of renewable energy sources (RES) and the promotion of energy efficiency should benefit from smarter homes, buildings and appliances, as well as from (the batteries in) electric vehicles. Smart homes and buildings are a crucial element, because system integration and optimisation of distributed generation, storage and flexible consumption will require interoperable smart technologies installed at the building level. The Internet of Things (IoT) enables seamless integration of home appliances with related home comfort and building automation services, allowing user needs to be matched with the management of distributed energy across the grid and giving consumers access to the benefits of Demand Response. Novel services should lead to more comfortable, convenient and healthier living environments at lower energy cost, while enabling consumers to participate actively in the energy system and energy markets. 
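Demand response boils down to shifting flexible consumption to the hours when energy is cheapest or most plentiful. A minimal sketch, with invented hourly prices and a hypothetical appliance (say, an EV charger) that needs three hours of power:

```python
# Sketch of demand response: schedule a flexible load into the
# cheapest hours of the day. Prices are invented for illustration.

def cheapest_hours(prices, hours_needed):
    """Return the indices of the lowest-price hours, in time order."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(ranked[:hours_needed])

hourly_prices = [0.30, 0.12, 0.10, 0.25, 0.11, 0.28]  # $/kWh
schedule = cheapest_hours(hourly_prices, 3)  # charge during these hours
```

A real home energy manager would also weigh comfort constraints, battery state and grid signals, but the core decision is this ranking of flexible load against price.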


- Big Data Solutions for Energy 

Tomorrow's energy grids consist of heterogeneous interconnected systems, of an increasing number of small-scale and of dispersed energy generation and consumption devices, generating huge amounts of data. The electricity sector, in particular, needs big data tools and architectures for optimized energy system management under these demanding conditions.
Digital data and analytics can reduce O&M costs by enabling predictive maintenance, which can lower the price of electricity for end users. Digital data and analytics can help achieve greater efficiencies through improved planning, improved efficiency of combustion in power plants and lower loss rates in networks, as well as better project design throughout the power system. In networks, efficiency gains can be achieved by lowering the rate of losses in the delivery of power to consumers, for example through remote monitoring that allows equipment to be operated closer to its optimal conditions, and flows and bottlenecks to be better managed by grid operators. Digital data and analytics can also reduce the frequency of unplanned outages through better monitoring and predictive maintenance, as well as limiting the duration of downtime by rapidly identifying the point of failure. This reduces costs and increases the resilience and reliability of supply. 
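The predictive-maintenance idea above can be sketched very simply: learn a baseline from normal sensor readings, then flag live readings that deviate sharply, so equipment can be serviced before it fails. The readings and threshold below are invented for illustration:

```python
# Sketch of predictive maintenance via anomaly detection: flag
# readings far outside the baseline distribution. Numbers are
# illustrative only, not real equipment data.
import statistics

def anomalies(baseline, readings, threshold=3.0):
    """Indices of readings more than `threshold` baseline
    standard deviations from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [i for i, r in enumerate(readings)
            if abs(r - mean) > threshold * stdev]

baseline = [70.1, 69.8, 70.3, 70.0, 69.9]  # normal bearing temps (C)
live = [70.2, 70.1, 78.5, 70.0]            # one reading spikes
flagged = anomalies(baseline, live)        # index of the spike
```

Production systems use far richer models over many correlated signals, but the principle of comparing live telemetry against a learned normal regime is the same.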

Artificial Intelligence (AI) is making its way into all types of industries, including the energy sector, with significant growth in the use of AI to leverage big data and draw inferences from very large data sets. AI is the application of machine learning to automate and support decision-making in a complex system. AI has great potential to coordinate and optimize the use of distributed energy resources, electric vehicles, and the IoT. Its use aligns well with the pace of change that utilities, regulators and customers expect in common utility operations, including: reliability (e.g., self-healing grids, operations improvement, and efficient use of renewable resources and energy storage); safety (e.g., outage prediction and response); cybersecurity (e.g., threat detection and response); optimization (e.g., asset, maintenance, workflow and portfolio management); and enhancements to the customer experience (e.g., faster and more intuitive interactive voice response, personalization, and product and service matching).  

The U.S. Capitol_IMG_0606
(The U.S. Capitol, Washington D.C., Jeff M. Wang)


- The Smart Hospital of the Future  

Smart hospitals are those that optimize, redesign or build new clinical processes, management systems and potentially even infrastructure, all enabled by an underlying digitized network of interconnected assets, to provide valuable services or insights that were not previously possible and to achieve better patient care, patient experience and operational efficiency. For example, very sick patients in isolation rooms can visit with holograms of their loved ones. Visitors will find their way around the hospital using an augmented reality (AR)-based indoor navigation system. Authorized medical workers will use facial recognition to enter secure areas. Patients can call a nurse and control their bed, lights, and TV with an Alexa-style voice assistant. That’s the vision, at least. 

Smart hospitals rely on interconnected advanced technology and automation to improve patient care, clinician workflow, and overall efficiency, using health ICT infrastructure such as mobile devices, data analytics solutions, and cloud computing. Transitioning ICT infrastructure to support a smart hospital can be challenging, but the transformation can take place in stages. Not every hospital needs to become smart in a single step. Instead, hospitals can implement smart solutions one by one, allowing newer solutions to integrate with existing ones on the journey toward becoming smart. 

The smart hospital framework involves three essential layers: data, insight and access. Data is already being collected today, although not necessarily from every system in a hospital, but it is not yet integrated to derive ‘smart’ insight, which can be done by feeding it into analytics or machine-learning software. That insight must then be accessible to the user, whether a doctor, a nurse, facilities personnel or another stakeholder, through an interface such as a desktop, smartphone or similar handheld device, empowering them to make critical decisions faster and improving their efficiency. 
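The three layers can be sketched as three small functions. The vital signs, room label and alert rule below are invented purely to illustrate the data-to-insight-to-access flow, not clinical guidance:

```python
# Toy sketch of the three-layer smart hospital framework:
# data -> insight -> access. All values are illustrative.

def data_layer():
    """Collect raw readings (here, a single hard-coded record)."""
    return {"patient": "room 12", "heart_rate": 128, "spo2": 91}

def insight_layer(record):
    """Derive alerts from raw data with simple illustrative rules."""
    alerts = []
    if record["heart_rate"] > 120:
        alerts.append("elevated heart rate")
    if record["spo2"] < 92:
        alerts.append("low oxygen saturation")
    return alerts

def access_layer(record, alerts):
    """Format the insight for a clinician's device."""
    return f"{record['patient']}: " + (", ".join(alerts) or "no alerts")

record = data_layer()
message = access_layer(record, insight_layer(record))
```

The separation matters: the data layer can grow to ingest every hospital system, and the insight layer can be swapped for machine-learning models, without changing how clinicians access the result.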

There are three areas that any smart hospital addresses: operations, clinical tasks and patient centricity. Operational efficiency can be achieved by employing building automation systems and smart asset maintenance and management solutions, along with improving the internal logistics of mobile assets, pharmaceuticals, medical devices, supplies and consumables, as well as control over the flow of staff, patients and visitors. These solutions not only reduce operating costs such as energy, but also reduce the need for capital expenditure, for example by improving the utilization rates of existing mobile equipment. Addressing patient-flow bottlenecks improves efficiency, allowing more patients to move through the system and creating more revenue opportunities at lower cost.   


8. Cybersecurity and Advanced Software Engineering


- The End of Privacy

Every time you ask Amazon's Alexa, for example, to order more cat food, you are giving a billion-dollar company more data about who you are: mainly, that you have a cat who likes to eat, but also credit card information, addresses, telephone numbers, and so on. We have decided to sacrifice some of this potentially private information for the sake of convenience. These technologies will become increasingly good at knowing what we want, and the trade-off is that we will surrender more of our autonomy for that convenience. This will only increase.

Cyber security protects the data and integrity of computing assets belonging to or connecting to an organization’s network. Its purpose is to defend those assets against all threat actors throughout the entire life cycle of a cyber attack.

We’re going to see more mega-breaches and ransomware attacks in the years to come. Planning to deal with these and other established risks, such as threats to web-connected consumer devices and to critical infrastructure like electrical grids and transport systems, will be a top priority for security teams. But cyber-defenders should be paying attention to new threats, too.


- Why Is Cybersecurity So Important?  

[The U.S. Department of Homeland Security]: "Our daily life, economic vitality, and national security depend on a stable, safe, and resilient cyberspace. Cyberspace and its underlying infrastructure are vulnerable to a wide range of risks stemming from both physical and cyber threats and hazards. Sophisticated cyber actors and nation-states exploit vulnerabilities to steal information and money and are developing capabilities to disrupt, destroy, or threaten the delivery of essential services."

[Cisco]: "Cybersecurity is the practice of protecting systems, networks, and programs from digital attacks. These attacks are usually aimed at accessing, changing, or destroying sensitive information; extorting money from users; or interrupting normal business processes. Implementing effective cybersecurity measures is particularly challenging today because there are more devices than people, and attackers are becoming more innovative."  


- Emerging Cyber-Threats 

Cyber security has never been simple. Because attacks evolve every day as attackers become more inventive, it is critical to define cyber security properly and to identify what constitutes good cyber security.
 
[MIT]: "Here are some emerging cyber-threats should be on watch lists: 

  • Exploiting AI-generated fake video and audio: Thanks to advances in artificial intelligence (AI), it’s now possible to create fake video and audio messages that are incredibly difficult to distinguish from the real thing. These “deepfakes” could be a boon to hackers in a couple of ways. They could power AI-generated “phishing” e-mails that aim to trick people into handing over passwords and other sensitive data. Cybercriminals could also use the technology to manipulate stock prices by, say, posting a fake video of a CEO announcing that a company is facing a financing problem or some other crisis. There’s also the danger that deepfakes could be used to spread false news in elections and to stoke geopolitical tensions. Such ploys would once have required the resources of a big movie studio; now they can be pulled off by anyone with a decent computer and a powerful graphics card.
  • Poisoning AI defenses: Security companies have rushed to embrace AI models as a way to help anticipate and detect cyberattacks. AI can help us parse signals from noise, but in the hands of the wrong people, it’s also AI that’s going to generate the most sophisticated attacks.
  • Hacking smart contracts: Smart contracts are software programs stored on a blockchain that automatically execute some form of digital asset exchange if conditions encoded in them are met. Entrepreneurs are pitching their use for everything from money transfers to intellectual-property protection. But it’s still early in their development, and researchers are finding bugs in some of them. So are hackers, who have exploited flaws to steal millions of dollars’ worth of cryptocurrencies.
  • Breaking encryption using quantum computers: Security experts predict that quantum computers, which harness exotic phenomena from quantum physics to produce exponential leaps in processing power, could crack encryption that currently helps protect everything from e-commerce transactions to health records. Quantum machines are still in their infancy, and it could be some years before they pose a serious threat. But products like cars whose software can be updated remotely will still be in use a decade or more from now. The encryption baked into them today could ultimately become vulnerable to quantum attack. The same holds true for code used to protect sensitive data, like financial records, that need to be stored for many years."
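The essence of a smart contract, as described above, is code that releases an asset only when an encoded condition is met. The toy escrow below mirrors that idea in plain Python; real smart contracts run on a blockchain virtual machine, and all names here are invented for illustration:

```python
# Toy escrow illustrating the smart-contract idea: funds are locked
# in the contract and released only when its condition holds.

class Escrow:
    def __init__(self, amount, condition):
        self.amount = amount        # funds locked in the contract
        self.condition = condition  # callable encoding the terms
        self.released = False

    def settle(self):
        """Pay out only if the encoded condition is met."""
        if self.released:
            raise RuntimeError("already settled")
        if not self.condition():
            return 0                # condition not met: nothing moves
        self.released = True        # update state before paying out
        return self.amount

delivered = {"done": False}
contract = Escrow(100, lambda: delivered["done"])
first = contract.settle()    # goods not delivered yet: pays 0
delivered["done"] = True
second = contract.settle()   # condition now holds: pays 100
```

It is precisely this condition-and-release logic that hides the bugs hackers exploit in real contracts, for instance when state is updated after, rather than before, funds move.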


- AI in Data Security

The security of data is crucial for every company, and cyber-attacks are growing rapidly in the digital world. AI can be used to make data safer and more secure. Systems such as AEG and the AI2 platform, for example, are used to detect software bugs and cyber-attacks more effectively. 


- Tighten Security with Better Software Development

What’s to stop someone from hacking into an online software system or application and stealing data or access to critical processes? Both the threats and the solutions depend on software. One wall of defense is secure development with a focus on quality assurance, testing and code review.  
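One concrete secure-development practice is the parameterized query, which keeps untrusted input out of the SQL text itself and so blocks a classic injection attack. The sketch below uses Python's built-in sqlite3 module; the table and payload are invented for illustration:

```python
# Parameterized queries as a wall of defense: the driver binds the
# value separately, so attacker input is never parsed as SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # a typical injection payload

# Safe: the ? placeholder treats the payload as a literal name,
# which matches no user, so nothing leaks.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (user_input,)
).fetchall()
```

Had the query been built by string concatenation, the payload would rewrite the WHERE clause to match every row; binding the value through a placeholder is exactly the kind of practice that quality assurance, testing and code review are meant to enforce.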


- Social Credit Algorithms

"Social credit algorithms use facial recognition and other advanced biometrics to identify a person and retrieve data about that person from social media and other digital profiles for the purpose of approval or denial of access to consumer products or social services. In our increasingly networked world, the combination of biometrics and blended social data streams can turn a brief observation into a judgment of whether a person is a good or bad risk or worthy of public social sanction. Some countries are reportedly already using social credit algorithms to assess loyalty to the state." -- [IEEE Computer Society]

 

 

[More to come ...]


 <drafted by hhw: 5/22/2020>

 

 
