The beginning of edge computing can be traced back to the 1990s, when the first content delivery networks (CDNs) were created. CDNs are networks of servers that are deployed close to end users, which allows them to deliver web and video content more quickly and efficiently.
In the early 2000s, CDNs began to host applications and
application components, which marked the beginning of commercial edge
computing. This led to the development of new applications that required low
latency and real-time processing, such as dealer locators, shopping carts, and
ad insertion engines.
The rise of the Internet of Things (IoT) in the 2010s
further accelerated the development of edge computing. IoT devices generate a
massive amount of data that needs to be processed quickly and efficiently. Edge
computing allows this data to be processed closer to the source, which reduces
latency and improves performance.
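As a rough illustration of processing data closer to the source, here is a minimal, hypothetical sketch of an edge node that aggregates raw sensor readings locally and forwards only a compact summary upstream. All names and numbers are illustrative, not taken from any real IoT platform:

```python
# Hypothetical sketch: instead of shipping every raw IoT reading to the
# cloud, the edge node reduces a batch of readings to a small summary,
# cutting both bandwidth use and round-trip latency.

def summarize_readings(readings):
    """Reduce a batch of raw sensor readings to a small summary dict."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

# 1,000 raw temperature readings arrive at the edge node...
raw = [20.0 + (i % 10) * 0.1 for i in range(1000)]

# ...but only one small summary record needs to cross the network.
summary = summarize_readings(raw)
print(summary["count"])
```

In a real deployment the summary (rather than the raw stream) would be sent to the cloud on a schedule, with raw data kept or discarded locally according to policy.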
Today, edge computing is a key technology for a wide
range of applications, including:
Smart cities
Industrial automation
Connected cars
Virtual reality
Augmented reality
As the number of connected devices continues to grow, the
demand for edge computing is expected to increase. Edge computing will play a critical
role in the future of computing, enabling us to build more responsive,
efficient, and secure applications.
Here are some of the key reasons why edge computing was
created:
To reduce bandwidth costs.
To improve latency.
To enable real-time processing.
To improve security.
To comply with regulations.
Edge computing has a number of advantages over traditional
cloud computing, including:
Reduced latency: Edge computing can significantly reduce
latency by processing data closer to the source. This is important for
applications that need real-time processing, such as self-driving cars and
medical devices.
Improved performance: Edge computing can improve the
performance of applications by offloading processing tasks from the cloud to
local devices. This can free up cloud resources for other applications and
improve the overall performance of the network.
Increased security: Edge computing can improve security by
reducing the amount of data that needs to be transmitted over the network. This
can make it more difficult for attackers to intercept and steal data.
Compliance: Edge computing can help organizations comply with
regulations that require data to be processed and stored within a certain
geographic region.
Overall, edge computing is a powerful technology that can
provide a number of benefits for businesses and organizations. As the number of
connected devices continues to grow, the demand for edge computing is expected to
increase.
When was edge computing first introduced?
Edge computing was first introduced in the 1990s with
the creation of the first content delivery network (CDN). CDNs are networks of
servers that are deployed close to end users, which allows them to deliver web
and video content more quickly and efficiently.
The first CDN, Akamai, was founded in 1998 by a team of MIT
researchers. Akamai's goal was to relieve congestion on the web by serving
content from servers located near end users, rather than routing every request
to a single, distant origin server.
Akamai's CDN worked by caching web and video content on
servers that were located close to end users. This allowed Akamai to deliver
content more quickly and efficiently, which saved businesses money on bandwidth
costs.
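The caching pattern described above can be sketched in a few lines. This is a hypothetical toy model of an edge cache with made-up names, not Akamai's actual implementation:

```python
# Toy sketch of CDN-style edge caching: the edge server answers requests
# from a local cache when it can (a "hit") and fetches from the distant
# origin only on a "miss", keeping a copy for subsequent requests.

class EdgeServer:
    def __init__(self, origin):
        self.origin = origin       # maps URL -> content, far from users
        self.cache = {}            # local copies, close to users
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self.cache:      # fast local path
            self.hits += 1
            return self.cache[url]
        self.misses += 1           # slow round trip to the origin
        content = self.origin[url]
        self.cache[url] = content  # cache it for the next request
        return content

origin = {"/video.mp4": b"...bytes..."}
edge = EdgeServer(origin)
edge.get("/video.mp4")             # miss: fetched from the origin
edge.get("/video.mp4")             # hit: served from the edge cache
print(edge.hits, edge.misses)      # prints "1 1"
```

A production CDN adds eviction policies, cache-control headers, and invalidation, but the hit/miss structure is the same.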
Akamai's CDN was a major breakthrough in the field of edge
computing. It showed that it was possible to deliver content more quickly and
efficiently by caching it on servers that were located close to end users. This
paved the way for the development of more advanced edge computing technologies
that are used today.
Here are some of the key milestones in the history of
edge computing:
1998: Akamai is founded and launches the first CDN.
2009: The term "edge computing" is first used in a
published paper.
2012: Cisco introduces the term "fog computing" to
describe a distributed computing architecture that is closely related to edge
computing.
2019: Amazon Web Services announces AWS Local Zones and AWS
Wavelength, extending its cloud infrastructure toward the network edge.
2019: Microsoft launches Azure Stack Edge, an edge computing
appliance managed from Azure.
2019: Google announces Anthos, a Kubernetes-based platform that
can run workloads at the edge.
Today, edge computing is a rapidly growing field. It is
being used in a wide range of applications, including smart cities, industrial
automation, and connected cars. As the number of connected devices continues to
grow, the demand for edge computing is expected to increase.
Who is the inventor of edge computing?
There is no single inventor of edge computing. The concept
of edge computing has evolved over time, and it is the result of the work of
many different people and organizations.
Tom Leighton: Leighton is a computer scientist who is best
known for his work on content delivery networks (CDNs). In 1998, Leighton
co-founded Akamai, which is one of the world's leading CDNs. Akamai's CDN is a
key example of edge computing, as it delivers content to end users from servers
that are located close to them.
Cisco Systems: Cisco is a networking company that has been a
major player in the development of edge computing. In 2012, Cisco introduced
the term "fog computing" to describe a distributed computing
architecture that is similar to edge computing. Cisco has also developed a
number of edge computing products, such as its Fog Computing Reference
Architecture.
Intel: Intel is a semiconductor company that has been a
major contributor to the development of edge computing hardware. Intel has
developed a number of edge computing processors, such as its Atom x6000 series.
These processors are designed to be used in edge computing devices, such as
routers, gateways, and smart sensors.