UNIT III
E-GOVERNANCE
INTRODUCTION TO E-COMMERCE
The term e-commerce was coined back in the 1960s, with the rise of
electronic commerce – the buying and selling of goods through the transmission
of data – which was made possible by the introduction of electronic data
interchange (EDI). Fast forward fifty years and e-commerce has changed the way
in which society sells goods and services.
E-commerce has become one of the most popular methods of making
money online and an attractive opportunity for investors. For those interested
in buying an e-commerce business, this article serves to provide an
introduction to e-commerce, covering the reasons for its popularity, the main
distribution models and a comparison of the major e-commerce platforms
available.
‘E-commerce’ and ‘online shopping’ are often used interchangeably, but at its
core e-commerce is much broader than this – it embodies a concept for doing
business online, incorporating a multitude of different services, e.g. making
online payments, booking flights, etc.
E-Commerce refers to the paperless exchange of business
information using electronic data interchange, electronic mail, electronic
bulletin boards, electronic funds transfer, the World Wide Web, and other
network-based technologies.
E-Commerce is the business environment in which information for
the buying, selling and transportation of goods and services moves
electronically. E-Commerce includes any technology that enables a company to do
business electronically.
E-Commerce is the application of communication and information
sharing technologies among trading partners in pursuit of business objectives.
E-Commerce is associated with the buying and selling of information, products
and services via computer networks. It is a new way of conducting, managing and
executing business transactions using computer and telecommunication networks.
Some of the direct benefits of Electronic Commerce are:
Improved Productivity
Cost Savings
Streamlined Business Processes
Better Customer Service
Opportunities for New Businesses
1) Improved Productivity:
Using electronic commerce, the time required to create,
transfer, and process a business transaction between trading partners is
significantly reduced. Furthermore, human errors and other problems like
duplication of records are largely eliminated through the reduction of data
entry and re-entry in the process. This improvement in speed and accuracy, plus
easier access to documents and information, results in increased productivity.
2) Cost Savings:
Based on the experience of a wide variety of early adopters of
electronic commerce, Forrester Research has estimated that doing business on
the Internet can result in cost savings of about 5% to 10% of sales. These cost
savings stem from efficient communication, quicker turnaround time and closer
access to markets.
3) Streamlined Business Processes:
Cost savings are amplified when businesses go a step further and
adapt their internal processes and back-end legacy systems to take advantage of
electronic commerce. Inventories can be shaved if businesses use the Internet
to share such information as promotional plans, point-of-sale data, and sales
forecasts. Business processes can also be made more efficient with automation.
4) Better Customer Service:
With electronic commerce, there is better and more efficient
communication with customers. In addition, customers can also enjoy the
convenience of shopping at any hour, anywhere in the world.
5) Opportunities for New Businesses:
Businesses over the Internet have a global customer reach. There
are endless possibilities for businesses to exploit and expand their customer
base.
E-COMMERCE FRAMEWORK:
The term e-commerce framework refers to software frameworks
for e-commerce applications. They offer an environment for building e-commerce
applications quickly.
E-Commerce frameworks are flexible enough to adapt to your
specific requirements. As a result, they are suitable for building virtually
all kinds of online shops and e-commerce-related (web) applications, as the
Aimeos e-commerce framework does.
An E-commerce framework must provide
Common business services, for facilitating the buying and selling
process.
Messaging and information distribution, as a means of sending and
retrieving information.
Multimedia content and network publishing, for creating a product
and a means to communicate about it.
The Information Superhighway – the very foundation – for providing
the highway system along which all e-commerce must travel.
The two pillars supporting all e-commerce applications and
infrastructure are just as indispensable:
Public policy, to govern such issues as universal access, privacy,
and information pricing.
Technical standards, to dictate the nature of information
publishing, user interfaces, and transport in the interest of compatibility
across the entire network.
Examples of E-commerce frameworks are
Aimeos (Laravel, Symfony, TYPO3, SlimPHP, Flow)
Spryker (Symfony only)
Sylius (Symfony only)
E-Commerce Applications
Supply chain management
Video on demand
Remote Banking
Procurement and purchasing
Online marketing and advertising
Home shopping
E-commerce frameworks provide an overall structure for e-commerce related
applications. Furthermore, they implement the general program flow, e.g. how
the checkout process works. Contrary to monolithic shop systems, the existing
program flow can not only be extended but completely changed according to your
needs, as the sketch below illustrates.
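To make this concrete, here is a minimal, hypothetical sketch (not tied to Aimeos, Spryker, or Sylius) of a framework that models its checkout flow as a sequence of replaceable steps; all class and step names are invented for illustration:

    # Hypothetical sketch: a framework's checkout flow as replaceable steps.
    class CheckoutStep:
        def run(self, order: dict) -> dict:
            raise NotImplementedError

    class CollectAddress(CheckoutStep):
        def run(self, order):
            order["address"] = "221B Baker St"   # gather shipping details
            return order

    class TakePayment(CheckoutStep):
        def run(self, order):
            order["paid"] = True                 # charge the customer
            return order

    class Checkout:
        """The framework ships a default step sequence; an application can
        reorder, remove, or substitute steps to change the whole flow."""
        def __init__(self, steps):
            self.steps = list(steps)

        def process(self, order):
            for step in self.steps:
                order = step.run(order)
            return order

    # The default flow...
    default_flow = Checkout([CollectAddress(), TakePayment()])
    print(default_flow.process({"item": 42}))
    # ...and a completely rearranged one, with no framework code modified.
    custom_flow = Checkout([TakePayment(), CollectAddress()])

This is what distinguishes a framework from a monolithic shop system: the flow itself is data that the application can replace, not a fixed procedure that can only be extended at the edges.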
ANATOMY OF E-COMMERCE APPLICATIONS
Multimedia Content for E-Commerce Applications
Multimedia Storage Servers & E-Commerce Applications
Client-Server Architecture in Electronic Commerce
Internal Processes of Multimedia Servers
Video Servers & E-Commerce
Information Delivery/Transport & E-Commerce Applications
Consumer Access Devices
Multimedia Content for E-Commerce Applications:
•Multimedia content can be considered both fuel and traffic for
electronic commerce applications.
•The technical definition of multimedia is the use of digital data
in more than one format, such as the combination of text, audio, video, images,
graphics, numerical data, holograms, and animations in a computer
file/document.
•Multimedia is associated with hardware components in different
networks.
•Access to multimedia content depends on the hardware
capabilities of the customer.
Multimedia Storage Servers & E-Commerce Applications:
•E-Commerce requires robust servers to store and distribute large
amounts of digital content to consumers.
•These multimedia storage servers are large information warehouses
capable of handling various content, ranging from books, newspapers, and
advertisement catalogs to movies, games, & X-ray images.
•These servers, deriving their name because they serve information
upon request, must handle large-scale distribution, guarantee security, &
ensure complete reliability.
Client-Server Architecture in Electronic Commerce
All e-commerce applications follow the client-server model.
Clients are devices plus software that request information from
servers.
Unlike mainframe computing, which was built around centralized hosts and
“dumb” terminals, the client-server model allows the client to interact with
the server through a request-reply sequence governed by a paradigm known as
message passing, as sketched below.
The server manages application tasks, storage & security &
provides scalability – the ability to add more clients and client devices
(from Personal Digital Assistants to PCs).
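The request-reply pattern is easy to see in code. Below is a minimal sketch using Python's standard socket library; the port number and message contents are invented for the demo:

    # Minimal request-reply (message passing) between a client and a server.
    import socket, threading

    HOST, PORT = "127.0.0.1", 9090            # hypothetical demo address

    srv = socket.create_server((HOST, PORT))  # bind before the client connects

    def serve_one():
        conn, _ = srv.accept()                # wait for one client
        with conn:
            request = conn.recv(1024)         # the request message
            conn.sendall(b"price=19.99")      # the reply message

    threading.Thread(target=serve_one, daemon=True).start()

    # Client side: send a request, then block until the reply arrives.
    with socket.create_connection((HOST, PORT)) as client:
        client.sendall(b"GET price item=42")
        print(client.recv(1024).decode())     # -> price=19.99
    srv.close()

The same request-reply exchange underlies every e-commerce interaction, whether the transport is a raw socket, as here, or HTTP.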
Internal Processes of Multimedia Servers
The internal processes involved in the storage, retrieval &
management of multimedia data objects are integral to e-commerce applications.
A multimedia server is a hardware & software combination that
converts raw data into usable information & then dishes it out.
It captures, processes, manages, & delivers text, images,
audio & video.
It must do all of this while handling thousands of simultaneous users.
Server hardware options include high-end symmetric multiprocessors, clustered
architectures, and massively parallel systems.
Video Servers & E-Commerce:
The electronic commerce applications related to digital video will
include
Telecommuting and video conferencing.
Geographical information systems that require storage &
navigation over maps.
Corporate multimedia servers.
Postproduction studios.
Shopping kiosks.
Consumer applications will include video-on-demand.
As the figure for video-on-demand shows, video servers are the link
between the content providers (media) & the transport providers (cable
operators).
Information Delivery/Transport & E-Commerce Applications
Information Transport Providers | Information Delivery Methods
Telecommunication companies | Long-distance telephone lines; local telephone lines
Cable television companies | Cable TV coaxial, fiber-optic & satellite lines
Computer-based online servers | Internet; commercial online service providers
Wireless communications | Cellular & radio networks; paging systems
Transport providers are
principally telecommunications, cable, & wireless industries.
Consumer Access Devices
Information Consumers | Access Devices
Computers with audio & video capabilities | Personal/desktop computing; mobile computing
Telephonic devices | Videophone
Consumer electronics | Television + set-top box; game systems
Personal Digital Assistants (PDAs) | Pen-based computing; voice-driven computing
NSFNET: ARCHITECTURE AND COMPONENTS:
In the mid-1980s, the National Science Foundation (NSF) created five
supercomputer centers for a wide range of complex scientific explorations.
Until then, supercomputers had been limited to military researchers and others
who could afford to buy them.
NSF wanted to make supercomputing resources widely available for
academic research, the logic being that knowledge, databases, software, and
results needed to be shared. NSF initially tried to use the ARPANET, but this
strategy failed because of military bureaucracy and other staffing problems.
So NSF decided to build its own network, based on the ARPANET's IP technology.
The NSFNET backbone initially connected the five supercomputing
centers with 56-Kbps leased telephone lines. This was considered fast in 1985
but is far too slow by modern standards.
Since not every university could be connected directly to a
center, the need for an access structure was recognized: each campus joined
a regional network that was connected to the closest center. With this
architecture, any computer could communicate with any other by routing the
traffic up through its regional network to the backbone, where the process was
reversed to reach the destination. This can be depicted in the three-level
hierarchical model shown in Figure 1:
This abstraction is not completely accurate because it ignores
commercial network providers, international networks, and interconnections that
bypass the strict hierarchy.
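The climb-up, climb-down routing that the hierarchy implies can be sketched in a few lines; the network and campus names below are invented for illustration:

    # Sketch of three-tier NSFNET routing: backbone -> regionals -> campuses.
    HIERARCHY = {
        "backbone": ["region-east", "region-west"],
        "region-east": ["campus-A", "campus-B"],
        "region-west": ["campus-C"],
    }

    # Invert the tree so every node knows its parent network.
    PARENT = {child: parent for parent, children in HIERARCHY.items()
              for child in children}

    def route(src, dst):
        """Climb from src toward the backbone, then descend to dst."""
        up = [src]
        while up[-1] in PARENT:
            up.append(PARENT[up[-1]])          # climb toward the backbone
        down = [dst]
        while down[-1] not in up:
            down.append(PARENT[down[-1]])      # climb from dst until paths meet
        meet = down[-1]
        return up[:up.index(meet) + 1] + down[-2::-1]

    print(route("campus-A", "campus-B"))  # stays inside one regional network
    print(route("campus-A", "campus-C"))  # must cross the national backbone

Note that traffic climbs only as high as it must: two campuses on the same regional network never touch the backbone.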
Water distribution systems may be a useful analogy for understanding
the technology and economics of the NSFNET program.
We can think of the data circuits as pipes that carry data rather
than water.
The cost to an institution was generally a function of the size of
the data pipe entering the campus.
The campuses installed the plumbing and appliances, such as computers,
workstations, and routers, and treated the service cost as an infrastructure
cost, much like classrooms, libraries, and water fountains.
There was no extra charge for data use.
The mid-level networks acted like cooperatives that distributed
data from the national backbone to the campuses. They leased data pipes from
the telephone companies, and added services and management. So each member
could access the pipe and either consume or send data.
Some funding was also provided by the federal government.
This model was a huge success, but it became a victim of its own
success and was no longer effective. One main reason was that the network's
traffic increased until, eventually, the computers controlling the network and
the telephone lines connecting them became saturated. The network was upgraded
several times over the last decade to accommodate the increasing demand.
The NSFNET Backbone
The NSFNET backbone service was the largest single government
investment in the NSF-funded program. This backbone is important because almost
all network users throughout the world pass information to or from member
institutions interconnected to the U.S. NSFNET.
The current NSFNET backbone service dates from 1986, when the
network consisted of a small number of 56-Kbps links connecting six nationally
funded supercomputer centers. In 1987, NSF issued a competitive solicitation
for provision of a new, still faster network service.
In 1988, the old network was replaced with faster telephone lines,
called T-1 lines, with a capacity of 1.544 Mbps compared to the earlier 56
Kbps, along with faster computers, called routers, to control the traffic.
By the end of 1991, all NSFNET backbone sites
were connected to the new ANS-provided T-3 backbone with 45-Mbps capacity.
The network grew from an initial 170 networks in July 1988 to over 38,000, and
traffic grew from an initial 195 million packets to over 15 terabytes.
Discussions of electronic commerce were driven by an economic factor: the cost
to the NSF of transporting information across the network kept decreasing.
It fell from approximately $10 per megabyte in 1987 to less than
$1.00 in 1989, and by the end of 1993 the cost was 13 cents. These cost
reductions occurred gradually over a six-year period and were due to new,
faster, and more efficient hardware and software technologies.
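A quick back-of-the-envelope check, using only the figures quoted above, shows how steep both trends were:

    # Average yearly decline in NSFNET transport cost, 1987-1993 ($/MB).
    start, end, years = 10.00, 0.13, 6
    annual_rate = (end / start) ** (1 / years) - 1
    print(f"Average change per year: {annual_rate:.1%}")   # about -51.5%

    # Time to move one megabyte (8 million bits) over each backbone link.
    links_bps = {"56 Kbps (1986)": 56e3, "T-1 (1988)": 1.544e6, "T-3 (1991)": 45e6}
    for name, bps in links_bps.items():
        print(f"{name}: 1 MB in {8e6 / bps:.1f} s")        # ~142.9, ~5.2, ~0.2

In other words, transport cost roughly halved every year, while each backbone upgrade multiplied capacity by well over an order of magnitude.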
Mid-Level Regional Networks
Mid-level networks, often referred to as regional
networks, are one element of the three-tier NSFNET architecture.
They provide a bridge between local organizations, such as
campuses and libraries, and the federally funded NSFNET backbone service.
The coverage of mid-level regional networks varies from substate to
statewide to multistate.
State and Campus Networks
State and campus networks link into regional networks.
The mandate for state networks is to provide local connectivity
and access to wider area services for state governments, K-12 schools, higher
education, and research institutions.
Campus networks include university and college campuses, research
laboratories, private companies, and educational sites such as K-12 school
districts.
These are the most important components of the network hierarchy,
as the investment in these infrastructures far exceeds that of the government's
investments in the national and regional networks.
NATIONAL RESEARCH AND EDUCATION NETWORK
The NSFNET has evolved into the National Research and Education
Network (NREN). The NREN is a five-year project approved by Congress as part of
the High Performance Computing and Communications Act in fall 1991. NREN
represents the first phase of the HPCC project. The intent is to create a
next-generation Internet to interconnect the nation’s education and research
communities at data rates of more than one gigabit (one billion bits) per
second, thereby facilitating enhanced access to information resources and
computational capabilities.
Development and deployment of the NREN is planned to occur in three
phases. The first phase, begun in 1988, involved upgrading all telecommunication
links within the NSFNET backbone to 1.544 Mbps (T-1). This upgrade has been
completed for most agencies. In phase two, which began in 1991, the NSFNET
backbone was upgraded to 45 Mbps (T-3). The second phase also provides upgraded
services for 200 to 300 research facilities directly linked to this backbone.
The third phase, which will result in a phased implementation of a
gigabit-speed network operating at roughly 20-50 times T-3 speeds, is expected
to begin during the mid-1990s if the necessary technology and funding are
available.
NREN activities can be broadly split into two classifications:
Establishment and deployment of a new network architecture for a
very-high-speed backbone network service (vBNS)
Research to yield insights into the design and development of
gigabit network technology.
GLOBALIZATION OF THE ACADEMIC INTERNET
By the late 1980s, the Internet had spread globally, including
Canada, Australia, Europe, South Africa, South America, Asia, and Japan. Today
the global network environment reaches over 140 countries. Asian countries see
the Internet as a way of expanding business and trade. Eastern European
countries, longing for Western scientific ties, have wanted to participate,
and development there is progressing rapidly. Other countries see the Internet
as a way to raise their education and technology levels.
At present, the Internet’s international expansion is hampered by
the lack of good supporting infrastructure, namely, a decent telephone system.
International Computer Networks:
In 1973, the United Kingdom and Norway were connected to the ARPANET.
National network projects included JANET (Joint Academic Network) in the
United Kingdom, JUNET in Japan, DFN in Germany, UNINETT in Norway, and SDN in
Korea.
In the 1980s, CSNET, BITNET (Because It’s Time
Network), and UUCP (Unix-to-Unix Copy) all developed international links.
In 1984, CSNET was operating e-mail gateways between USA, Canada,
Korea, Israel, Japan, France, Germany, Australia and Scandinavia.
NSFNET and the European networks were connected by two high-speed
circuits linking the NSFNET at New York to INRIA in France.
In 1989, RIPE (Réseaux IP Européens) began coordinating the
Internet operation in Europe.
In the early 1990s, other international links to the NSFNET were
established. The connection between California’s regional network CERFnet and
UFRJ was intended to provide Internet access to a regional network located
within the state of Rio de Janeiro.
In November 1991, a 64-Kbps satellite link to CERFnet via a Mexican
satellite brought Mexico’s connection to the NSFNET online.
China’s CNPAC (China National Public Data Network) was designed
to carry data at speeds varying between 1.2 and 9.6 Kbps.
INTERNET GOVERNANCE: THE INTERNET SOCIETY
No one body controls the Internet. In effect, the system itself
polices such things: if any organization strays from the collective standards,
it loses the benefits of global connectivity which was the whole point of
becoming part of the Internet. Groups do exist that carry out central
management functions for the Internet, such as the InterNIC (www.internic.net),
which, among other things, registers companies that are connected to the
Internet, and the Internet Society (www.isoc.org). The Internet Society has
various engineering committees that help make technical recommendations for the
future development of the Internet. But none has the power to force a
particular direction or action on the Internet community.
The ultimate authority for the technical direction of the Internet
rests with the Internet Society (ISOC). This professional society is concerned
with the growth and evolution of the worldwide Internet. It is a voluntary
organization whose goal is to promote global information exchange. The four
groups in the structure are the ISOC and its Board of Trustees, the Internet
Architecture Board (IAB), the Internet Engineering Steering Group (IESG), and
the Internet Engineering Task Force (IETF) itself.
ISOC appoints a council, the IAB, which has responsibility for the
technical management and direction of the Internet. The IAB is responsible for
overall architectural considerations in the Internet. It also serves to
adjudicate disputes in the standards process and is responsible for setting
the technical direction, establishing standards, and resolving problems in the
Internet. The IAB also keeps track of various network addresses: each host
computer has a unique 32-bit address called an IP address; no two
computers in the world can have the same address.
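The “32-bit” part is easy to demonstrate: the familiar dotted notation is just four bytes packed into a single 32-bit integer. A small sketch with Python’s standard ipaddress module (the address itself is an arbitrary example):

    # A dotted-quad IP address is one 32-bit number in disguise.
    import ipaddress

    addr = ipaddress.IPv4Address("192.0.2.17")
    as_int = int(addr)                      # the underlying 32-bit value
    print(as_int)                           # 3221226001
    print(f"{as_int:032b}")                 # all 32 bits, one octet per byte
    print(ipaddress.IPv4Address(as_int))    # back to dotted form: 192.0.2.17

Because the address space is a fixed 32-bit number line, uniqueness has to be centrally coordinated, which is exactly the bookkeeping role described above.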
The IAB is supported by the Internet Engineering Task Force (IETF), the
protocol engineering and development arm of the Internet. The IETF is a large,
open, international community of network designers, operators, vendors, and
researchers concerned with the evolution of the Internet architecture and the
smooth operation of the Internet.
The internal management of the IETF is handled by the area directors.
Together with the chair of the IETF, they form the Internet Engineering
Steering Group (IESG). The operational management of the Internet standards
process is handled by the IESG under the auspices of the Internet Society.
AN OVERVIEW OF INTERNET APPLICATIONS
To understand why the Internet is being commercialized, we need to
understand what Internet applications people are interested in and are actively
seeking. The Internet provides a broad range of services to address a variety
of user needs:
Individual-to-group communication services: group conferencing and
tele-meeting services with interactive multimedia; negotiation and decision
support systems; mailing lists and list servers – for research collaboration
and distance education across institutional, state, and national boundaries.
Information transfer and delivery services: text-based e-mail,
multimedia e-mail, e-mail/fax and e-mail/EDI interfaces; newsgroups, bulletin
boards, and directories; digital audio and video communication.
Information databases: access to citation and
full-text databases and “virtual” libraries containing both text and multimedia
information. These databases are accessible using Internet tools like Gopher,
the World Wide Web, file transfer, remote log-in, resource discovery services,
and news-gathering agents.
Information processing services: remote access to a variety of
software programs, including operations research (OR) tools and statistics,
simulation, and visualization tools.
Resource-sharing services: access to printers, fax machines, and
other processing services that enable the utilization of spare capacity on
underutilized machines.