P2P2003 has scheduled two keynote speakers (Colin Upstill and Bill Yeager) and two invited speakers (Martin King and Jakka Sairamesh).
Affiliation: IT Innovation Centre, Southampton, UK
Grid Security: Lessons for Peer-to-Peer Systems
Dr Colin Upstill has over 25 years' experience at the leading edge of IT. Until the mid-1980s, he undertook wide-ranging research in computational
physics and applied mathematics, initially at the University of Bristol
and latterly at the University of Cambridge, where he was IT Lecturer in
the Department of Applied Mathematics and Theoretical Physics, and a
Fellow of Emmanuel College. He then moved to industry, and from 1987 to
1991 was Chief Engineer at Plessey Research and Technology, Roke Manor,
one of the UK's foremost electronic systems and software consultancies. He
is now Managing Director of the IT Innovation Centre.
The vision of the Grid is to provide a computational infrastructure supporting flexible, secure, co-ordinated resource sharing among dynamic collections of individuals, institutions, and resources. Interest in the Grid has increased as major science programmes look to Grid technology to provide for their computing needs. This has led to substantial investment in the Grid by vendors and governments, notably through the UK e-Science programme and similar programmes in other nations, and more recently at European level. As a result, far more people are joining the effort to develop Grid infrastructure and applications.
The Grid by its nature involves access to computer systems and data outside one's own company or institution. Security is therefore a major element in any Grid infrastructure, as it is necessary to ensure that only authorised access is permitted. However, early developments of the Grid were strongly motivated by the performance benefits of sharing resources, and Grid security models were designed not to interfere with this. We show by comparison with mainstream e-Commerce experience that early Grid security models exhibit several weaknesses.
The early development of the Grid also largely failed to take account of operational realities such as network administrator responsibilities and network devices such as firewalls. Early Grid systems were simply not operable outside academic institutions and closed research networks, and we contend that the most common strategy for making them work "in the real world" represents a short-term fix that is likely to produce conflict between users and application developers on the one hand, and those responsible for network administration and security on the other. We believe that the peer-to-peer community is also likely to face similar conflicts between its decentralised management approach and the day-to-day concerns of those entrusted to maintain our security.
IT Innovation is playing a leading role in the UK E-Science Programme and the exploitation of Grids for industrial and commercial purposes in the European Framework programmes. We have found it necessary to propose and begin development of radical solutions to some of these problems, including "proxy-free" delegation models and semantically-aware firewalls.
There are numerous other problems with the operational security of Grid systems, such as the scalability and appropriateness of Grid authorisation management systems. The Grid community is seeking to address these through a range of developments to support the virtual organisation as depicted by Foster et al. Most of these challenges will be familiar to the peer-to-peer community, and experience from the Grid suggests there are no easy solutions.
Our main concern today is that even such basic requirements as proxy-free delegation and firewall-friendly operation appear to demand quite radical technical solutions. The difficulties experienced by the early Grid pioneers are not technical in nature, but stem from human, organisational and in some cases legal aspects of operating a decentralised infrastructure. The Grid Security Arms Race is a direct consequence of adopting solutions that don't take these non-technical and operational issues fully into account.
Technical people can always find technical solutions to their current problems, but unless the operational needs of organisations are taken into account, such solutions will not provide us with a sustainable future in either Grid or peer-to-peer infrastructure.
Affiliation: Sun Microsystems
Enterprise Strength Security on a JXTA P2P Network
Bill Yeager has a career in software engineering and computer science spanning nearly 40 years, the last 28 of them at Stanford University (19 years) and Sun Microsystems (9 years). Among his many accomplishments at Stanford, he is best known for inventing the multiple-protocol Ethernet router in 1982, which was licensed by Cisco Systems in 1987; co-inventing the Intermediate Mail Access Protocol that later became IMAP; and writing a serial-line ftp program, ttyftp, that was later rewritten at Columbia University and renamed Kermit.
At Sun, Bill invented, architected and, with a small team, developed the Sun Internet Message Server (SIMS) software, for which he has filed four patents and received three; invented and programmed the iPlanet Wireless Server; led Sun's WAP Forum team; architected a security model for Java mobile phones that is incorporated in MIDP 2.0; and was the CTO of Sun's JXTA team, where he invented and wrote the code for the JXTA security model. Bill has filed 26 patents on his JXTA work. Bill is now at Sun Labs. In his current project, "The Virsona," personal privacy is fundamental, Virsonas are JXTA peers, and his initial objective is to tweak the JXTA security model to support the enterprise.
Finally, Bill is the co-chair of the recently formed Internet Research Task Force Research Group on P2P.
When one begins to think about security and P2P networks, and in particular ad-hoc P2P networks with no real centralization, one must take a leap from the accepted, in-place, on-the-Internet security practices into the unknown. There are potentially billions of peer nodes, some related and some not, all possibly vulnerable to attack in a multitude of ways: impersonation attacks and thus identity theft by unauthorized or falsely authorized parties; invasion of privacy and all that that carries with it; loss of data integrity; and repudiation of previous transactions ("Hey, no way, I did not say that!"). We imagine the equivalent of anti-matter, a complete negation of the fundamental principles of security: the anti-secure net. Those among us with a strong interest in the secure net, and in making P2P not only an accepted but a preferred way both of doing business in the enterprise and of protecting the personal privacy of the innocent users of P2P software, require a toolbox with sockets, and a socket wrench capable of applying the torque appropriate to each scenario we wish to secure.
It is easy enough for each peer node to be its own certificate
authority, create its own root and service certificates, distribute the
root certificate out-of-band or in some cases in-band, different
sockets for different scenarios, and then use transport layer security
to ensure two-way authentication and privacy. Then again, one cannot
help but think about Philip Zimmermann, PGP, and "webs-of-trust." This is
surely another socket that can be used by small communities of peers to
assure that the public keys that they distribute can be trusted with
some degree of certainty based on the reputation of the signers.
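The web-of-trust idea can be illustrated with a small sketch: a key is accepted once the combined reputation of the peers who signed it crosses a threshold. This is a toy model with hypothetical names and a hash in place of real signatures, not PGP's actual algorithm.

```python
import hashlib

# Toy web-of-trust: a peer accepts a public key when the combined
# reputation of its signers meets a threshold. Illustrative only --
# real PGP-style trust uses cryptographic signatures, not bare names.

def fingerprint(pubkey: str) -> str:
    """Short hash identifying a public key, like a PGP fingerprint."""
    return hashlib.sha256(pubkey.encode()).hexdigest()[:16]

class TrustStore:
    def __init__(self, threshold: float):
        self.threshold = threshold
        self.reputation = {}   # signer name -> reputation score
        self.signatures = {}   # key fingerprint -> set of signer names

    def set_reputation(self, signer: str, score: float):
        self.reputation[signer] = score

    def add_signature(self, pubkey: str, signer: str):
        self.signatures.setdefault(fingerprint(pubkey), set()).add(signer)

    def is_trusted(self, pubkey: str) -> bool:
        signers = self.signatures.get(fingerprint(pubkey), set())
        total = sum(self.reputation.get(s, 0.0) for s in signers)
        return total >= self.threshold

store = TrustStore(threshold=1.0)
store.set_reputation("alice", 0.6)
store.set_reputation("bob", 0.5)
store.add_signature("carol-public-key", "alice")
store.add_signature("carol-public-key", "bob")
print(store.is_trusted("carol-public-key"))    # 0.6 + 0.5 >= 1.0 -> True
print(store.is_trusted("mallory-public-key"))  # no signatures -> False
```

The "degree of certainty" in the text corresponds here to the threshold: a small community can tune it to demand one highly reputed signer or several modestly reputed ones.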
If we imagine small groups of peers on a purely ad-hoc P2P network, for
example, a family, then either mom or dad might be the certificate
authority, place their root certificate on each family member's system
by infra-red, eyeball-to-eyeball communication, and yes, if a
certificate is signed by the CA, you trust it or else. One more socket
for our toolbox.
Finally, without actually using a recognized CA, one can apply even
more torque to tighten the security on a P2P network. Select one or
more well protected and trusted systems, and give to them
certificate-granting authority. These systems are unlike standard CAs
in the sense that they are peers in the P2P network, albeit special
peers. Each peer using these CAs boots with the appropriate root
certificates, and acquires its own certificate from one of the CAs
using a Certificate Signing Request. Furthermore, to acquire a
certificate the peer must be authorized, perhaps by using an LDAP
directory with a recognized protected password. Here, the CA can also
use a secure connection to a corporate LDAP service to authorize the request.
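The peer-CA flow just described can be sketched in a few lines: a designated peer holds the signing key, checks the requester against a directory (standing in for the corporate LDAP lookup), and issues a signed certificate. The names, the password check, and the HMAC "signature" are all illustrative assumptions, not the JXTA implementation.

```python
import hmac
import hashlib

# Sketch of the peer-CA flow: a special peer authorizes a Certificate
# Signing Request against a directory, then signs the certificate.
# HMAC stands in for real public-key signing to keep the sketch short.

class PeerCA:
    def __init__(self, ca_secret: bytes, directory: dict):
        self.ca_secret = ca_secret  # CA signing key (HMAC stand-in)
        self.directory = directory  # peer name -> password hash (the "LDAP")

    def handle_csr(self, peer_name: str, password: str, pubkey: str) -> dict:
        """Issue a certificate only if the requesting peer authenticates."""
        expected = self.directory.get(peer_name)
        offered = hashlib.sha256(password.encode()).hexdigest()
        if expected != offered:
            raise PermissionError("peer not authorized")
        payload = f"{peer_name}:{pubkey}".encode()
        sig = hmac.new(self.ca_secret, payload, hashlib.sha256).hexdigest()
        return {"subject": peer_name, "pubkey": pubkey, "sig": sig}

    def verify(self, cert: dict) -> bool:
        payload = f"{cert['subject']}:{cert['pubkey']}".encode()
        sig = hmac.new(self.ca_secret, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(sig, cert["sig"])

directory = {"peer-1": hashlib.sha256(b"s3cret").hexdigest()}
ca = PeerCA(b"ca-private-key", directory)
cert = ca.handle_csr("peer-1", "s3cret", "peer-1-public-key")
print(ca.verify(cert))  # True
```

The point of the sketch is the ordering: authorization against the directory happens before any certificate is granted, which is what distinguishes these special peers from an open enrollment service.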
In the end, each of the above scenarios, each socket in our mythical
toolbox, is not so mythical. This is how Project JXTA approaches
security, and what we will discuss in this keynote presentation.
Affiliation: Quick Com
Dynamic Distributed Overlay Networking of IP Multicast
Martin King is one of the leading authorities in Peer-to-Peer technologies and communications. He has extensive international experience in communications technology and the software and hardware industries. Educated in England in microelectronics and electronic engineering, his skills span technology development and implementation, and business development. He holds several patents in the fields of broadcasting, communications and power management circuit design, areas in which he has published numerous articles. He has held various executive management positions with technology companies, and continues to assist several high tech companies as a consultant and board member. Martin King is President and CEO of Quick Com, the Swiss-based company he founded in 1998, now the rapidly emerging leader in the field of serverless Peer-to-Peer business communications software. Highly reputed for his industry vision in the fields of Peer-to-Peer Computing, telecommunications and broadband media distribution, Martin King is a regular speaker at international conferences.
Mastering overlay networking is one of the fundamental challenges in the distributed computing and peer-to-peer arena. While the principal applications of search and retrieval have not been intrinsically focused on the communication and transport of data between peers, many of the challenges that application developers have needed to overcome, such as network flooding and denial of service, relate directly to data traffic flow and management.
The Internet Protocol (IP) is used to create a common overlay network on top of many existing networks that use other transport protocols to suit their physical requirements. A technique known as tunneling, where the payload of an IP data packet carries another full IP packet including its header information, is an overlay networking method for creating a Virtual Private Network (VPN) on top of the existing Internet or an IP-based network.
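The encapsulation principle behind tunneling is easy to demonstrate: the entire inner packet, header and all, becomes the payload of an outer packet. The sketch below uses a made-up 10-byte header (source, destination, length) as a stand-in for a real IP header, purely to show the wrap-and-unwrap mechanics.

```python
import struct

# Toy IP-in-IP style tunneling: the whole inner packet (header +
# payload) travels as the payload of an outer packet between the
# tunnel gateways. The "!IIH" header is a stand-in for an IP header.

def make_packet(src: int, dst: int, payload: bytes) -> bytes:
    # toy header: 4-byte source addr, 4-byte dest addr, 2-byte length
    return struct.pack("!IIH", src, dst, len(payload)) + payload

def parse_packet(packet: bytes):
    src, dst, length = struct.unpack("!IIH", packet[:10])
    return src, dst, packet[10:10 + length]

# Inner packet is addressed between the tunnel's private endpoints.
inner = make_packet(src=0x0A000001, dst=0x0A000002, payload=b"hello peer")

# The near gateway wraps it whole inside an outer packet addressed
# between the public gateway addresses.
outer = make_packet(src=0xC0A80001, dst=0xC0A80002, payload=inner)

# The far gateway unwraps the outer packet and recovers the inner one
# intact, header included -- the essence of the VPN overlay.
_, _, recovered = parse_packet(outer)
src, dst, data = parse_packet(recovered)
print(data)  # b'hello peer'
```

Because the inner header survives the trip untouched, the private addressing scheme of the VPN is invisible to the public network carrying the outer packets.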
VPNs are typically edge-based, requiring static real IP addresses at each LAN gateway on the Internet side. Such solutions are cumbersome and inflexible, requiring considerable additional infrastructure and reconfiguration of network components. VPNs for the remote user usually involve connection into a hub as a single branch of a network star formation.
A number of protocols are used on the Internet today for different purposes: HTTP, HTTPS, SSL, TCP, and UDP, to name just a few. Tunnels can be created using these protocols to passively enable overlay networking in the IP environment. For example, the establishment of an HTTP tunnel can be used to bridge a firewall, whereas SSL can be applied where added security is required.
An approach will be presented for generically creating IP multicast overlay networks in a fully distributed environment. This brings several advantages:
a) Nodes can be dynamically meshed together, resulting in load balancing of the network traffic and the creation of a web on top of the web.
b) Multicast can now be communicated from any node to any multiple selection of neighboring nodes in an "Any to Many" fashion, rather than the traditional centralized approach of "One to Many". This is consistent with many of the distributed media-streaming solutions being discussed.
c) In a distributed environment, overlay networks can be constructed with no single point of failure. This is an important step towards a fully fault-tolerant distributed software infrastructure.
d) The application developer is equipped with a communications middleware that permits the establishment of a true VPN without the need to configure additional network infrastructure.
e) It is a turn-key solution for empowering the end-user to establish an overlay network without compromising the network administrator or service provider.
The approach realised is applicable to broadband, narrowband, mobile and static environments. Finally, some user case studies for applying this technology will be presented.
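The "Any to Many" idea can be sketched with a small flooding overlay: any node originates a message, it propagates across neighbour links, and duplicate suppression by message id keeps the flood from looping even when the mesh contains cycles (which is what removes the single point of failure). This is an illustrative model, not Quick Com's implementation.

```python
# Toy "Any to Many" multicast over a peer mesh: any node may
# originate; messages flood neighbour links; message-id dedup stops
# loops, so cyclic (fault-tolerant) topologies work correctly.

class Node:
    def __init__(self, name: str):
        self.name = name
        self.neighbors = []
        self.seen = set()     # message ids already handled
        self.delivered = []   # payloads handed to the application

    def link(self, other: "Node"):
        self.neighbors.append(other)
        other.neighbors.append(self)

    def multicast(self, msg_id: str, payload: str):
        # any node can be the source -- no central "One to Many" root
        self.receive(msg_id, payload, sender=None)

    def receive(self, msg_id: str, payload: str, sender):
        if msg_id in self.seen:
            return            # duplicate from another mesh path: drop
        self.seen.add(msg_id)
        self.delivered.append(payload)
        for n in self.neighbors:
            if n is not sender:
                n.receive(msg_id, payload, sender=self)

# A ring mesh: every node has two paths to every other node, so no
# single link or node failure partitions the overlay.
a, b, c, d = Node("a"), Node("b"), Node("c"), Node("d")
a.link(b); b.link(c); c.link(d); d.link(a)

a.multicast("m1", "stream-chunk-1")
print([n.delivered for n in (a, b, c, d)])  # each node delivers exactly once
```

The dedup set is what makes the cycle safe: the message reaches `d` via both `a` and `c`, but only the first copy is delivered and forwarded.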
Affiliation: IBM T. J. Watson Research
Role of P2P in complex disconnected networks: applications to Mobile e-Business and Enterprise Computing
Dr. Jakka Sairamesh is a project manager and senior research staff member
at IBM Watson Research, Hawthorne, New York. He obtained his M.S and M.
Phil. from Columbia University in 1992, and his Ph.D. from Columbia University
in 1995. Since then he has been working with IBM Research on e-business
platforms, auctions and trading systems, mobile e-business, information
economies, and decentralized computing systems. He has published numerous
papers on trading systems, e-business platforms, market-based control, and
information economies and cities. He was one of the architects from
Research for IBM's e-business products such as Websphere Marketplace
Edition and Websphere Business Edition, and recently helped drive the
vision, strategy and architecture for Business Portals for Value-chain
management and collaboration. He currently leads projects in the areas of
dynamic e-business solutions, early warning systems, large-scale mobile
and peer-to-peer computing, and dynamic resource management in Grids.
With the tremendous advances in hand-held computing and communication
capabilities, rapid proliferation of mobile computing environments, and
decreasing device costs, we are seeing a growth in mobile e-business in
various consumer and business markets. In the coming years we expect
hundreds of millions of users worldwide to depend on mobile devices and
computers for social and business activities. The trend is also picking up
in large enterprises where dependency on mobile e-business environments
has become routine for thousands of employees conducting daily business
activities. For such a large collection of users, new reliable methods of
search-find-obtain-transact (SFOT) have to be devised that are
decentralized and self-managing.
In this paper, we present the grand challenges involved in designing and
implementing reliable and transparent e-business applications in a
decentralized and autonomous fashion that can handle business-critical
applications in Enterprises and virtual organizations. We present a novel
architecture and framework for end-to-end mobile e-business applications
based on concepts from P2P, dynamic socio-economic networks (overlays) and
resource allocation from Grid computing. The architecture takes into
consideration disconnection, application context, synchronization,
transactions and self-managing failure recovery modes to provide mobile
users with seamless and transparent access to data and resources when
needed. In addition, the mechanisms designed consider decentralized
resource management for providing data availability and fast access. In
our architecture, we consider a business process design based on
state-machines and failure event management to handle disconnection,
resource limitations and failures in open dynamic networks. We consider
simple cost-based P2P networks as the underlying framework for connecting
millions of mobile devices to thousands of computational servers. A
first version of this system was designed, implemented and deployed on a
network of mobile clients using open Web services standards.
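The state-machine approach to disconnection handling described in the abstract can be sketched as an explicit transition table: a disconnection is just another event that parks the transaction in a recoverable state instead of failing it. The states, events, and transitions here are hypothetical, chosen only to illustrate the design, not IBM's actual architecture.

```python
# Toy business-process state machine: a disconnection event suspends
# the transaction, and a reconnect resumes it -- self-managing failure
# recovery rather than outright failure. States/events are illustrative.

TRANSITIONS = {
    ("pending",     "submit"):     "in_progress",
    ("in_progress", "disconnect"): "suspended",    # failure event
    ("suspended",   "reconnect"):  "in_progress",  # recovery path
    ("in_progress", "commit"):     "completed",
}

class Transaction:
    def __init__(self):
        self.state = "pending"

    def handle(self, event: str) -> str:
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"event {event!r} invalid in state {self.state!r}")
        self.state = TRANSITIONS[key]
        return self.state

# A mobile client submits, drops off the network, reconnects, commits.
tx = Transaction()
for event in ["submit", "disconnect", "reconnect", "commit"]:
    tx.handle(event)
print(tx.state)  # completed
```

Making the transition table explicit is what lets failure events be managed like any other input: every (state, event) pair either has a defined recovery path or is rejected outright.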