The OASIS Cover Pages: The Online Resource for Markup Language Technologies
Last modified: May 31, 2003
XML Articles and Papers: May 2003

XML General Articles and Papers: Surveys, Overviews, Presentations, Introductions, Announcements

Other collections with references to general and technical publications on XML:

May 2003

  • [May 31, 2003] "Introducing WS-Transaction, Part 1. The Basis of the WS-Transaction Protocol." By Dr. Jim Webber and Dr. Mark Little (Arjuna Technologies Limited). In Web Services Journal Volume 3, Issue 6 (June 2003), pages 28-33 (with 6 figures and source code). WSJ Feature Article. "In July 2002, BEA, IBM, and Microsoft released a trio of specifications designed to support business transactions over Web services. These specifications, BPEL4WS, WS-Transaction, and WS-Coordination, together form the bedrock for reliably choreographing Web services-based applications, providing business process management, transactional integrity, and generic coordination facilities respectively. In our previous article we introduced WS-Coordination, a generic coordination framework for Web services, and showed how the WS-Coordination protocol can be augmented to support coordination in arbitrary application domains. This article introduces the first publicly available WS-Coordination-based protocol -- Web Services Transaction -- and shows how WS-Transaction provides atomic transactional coordination for Web services... An important aspect of WS-Transaction that differentiates it from traditional transaction protocols is that a synchronous request/response model is not assumed. This model derives from the fact that WS-Transaction is layered upon the WS-Coordination protocol whose own communication patterns are asynchronous by default... WS-Coordination provides only context management -- it allows contexts to be created and activities to be registered with those contexts. WS-Transaction leverages the context management framework provided by WS-Coordination in two ways. First, it extends the WS-Coordination context to create a transaction context. 
Second, it augments the activation and registration services with a number of additional services (Completion, CompletionWithAck, PhaseZero, 2PC, OutcomeNotification, BusinessAgreement, and BusinessAgreementWithComplete) and two protocol message sets (one for each of the transaction models supported in WS-Transaction) to build a full-fledged transaction coordinator on top of the WS-Coordination protocol infrastructure... In common with other transaction protocols (like OTS and BTP), WS-Transaction supports the notion of the service and participant as distinct roles, making the distinction between a transaction-aware service and the participants that act on behalf of the service during a transaction: transactional services deal with business-level protocols, while the participants handle the underlying WS-Transaction protocols... No one specific protocol is likely to be sufficient, given the wide range of situations that Web service transactions are likely to be deployed within. Hence the WS-Transaction specification proposes two distinct models, each supporting the semantics of a particular kind of B2B interaction... As with WS-Coordination, the two WS-Transaction models are extensible, allowing implementations to tailor the protocols as they see fit, e.g., to suit their deployment environments..." Previous article: "Introducing WS-Coordination. A Big Step Toward a New Standard," by Jim Webber and Mark Little (Arjuna Technologies Limited). In Web Services Journal Volume 3, Issue 5 (May 24, 2003). [alt URL]
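The layering described above can be sketched in a few lines: WS-Transaction extends a WS-Coordination context by choosing a transaction-specific CoordinationType. A minimal Python sketch follows; the namespace URIs and element names are illustrative assumptions, not quotations from the specification.

```python
# Sketch: a WS-Coordination context extended into an atomic-transaction
# context, as the article describes. Namespaces are assumed placeholders.
import xml.etree.ElementTree as ET

WSCOOR = "http://example.org/ws/2002/08/wscoor"  # assumed namespace
WSTX = "http://example.org/ws/2002/08/wstx"      # assumed namespace

def make_tx_context(tx_id: str, registration_url: str) -> ET.Element:
    """Build a CoordinationContext whose CoordinationType marks it
    as an atomic (2PC) transaction rather than a generic activity."""
    ctx = ET.Element(f"{{{WSCOOR}}}CoordinationContext")
    ident = ET.SubElement(ctx, f"{{{WSCOOR}}}Identifier")
    ident.text = tx_id
    # The transaction model is selected by the CoordinationType value.
    ctype = ET.SubElement(ctx, f"{{{WSCOOR}}}CoordinationType")
    ctype.text = f"{WSTX}/2PC"
    # Participants register with the coordinator at this endpoint.
    reg = ET.SubElement(ctx, f"{{{WSCOOR}}}RegistrationService")
    reg.text = registration_url
    return ctx

ctx = make_tx_context("urn:tx:1234", "https://coordinator.example/register")
xml_text = ET.tostring(ctx, encoding="unicode")
print(xml_text)
```

The context would travel in the SOAP headers of every application message, so that each invoked service can register its participants with the coordinator.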

  • [May 30, 2003] "Reconstructing J2EE: Java Business Integration Meets the Enterprise Service Bus. Changing the Economics of Integration." By Dave Chappell (Sonic Software; Web Services Journal Technical Editor). In Web Services Journal Volume 3, Issue 6 (June 2003), pages 12-16 (with 5 figures). WSJ Feature Article. "The Java Business Integration (JBI) specification JSR 208 has set out to define a loosely coupled integration model that aligns with Web services-style distributed computing. The JBI expert group hopes to utilize existing J2EE integration components such as the Java Message Service (JMS), J2EE Connector Architecture (JCA), and the J2EE 1.4 Web services APIs, and add a host of new capabilities as well. The charter of the expert group promises to define an open-ended, pluggable architecture that will support both industry-standard integration components as well as proprietary elements through the use of standardized interfaces. Coincidentally, the ESB addresses similar goals. It is based on today's established standards, and has real implementations that have been shipping for at least a year. JBI can learn a great deal from what the ESB approach offers... An ESB is based on a distributed, federated network model. It takes on the responsibility of reliable communication within the network with a common security model, and provides the integration capabilities that allow applications to easily interoperate. Integration capabilities such as data transformation are themselves implemented as services and can be independently deployed anywhere within the network. The result is precise deployment of integration capabilities at specific locations, which can then be scaled independently as required. It also means that services can easily be upgraded, moved, or replaced without affecting applications. 
Intelligent routing based on content is accomplished through specialized services that apply XPath expressions to identify a document type and route it based on values within the document. Routing of messages is also accomplished using a message itinerary. Think of a message itinerary as a list of destinations that travels with the message as it moves from service to service, or application to application. The ESB evaluates the itinerary at each step along the way, without the requirement of a centralized rules engine. More complex orchestrations, including stateful, long-duration conversations, are also possible using orchestration facilities. An ESB architecture is well suited for supporting choreography languages such as BPSS or BPEL4WS... Web services have given newfound importance to service-oriented architectures and promise to drive down the cost of integration by providing a standards-based approach to interoperability between applications. The trouble is, what people really want is a new way of doing integration. Until now, we haven't really had a way to incorporate Web services into a meaningful architecture for integrating applications and services into a fabric that spans the extended enterprise in a large-scale fashion. With the advent of the enterprise service bus we have that architecture..." [alt URL]
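The content-based routing step described above boils down to evaluating XPath expressions against each message and picking a destination. A minimal sketch, with hypothetical element names and endpoints (ElementTree's limited XPath subset is enough for this illustration):

```python
# Content-based routing sketch: the first XPath that matches the
# incoming document selects the destination service.
import xml.etree.ElementTree as ET

ROUTES = [
    # (XPath relative to the root, destination endpoint) - hypothetical
    (".//PurchaseOrder", "orders-service"),
    (".//Invoice", "billing-service"),
]

def route(doc_text: str) -> str:
    root = ET.fromstring(doc_text)
    for xpath, destination in ROUTES:
        if root.find(xpath) is not None:
            return destination
    return "dead-letter"  # no route matched

print(route("<msg><Invoice total='100'/></msg>"))  # billing-service
print(route("<msg><Unknown/></msg>"))              # dead-letter
```

An itinerary-based router would differ only in reading the next destination from a header carried with the message instead of matching on content.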

  • [May 31, 2003] "Business Flows with BPEL4WS." By Doron Sherman (CTO, Collaxa). In Web Services Journal. May 2003. With 5 figures. "BPEL4WS is now moving rapidly into becoming the de facto standard for Web service orchestration, with most platform vendors following in IBM's and Microsoft's footsteps after the submission of the specification to OASIS. This increased momentum and visibility will drive a great need for educating developers on how to put BPEL to work. This article illustrates a payment flow example coded using BPEL, highlighting some of the main constructs of the language and demonstrating how the flow can be monitored and managed once it is deployed. The example can be extended in various ways to include more advanced language constructs. Such extensions will be illustrated in subsequent articles... The PayFlow business flow example illustrated in this article comprises a client initiating a request to a BPEL process and ending with the process calling back the client with the result of the payment request (receipt). The process includes use of <receive> and <invoke> activities for interacting with the outside world, which includes the client (request) and a partner (payment processor service). XML variables are used for holding messages exchanged between the process and the partners. To make this example interesting, the payment processor service is asynchronous and can take anywhere from several minutes to several days before the service calls back the process. Another interesting element demonstrated by this example is the handling of exceptions and managing of timeouts. These constructs are instrumental to enable a BPEL process to deliver reliable business flows..." See also from Collaxa: Fast facts on BPEL4WS 1.1, BPEL 101 Tutorial, and Sample BPEL Scenarios. General references in "Business Process Execution Language for Web Services (BPEL4WS)." [alt URL]
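The control flow the article describes, a <receive> that starts the process, an asynchronous <invoke> of the payment partner, and timeout/fault handling before the client callback, can be mimicked as a toy simulation in plain Python. This is not BPEL itself, and all names are illustrative:

```python
# Toy simulation of the PayFlow logic: receive a payment request, invoke
# an asynchronous payment processor, then either call the client back
# with a receipt or handle a timeout.
def pay_flow(request, payment_processor, timeout_seconds=60):
    # <receive>: the process instance starts when the client's request arrives.
    amount = request["amount"]
    try:
        # <invoke>: asynchronous partner call, modeled here as a callable
        # that raises TimeoutError if the callback never arrives in time.
        receipt = payment_processor(amount, timeout_seconds)
    except TimeoutError:
        # fault handler / onAlarm: report the timeout back to the client.
        return {"status": "timeout", "amount": amount}
    # Callback to the client with the result (receipt).
    return {"status": "paid", "receipt": receipt, "amount": amount}

# Usage with a stub processor that answers immediately:
result = pay_flow({"amount": 42}, lambda amt, t: f"receipt-{amt}")
print(result["status"], result["receipt"])  # paid receipt-42
```

In real BPEL the same structure is declarative XML, and the engine persists the process state between the invoke and the (possibly days-later) callback.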

  • [May 30, 2003] "The Differentiation of Web Services Security. How Can You Leverage Your Investment in Enterprise Security?" By Dave Stanton (Talking Blocks). In Web Services Journal Volume 3, Issue 6 (June 2003), pages 18-20. "Traditional applications are connection oriented, allowing many security details to be implemented at the connection level, and requiring a direct connection between the service provider and consumer. However, Web services are message oriented and lack the guarantee of a direct connection between service provider and consumer, so many traditional connection-oriented approaches to common security challenges are inappropriate or insufficient for a Web services security architecture. In addition, the introduction of Web services into an organization may be the first time that organization has ever allowed external users access to their business applications; the security practices and standards which are appropriate for inside use may be completely unacceptable when outside users are introduced... A Web service security architecture supports message confidentiality through: (1) Encryption of the message itself, and (2) Transportation-level confidentiality schemes. When a message is encrypted to ensure message confidentiality, any portion of it can be encrypted subject to the requirements of the encoding scheme used to include the encrypted data in the message. A Web service security architecture should support the XML-Encryption standard for encoding arbitrary encrypted data in an XML document, applied either through the WS-Security standard for encapsulating encrypted data in a SOAP message, or by arbitrarily replacing portions of the SOAP envelope (including the whole envelope as necessary) with an encrypted data element. When sending messages, a managed external service can generate WS-Security-encoded and encrypted message bodies, which ensures that the message itself is confidential. 
It may also replace the entire SOAP body with a single XML-Encryption-encoded element for communicating with external services that don't comply with the WS-Security standard. Either system, when combined with proper certificate management, will ensure complete confidentiality between message producer and message consumer... While many enterprises have made an investment in a public key infrastructure to complement their other security initiatives, for those that have not, the reliance on public key cryptography in a Web services architecture can be daunting. A Web service security architecture supports key management using (1) An internal key management facility; (2) An organization's existing XKMS service; (3) An organization's existing Java Key Store architecture. Because many organizations have existing public key infrastructures that provide them with an XKMS service deployed inside their organization, or a Java Key Store-based system that allows access to central registration and certificate authorities, a key management solution allows the use of either one of these types of enterprise key management integration. However, for organizations that don't have an existing public key infrastructure, but which want to use public key-based Web service security systems (such as XML Digital Signatures or Encryption), the organization should provide a key management facility internally that provides a centralized security realm based on certificate management, distributed private key management, and association of certificates and keys with user management facilities..." See general reference list in "Security, Privacy, and Personalization." [alt URL]
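The second option above, replacing the entire SOAP Body with a single encrypted element, is structurally simple XML surgery. The sketch below uses base64 as a stand-in for a real cipher so it stays self-contained; a real system would apply XML-Encryption with proper key and certificate management. The SOAP 1.1 and xmlenc namespace URIs are the standard ones; everything else is illustrative.

```python
# Structural sketch: swap the plaintext SOAP Body contents for one
# EncryptedData element. Base64 here is only a placeholder for a cipher.
import base64
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
XENC = "http://www.w3.org/2001/04/xmlenc#"

def encrypt_body(envelope_text: str) -> str:
    ET.register_namespace("soap", SOAP)
    ET.register_namespace("xenc", XENC)
    env = ET.fromstring(envelope_text)
    body = env.find(f"{{{SOAP}}}Body")
    payload = ET.tostring(body, encoding="utf-8")
    body.clear()  # drop the plaintext children
    enc = ET.SubElement(body, f"{{{XENC}}}EncryptedData")
    cipher = ET.SubElement(enc, f"{{{XENC}}}CipherData")
    value = ET.SubElement(cipher, f"{{{XENC}}}CipherValue")
    value.text = base64.b64encode(payload).decode("ascii")  # placeholder
    return ET.tostring(env, encoding="unicode")

plain = (f'<soap:Envelope xmlns:soap="{SOAP}">'
         f'<soap:Body><Order id="7"/></soap:Body></soap:Envelope>')
print(encrypt_body(plain))
```

A receiver that understands the scheme decrypts the CipherValue and restores the original Body before dispatching the message.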

  • [May 30, 2003] "Mappers Write Cookbook 1.0 for Web Services." By Susan M. Menke. In Government Computer News (May 30, 2003). "The Open GIS Consortium Inc. has put together a batch of not-yet time-tested recipes in Cookbook 1.0, the first in a planned series about Web map services. The Wayland, Mass., consortium's Web Map Service interface specification makes a browser overlay -- a maplike raster-graphic image -- of multiple data layers from one or more distributed geographic information systems. The recipes tell users how to insert a WMS client into various commercial and open-source products such as ESRI ArcExplorer and ArcIMS, FreeBSD, Intergraph GeoMedia WebMap, Java deegree, Microsoft Internet Information Server and University of Minnesota MapServer. A WMS client can be an HTML page returned by a WMS server or a browser plug-in using Java or ActiveX to connect to different servers. The client can specify which layers to display from a specified geographic area, the file formats and the degree of transparency..." See details in the news story "OpenGIS Consortium Publishes Web Map Server Cookbook." General references in "Geography Markup Language (GML)."
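A WMS client like those described above ultimately issues a GetMap request naming the layers, bounding box, and output format it wants. A sketch of building such a request URL (the server address is hypothetical; the parameter names follow the WMS 1.1.1 interface):

```python
# Build a WMS GetMap request URL for a given layer list and bounding box.
from urllib.parse import urlencode

def getmap_url(base, layers, bbox, width=600, height=400, fmt="image/png"):
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),              # which layers to display
        "STYLES": "",
        "SRS": "EPSG:4326",                      # lon/lat coordinates
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "FORMAT": fmt,                           # requested file format
        "TRANSPARENT": "TRUE",                   # overlay transparency
    }
    return base + "?" + urlencode(params)

url = getmap_url("https://maps.example/wms", ["roads", "rivers"],
                 (-180, -90, 180, 90))
print(url)
```

The server answers with a raster image of the requested layers, which the client overlays on the rest of the map.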

  • [May 30, 2003] "GSA Chooses Web, XML Access to Data." By Jason Miller. In Government Computer News Volume 22, Number 12 (May 26, 2003). "The cost of reporting data to the Federal Procurement Data Center this fall will drop to less than $1 per transaction from a current average of $32. The General Services Administration expects to save the government about $20 million annually on about 500,000 transactions by revamping the Federal Procurement Data System. GSA late last month awarded Global Computer Enterprises (GCE) a five-year, $24.3 million contract for the upgrade. The deal includes incentive clauses to extend the contract if the Gaithersburg, Md., systems integrator performs well. GCE has tapped Business Objects Inc. of San Jose, Calif., IBM Corp. and Oracle Corp. as subcontractors. GSA evaluated 44 proposals and found GCE's to be the best value, said David Drabkin, GSA deputy associate administrator for acquisition policy. Drabkin was not on the source selection team but reviewed finalists' proposals. 'GCE didn't just talk about what they could do -- they showed us in a prototype,' Drabkin said. 'They also offered us a negative incentive if they didn't meet the Oct. 1 deadline. These two things were pretty significant.' He said the small business offered GSA $100,000 if it failed to get the new system up and running by October 1 [2003]. Under the GCE proposal, GSA will own the data and the company will retain ownership of the hardware and software... The high cost of maintaining the current system, developed in 1979 and updated in 1996, comes from the manual labor agencies and data center staffs must do, he said. Agencies send batch files of procurement information to a feeder system, where data entry personnel further format it. Then data center employees check for errors and enter it into the repository. 
To reduce costs, GCE will develop and implement a repository system that uses Web services and Extensible Markup Language to connect to agency legacy systems and transfer data, company president Ray Muslimani said... The upgraded system will provide specific information about purchases between $2,500 and $25,000. The current system gives only summarized reports about those buys... Muslimani said the developers must educate agencies about the new system. GCE will publish a directory of Web services codes and XML schemas that will show agencies how to integrate their systems with GSA's..." See the announcement: "Global Computer Enterprises Wins E-Government Contract from GSA. FPDS-NG Will Collect Procurement Data From Across the Federal Government in Real-Time."

  • [May 30, 2003] "The HumanML Initiative: Enabling Internet Communication in a Global Market." By Russell Ruggiero. From WSReview.com (May 29, 2003). "Human markup language (HumanML) is a specification being developed by the Organization for the Advancement of Structured Information Standards (OASIS). OASIS is a nonprofit consortium that advances electronic business by promoting open, collaborative development of interoperability specifications. In basic terms, HumanML has been designed to represent human characteristics through XML, and is focused on enhancing the fidelity of human communication. While the goals of HumanML may seem ambitious, the vision is very well timed for the dramatic changes currently taking place, such as the explosive Internet growth of the non-Western world. During the next ten years, accurate information exchange between people of different cultures and origins will be a growing concern pertaining to Internet communication. Accordingly, there will be a need for emerging technologies such as HumanML that will improve global communication while advancing the Internet to a higher level. HumanML is an always-evolving standard XML vocabulary designed to represent human characteristics through XML, while at the same time enhancing the fidelity of human communication... The Human Language Primary Base Specification 1.0 finished its 30-day public review December 12, 2002 and is being used to build sample implementations to support its case for approval as an OASIS Standard when it can field three member companies whose documented use can be submitted with the specification for formal approval voting... HumanML describes the XML and resource description framework (RDF) Schema specifications being developed by a designated technical committee at OASIS that contains sets of modules that frame and embed contextual human characteristics. 
Other significant efforts within the scope of the HumanML Technical Committee, which address the overall concerns of representing and amalgamating human information within data include: Alternative Schema Constraint Mechanisms; Human Physical Characteristics Description; Virtual Reality/Artificial Intelligence; Conflict Resolution applications; Messaging; Object Models; Repository Systems; Style... HumanML may be utilized in government and private projects to help improve information exchange between people of dissimilar cultures and origins. Case in point: HumanML could be used to assist government and private sectors in the Iraq reconstruction effort by enabling interested parties to gain a better understanding of one another, which will ultimately lead to improved information exchange. Thus, it may help to mitigate the incidence of misrepresentation while improving the overall efficiency of the reconstruction effort. In addition, the emergence of HumanML is particularly well-timed and positioned for the anticipated growth of non-Western Internet users in the new millennium. Simply put, HumanML can help bring down many of the current global communication barriers by acting as a bridge that allows people to express themselves in a more cohesive and coherent manner..."

  • [May 29, 2003] "OASIS to Develop Common Security Language." By Paul Roberts (IDG News Service). In InfoWorld (May 29, 2003). "A new committee at the Organization for the Advancement of Structured Information Standards (OASIS) is laying the groundwork for a new classification system to describe Web security vulnerabilities. The OASIS Web Application Security (WAS) Technical Committee will be responsible for developing an XML (Extensible Markup Language) schema that describes Web security conditions and provides guidelines for classifying and rating the risk level of application vulnerabilities... The new OASIS WAS standards will be similar to the list of Common Vulnerabilities and Exposures (CVE) that is used to standardize the description of network level vulnerabilities, said Wes Wasson, vice president of marketing at Netcontinuum in Santa Clara, Calif. Unlike the CVE list, however, WAS descriptions will tackle the thornier issue of describing application vulnerabilities that could be exploited using multiple avenues of attack, Wasson said. The announcement Wednesday follows the formation in April of a related technical committee to develop an XML definition for exchanging information on security vulnerabilities between network applications. The OASIS Application Vulnerability Description Language (AVDL) Technical Committee is intended to develop standards to deploy heterogeneous but interoperable security technology relying on a standardized description of vulnerabilities. The work of the WAS Technical Committee will track closely with that of the AVDL Technical Committee, which will make sure diverse security products can work with the common vulnerability descriptions developed by the WAS group, Wasson said... OWASP plans to submit its Vulnerability Description Language (VulnXML), an open-standard data format for describing Web application security vulnerabilities, to the new committee, OASIS said. 
That standard should be quickly adopted by the OASIS WAS Technical Committee as its schema for describing attacks, Wasson said. That completed, the Committee will need to focus on the harder task of developing an infrastructure for responding to new vulnerabilities that are discovered. That infrastructure, like the one currently in place for the CVE list, will involve processes for collecting information about new vulnerabilities from companies and security researchers, developing descriptions for those vulnerabilities, then making that information public via a Web site such as the CVE site, which is managed by the nonprofit MITRE..." See details in: (1) the news story of May 13, 2003, "OASIS Members Form Web Application Security Technical Committee"; (2) the press release "OASIS Works to Establish Classification Standards for Web Security Vulnerabilities." General references in "Application Security".

  • [May 27, 2003] "XMPP vs SIMPLE: The Race For Messaging Standards." By Cathleen Moore. In InfoWorld (May 27, 2003). ['As IM bounds ahead in the enterprise, a behind-the-scenes battle is taking place between competing IETF standards.'] "There's a race on for the future of the enterprise messaging system. The contestants are backing competing protocols for IM and presence awareness. Which standard takes home the prize may depend less on technical merits than on brute force. At the head of the competition are SIMPLE (Session Initiation Protocol for Instant Messaging and Presence Leveraging Extensions) and the open-source, XML-based XMPP (Extensible Messaging and Presence Protocol). Both are currently being developed by the Internet Engineering Task Force (IETF). SIMPLE backers extol the broad media possibilities of a SIP-based signaling protocol with natural affinities for voice, video, and conferencing. XMPP proponents, on the other hand, tout an XML-based data transport technology that is built to manage IM and presence. SIMPLE is a set of extensions to the established SIP protocol that initiate, set up, and manage a range of media sessions, including voice and video. SIMPLE extensions define SIP signaling methods to handle the transport of data and presence. SIMPLE's designers set out to develop a system that represented the communications state as broadly as possible, supporting presence not just for PC messaging applications but also for devices such as phones and PDAs, says Jonathan Rosenberg, chief scientist at Parsippany, N.J.-based dynamicsoft and co-author of SIP and SIMPLE. 'We realized a long time ago that presence and IM [are] just another facet of communications, and that is what SIP is all about. IM is just like voice and video; it is another aspect of real-time, person-to-person communications,' Rosenberg says. 
SIMPLE's capability of unifying voice, video, and data messaging appealed greatly to Microsoft, according to Ed Simnett, lead product manager of RTC (Real Time Communications) Server at the Redmond, Wash.-based software giant. According to observers, one potential problem with SIMPLE is that it is a paging protocol meant to perform signaling but not to carry anything else... Because of the inherent limitations of SIP and because many SIMPLE extensions are still under construction, the existing implementations of the protocol from Microsoft and IBM have included proprietary extensions. Furthermore, SIMPLE is missing core IM-related functionality such as contact lists and group chat capabilities, according to observers. Another potential pitfall with SIMPLE is that SIP uses both TCP and UDP (User Datagram Protocol) as transport layers. TCP includes congestion control, whereas UDP does not, thereby opening the door for packet loss during times of network congestion... Meanwhile, proponents of XMPP contend that an XML-based data-transport technology is better suited than a signaling technology to handle IM and presence. According to its designers, one major benefit of XMPP is that it can be extended across disparate applications and systems because of its XML base..." See: (1) "Extensible Messaging and Presence Protocol (XMPP)"; (2) IETF SIMPLE Working Group; (3) the related news story "IETF Publishes Internet Drafts for XML Configuration Access Protocol (XCAP)."
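XMPP's "XML-based data transport" means concretely that IM traffic is a stream of XML stanzas. A sketch of building the kind of <message/> stanza XMPP routes (the JIDs are hypothetical; jabber:client is the namespace XMPP client streams use):

```python
# Build a minimal XMPP chat message stanza as serialized XML.
import xml.etree.ElementTree as ET

def message_stanza(from_jid, to_jid, text):
    msg = ET.Element("message", {
        "xmlns": "jabber:client",
        "from": from_jid,   # sender's Jabber ID
        "to": to_jid,       # recipient's Jabber ID
        "type": "chat",     # one-to-one chat, as opposed to groupchat
    })
    body = ET.SubElement(msg, "body")
    body.text = text
    return ET.tostring(msg, encoding="unicode")

print(message_stanza("alice@example.com", "bob@example.org", "hello"))
```

Because the stanza is plain XML, new payload elements in other namespaces can ride alongside <body/>, which is the extensibility XMPP proponents cite.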

  • [May 27, 2003] "XML Transactions for Web Services, Part 3." By Faheem Khan. From O'Reilly WebServices.xml.com (May 27, 2003). ['Part 3 in a series explaining XML-based transactions. Faheem Khan describes Business Activities, the next layer up from atomic transactions, which encapsulates long-lived collections of transactions.'] "In the first article of this series I introduced the idea of XML-based transactions. In the second article, I described an Enterprise Application Integration (EAI) scenario which was meant to demonstrate XML messaging in an Atomic Transaction (AT). I demonstrated how the sales module of a PC assembler interacts with inventory, database, assembly line management and accounts modules to fulfill a bookOrder method invocation request. In this third and final article of the series, I'll discuss another type of XML transaction, the Business Activity (BA). The BA specification is included in the same WS-Transaction document that includes the AT specification. I will extend the AT discussion of the second article to establish a case for BA and demonstrate how BA enables XML-based transactions to cross the boundaries of an enterprise and allows humans to interact with software modules and participate in real world business tasks... Both AT and BA represent coordinated activities. AT is an all-or-nothing activity, while BA is flexible. I have demonstrated the use of BA on a higher level to divide the entire business logic as a hierarchy of small, easier-to-manage tasks. [And] I have shown how you can represent the smallest unit of a coordinated activity as an AT..."

  • [May 27, 2003] "Caching XML Web Services for Mobility." By Douglas Terry (Microsoft Research, Silicon Valley Lab) and Venugopalan Ramasubramanian (Cornell University). In ACM Queue Volume 1, Number 3 (May 2003), pages 71-78. ['Disconnected operations will likely be a problem for wireless environments for some time to come. Can local HTTP caching provide a way around such outages?'] "One of the key requirements for the success of Web services is universal availability. Web services tend to be accessed at all times and in all places. People use a wide range of devices including desktops, laptops, handheld personal digital assistants (PDAs), and smartphones that are connected to the Internet using very different kinds of networks, such as wireless LAN (802.11b), cellphone network (WAP), broadband network (cable modem), telephone network (28.8-kbps modem), or local area network (Ethernet). Occasional to frequent disconnections and unreliable bandwidth characterize many of these networks. The availability of Web services is thus a significant concern to consumers using mobile devices and working in different kinds of wireless and wired networks... A good solution would be applicable to all Web services and would involve interposing storage and computation transparently in the communication path of the client and the server without modifications to Web-service implementations on the client or the server... To study the suitability of caching to support disconnected operation on Web services, we conducted an experiment in which a caching proxy was placed between Microsoft's .NET My Services and the sample clients that ship with these services. The .NET My Services were chosen for this experiment because, although they are not commercial services, they were publicly available at the time of the study, well documented, and are representative of non-trivial XML Web services that support both query and update operations... 
Extensions to the WSDL standard are needed to customize proxy-based or application-embedded cache managers based on the semantics of specific Web services. Additional information placed in a Web service's WSDL description could indicate which operations are updates and which are cacheable, thereby increasing the effectiveness of caching schemes. For example, one might annotate the WSDL operation elements, which are used to specify the request and response formats for each operation exported by a Web service. Such annotations extend the description of the Web service's interface. These annotations would not affect tools that automatically generate Web-service clients from WSDL specifications, but are simply used to adjust the behavior of cache managers. Annotations can be added to the WSDL description without requiring any modifications to the implementation of the Web service. These annotations could either be published by the service provider or by a third-party provider reasonably aware of the semantics of the Web service. The full set of WSDL extensions needed to facilitate caching remains to be explored and standardized. Extended WSDL specs would allow client-side caches to provide better cache consistency and, hence, a more satisfying mobile user experience. The designers of future Web services should produce extended WSDL specifications that allow effective caching on both mobile and stationary devices for both disconnected operation and increased performance. Development tools driven by WSDL specs could then aid in the construction of mobile applications, a notoriously difficult endeavor. The ultimate goal is to provide seamless offline-online transitions for users of mobile applications that use emerging Web services..."
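The annotation scheme proposed above could look like the following sketch: a cache manager reads extension attributes on WSDL operation elements to learn which operations are cacheable and which are updates. The extension namespace and attribute names are hypothetical, since, as the authors note, the real WSDL extensions remain to be explored and standardized.

```python
# Sketch: read hypothetical caching annotations from a WSDL portType.
import xml.etree.ElementTree as ET

WSDL = "http://schemas.xmlsoap.org/wsdl/"
CACHE = "http://example.org/wsdl-cache"  # hypothetical extension namespace

wsdl_text = f"""
<definitions xmlns="{WSDL}" xmlns:c="{CACHE}">
  <portType name="MyService">
    <operation name="getProfile" c:cacheable="true"/>
    <operation name="updateProfile" c:update="true"/>
  </portType>
</definitions>
"""

def cacheable_operations(text):
    """Return the operations a cache manager may answer from its cache."""
    root = ET.fromstring(text)
    ops = root.iter(f"{{{WSDL}}}operation")
    return [op.get("name") for op in ops
            if op.get(f"{{{CACHE}}}cacheable") == "true"]

print(cacheable_operations(wsdl_text))  # ['getProfile']
```

Because the annotations live in a separate namespace, WSDL tooling that does not understand them simply ignores them, which is exactly the non-intrusiveness the article argues for.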

  • [May 27, 2003] "XMLSec 1.0 Helps Ensure Multi-Platform WS-Security." By Vance McCarthy. In Integration Developer News (May 27, 2003). Developers "have one more assurance that they will be able to implement WS-Security across multiple platforms. The XML Security Library, an Open Source implementation of W3C's XML Digital Signature and XML Encryption (the core components of WS-Security), has made its 1.0 release available to developers. It includes support for multiple crypto engines, including OpenSSL, GnuTLS, and NSS. The release of XMLSec 1.0 comes as the WS-I (Web Services Interoperability Organization) has launched a Basic Security Working Group to ensure that WS-Security implementations will operate across multiple and diverse platforms -- .Net, Java and legacy systems... The XML Security Library is based on an Open Source C library implementation. The 1.0 release, directed by Aleksey Sanin, includes multiple crypto engines support (with 'out of the box' support for OpenSSL, GnuTLS and NSS); simplified and cleaned internal structure and API; several performance and memory usage improvements; and new or updated documentation (tutorial, API reference manual and examples)... Developers can also obtain an XML Security Library XML Signature Interoperability Report that describes how XMLSec works with OpenSSL, GnuTLS and NSS..." See also the FAQ document.

  • [May 27, 2003] "Q&A: Walter Hamscher, XBRL International." By Erin Joyce. From InternetNews.com (May 23, 2003). Interview. "You could say that PricewaterhouseCoopers consultant Walter Hamscher began his work with the XML standards consortium XBRL International more than a few years ago while he was working on his PhD in computer science at MIT. He was working with artificial intelligence and needed some data to feed the programs, but was frustrated by the lack of quality in formatting of the financial data available to him. Years later he would help write the first version of XBRL, or extensible business reporting language, which is based on XML, the fundamental building block behind the Web services movement to standardize and enable machines to share and interpret data. As the organization explains, XBRL uses XML data tags to describe financial information for public and private companies and other organizations. XBRL International and other standards bodies work together to produce specifications and taxonomies that anyone can license for use in applications. XBRL is also available to license royalty-free worldwide from XBRL International. These days, Hamscher spends every waking hour talking, walking, breathing and preaching his message about how XBRL will revolutionize not only how we use and share financial data, but how that data will improve how money makes the world go round... [He says:] 'XBRL International has over 200 members around the world working on XBRL, which is an XML standard that is meant to facilitate the transfer of financial information along the business information supply chain. The idea is by creating a common framework, a common XML standard, you can accelerate the flow of information to investors and other users... 
XBRL creates a common language in which a data point like a customer number or employee ID, or amount or currency, are always called the same thing and so an app that needs to consume that (data) will know what to do with that... With XBRL general ledger standards, for example, there are well over 50 different data items in order to capture what are called journal entries in an accounting system. A journal entry might be, say, a $1,000 credit to an account... In the U.S., we have a GAAP (generally accepted accounting principles) taxonomy, which is what we call a whole collection of concepts encoded in XBRL, with well over 1,000 different data items. It's a very large task, something that's been in development for over two years, but it's nearing completion. Right now, there's really not a piece of software that can go and extract the revenue recognition policy of one company at a given point, and then go and get the corresponding policy at another point. There is no reliable way to do that. XBRL makes that possible. The reason that's important is because so much of what goes on in financial reporting isn't really about the numbers, but about the explanation that sits behind the numbers. If I tell you I have receivables of $1 million, you might ask, well, is that $999,999 from one customer? That's a very important question in order to understand the real value of those receivables. So the development of XBRL is a base language on which these richer taxonomies of terms are then built. There are dozens of these taxonomies under development or already published. In Japan, for example, the (main) stock exchange is now accepting financial information in XBRL format, and they have a taxonomy that represents Japanese accounting standards. At the FDIC (Federal Deposit Insurance Corporation), they're developing a taxonomy for bank reporting, Britain's version of the IRS has one for British tax filings, and so on. 
Then we have the largest of the taxonomies, the US GAAP taxonomies and what are called the international accounting standards (IAS), which are basically the accounting standards used by the rest of the world other than the US. It has its own very large taxonomies. The point is that XBRL is a base language with an emerging large number of taxonomies built in XBRL in order to capture different accounting standards, just like the IT industry has its own, such as RosettaNet'..." General references in "Extensible Business Reporting Language (XBRL)."

  • [May 27, 2003] "XML Data Management: Information Modeling with XML. Book Excerpt Offers Guidelines for Achieving Good Grammar and Style." By Chris Brandin (Chief technology officer, NeoCore). From IBM developerWorks, XML zone. May 27, 2003. ['An excerpt from Chapter 1 of the book XML Data Management: Native XML and XML-Enabled Database Systems, co-edited by developerWorks editor Akmal Chaudhri. Addison-Wesley Professional; ISBN 0-201-84452-4.'] "When XML first came into use, it was seen primarily as a data interchange standard. Since then it has come to be used for more and more things -- even serving as the core for development and deployment platforms such as Microsoft's .NET. Increasingly, XML has become the means to model components of information systems, and those components automatically construct themselves around what has been expressed in XML. This represents the real potential of XML -- the ability to model the behavior of an entire application in XML once, instead of repeatedly in different ways for each component of an application program. As long as XML was used as a container for data managed by legacy systems, it was sufficient to consider only syntax when building documents. Now that XML is being used to do more than simply express data, it is important to consider grammar and style as well. Obviously, proper syntax is necessary for parsers to be able to accept XML documents at all. Good grammar ensures that once XML information has been assimilated, it can be effectively interpreted without an inordinate need for specific (and redundant) domain knowledge on the part of application programs. Good style ensures good application performance, especially when it comes to storing, retrieving, and managing information. Proper XML syntax is well understood and documented, so that topic will not be discussed here. This chapter does not discuss how to build XML schemas or DTDs, as they are also well documented elsewhere. 
This chapter is intended as a practical guide to achieving good grammar and style when modeling information in XML -- which translates to building flexible applications that perform well with minimal programming effort. Grammar is often regarded as being either right or wrong. True, there are 'wrong' grammatical practices; but past that, there is good grammar and bad grammar -- and everything in between. Arguably, there is no such thing as wrong style, only a continuum between the good and the bad... One of the most promising things about XML, and the new breed of tools built on it, is that we can build applications that are driven by a single information model rather than multiple data models accommodating each application function. We can change the behavior and functionality of application programs by changing the underlying XML rather than by changing code. Additionally, we can optimize performance by changing the way information is expressed. Even in environments not fully leveraging XML as a central information model, it is important to design good XML for the sake of readability and maintainability. Building good applications efficiently requires that we learn not only to use XML correctly, but that we learn also to use it well..."
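The book's distinction between syntax, grammar, and style can be made concrete with a small, invented illustration (not drawn from the excerpt itself). Both fragments below are syntactically valid XML, but only the second models the information so that an application needs no out-of-band knowledge to interpret it:

```xml
<!-- Weak style: the structure an application needs is buried in one text node,
     so every consumer must re-implement the same ad hoc parsing. -->
<contact>John Smith, 42, Denver</contact>

<!-- Better: the model is explicit in the markup, so the document
     carries its own interpretation. -->
<contact>
  <name>John Smith</name>
  <age>42</age>
  <city>Denver</city>
</contact>
```

The element names here are hypothetical; the point is the excerpt's argument that well-modeled XML shifts domain knowledge out of application code and into the document itself.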

  • [May 27, 2003] "Introducing XUL - The Net's Biggest Secret: Part 1." By Harry Fuecks. From SitePoint (May 21, 2003). Part 1 of a 3-part series. ['What if I was to tell you that you can write your own version of Word using something like HTML and JavaScript? What if I added that you could run it on your hard disk or launch it directly from your Web server and use it to update your site's content? It sounds a little far fetched, I know, but it's right here, right now -- and it calls itself "Zool".'] "In one of the Internet's quieter corners, mozilla.org, a revolution has been taking place. A new XML format, called XUL (Extensible User Interface Language), pronounced Zool, is on the way to re-shaping what we know about both the Internet and desktop applications. A bold claim perhaps -- but once you've finished reading this, you may just find yourself agreeing. The conceptual leap that's taken place at Mozilla is to think beyond the Web browser as simply being a tool for viewing Web pages, and instead, to look at it as a framework -- a runtime environment for executing applications, just as you might run programs in the Java and .NET runtime environments... What's fascinating about XUL and its sister technology, XPCom (Cross Platform Component Object Model), is that they have all the hallmarks of .NET and Java: (1) a library of 'widgets' for building applications -- as with .NET WinForms and Java's Swing; (2) separation of presentation logic from application logic, the presentation logic being handled by JavaScript; (3) support for XML messaging protocols like SOAP and XML-RPC; (4) support for multiple languages for building 'code behind' components including C++, Python and Ruby, with Perl in progress -- though, sadly, no PHP yet; (5) truly cross platform; anywhere you can run Mozilla, you can run your XUL/XPCom applications. 
But it doesn't stop there -- writing an application in XUL is like writing a Web page with DHTML, except that your XUL application will work, while your DHTML might... XUL provides a markup that will be easy for anyone with HTML experience to pick up, and has all the advantages of a text-based markup language, such as being able to generate it 'on the fly' with PHP. Better yet, XUL allows for the use of existing technologies, such as CSS, to modify the look and feel of your XUL applications, and SVG, to add some visual flair. You can also mix HTML with XUL -- you can put together hybrid pages to, for example, bring a boring HTML page to life with some XUL widgets... All you need in order to run XUL applications is to have Mozilla installed, right? Well, almost. There are a number of projects that aim to bring XUL to other runtimes and environments, such as Mozilla's Blackwood Project, Luxor XUL, jXUL and XULUX, all of which bring XUL to Java in some manner. It's also possible to embed Gecko, which could bring XUL to devices like mobile phones and PDAs..." With source code. See also Part 2. General references in: (1) "Extensible User Interface Language (XUL)"; (2) "XML Markup Languages for User Interface Definition."
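For readers who have never seen the format the article describes, a minimal XUL document looks like ordinary XML with widget elements in Mozilla's XUL namespace; this illustrative example (not taken from the article) declares a window containing a single button wired to a JavaScript handler:

```xml
<?xml version="1.0"?>
<!-- Minimal illustrative XUL: a window with one button.
     Loading this in Mozilla renders a native-looking button. -->
<window title="Hello XUL"
        xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul">
  <button label="Press me" oncommand="alert('Hello from XUL!');"/>
</window>
```

As the article notes, presentation is handled by the markup (styleable with CSS) while behavior lives in JavaScript handlers such as the oncommand attribute above.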

  • [May 27, 2003] "WSDL Tales From The Trenches, Part 1." By Johan Peeters. From O'Reilly WebServices.xml.com (May 27, 2003). ['Web services are still in their infancy as a technology, and there's relatively little practical experience available to draw on. Johan Peeters describes best practices and common errors he has discovered while deploying web services in the field.'] "Recently I retrofitted WSDL to a set of existing web services. A customer had a server running and there was a client implementation. The client and server team had been working closely together and now the time had come for another client implementation by a development team on the other side of the globe. A clear specification of the services was needed, and that's what WSDL is for. So I set out to make explicit what was previously implicit. It turned out to be an instructive experience, reaffirming some good old software engineering practices and uncovering a new set of problems specific to web services, WSDL, and XML Schema. There were clearly some design flaws at the outset which were hard to pinpoint. It is likely that these mistakes would not have been made if the designers had formally written down their service definitions in WSDL. So this is the core message of the series: write WSDL up front, do not generate it as an afterthought, as is often suggested by vendors... This series will not provide an overview of WSDL; it also assumes familiarity with W3C XML Schema. This first article in the series considers what sound software engineering practice and distributed computing experience offer to web service design. I review some of the important design decisions that a web service designer must make and offer some advice to guide the process. The rest of the series is about how to represent the design. In the second article I shine a light in some of the dark corners of the WSDL 1.1 specification, leaving out data type definitions, which are the subject of the third article. 
I look at WXS from the perspective of someone who uses it to specify the data that will be sent across the web service interface. ... Ideally, when designing web services, you should not have to worry about implementation issues. The whole point of design is that you look at the system at a high level of abstraction, not allowing the implementation complexities to befuddle you. But you can make all the right design choices and the design will be useless if the tool chains that are going to be used do not support the constructs in your WSDL. I strongly recommend prototyping the design specification as a reality check. This edge is still raw..." See also Part 2.
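The article's core advice, writing the WSDL contract before any implementation exists, amounts to starting from a skeleton like the following. This is an illustrative WSDL 1.1 fragment (names invented, binding and service sections omitted) rather than anything from the article itself:

```xml
<!-- A hand-written, design-first WSDL 1.1 contract sketch.
     The abstract interface (messages and portType) is pinned down
     before any server code is generated from it. -->
<definitions name="Quote"
    targetNamespace="urn:example:quote"
    xmlns="http://schemas.xmlsoap.org/wsdl/"
    xmlns:tns="urn:example:quote"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <message name="GetQuoteRequest">
    <part name="symbol" type="xsd:string"/>
  </message>
  <message name="GetQuoteResponse">
    <part name="price" type="xsd:float"/>
  </message>
  <portType name="QuotePortType">
    <operation name="GetQuote">
      <input message="tns:GetQuoteRequest"/>
      <output message="tns:GetQuoteResponse"/>
    </operation>
  </portType>
</definitions>
```

Writing this document first, rather than generating it from finished code, is exactly the practice the series argues for: it forces the interface design decisions into the open before two teams on opposite sides of the globe start implementing against it.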

  • [May 27, 2003] "All Consuming Web Services." By Erik Benson. From O'Reilly WebServices.xml.com (May 27, 2003). ['Erik Benson describes his work on All Consuming, a web application built on top of the free services offered by weblogs.com, blo.gs, Google, and Amazon. It's an interesting example of the value that can be created by combining multiple web service components from completely independent sources.'] "By taking small steps, first consuming information from multiple web services, and then exposing newly processed information via your own web services, we can begin to build complex applications in our spare time, with very few resources required up front. All Consuming is one such application that's built on top of free services offered by weblogs.com, blo.gs, Google, and Amazon; it offers an interesting slice of the book life that exists on the Web and in the world. None of these services were built with All Consuming in mind, and yet each one plays a crucial role in supporting All Consuming, and benefits from doing so... All Consuming, inspired by Paul Bausch's work with Book Watch, is a site dedicated to providing interesting book lists. I wanted to know what people on the Web were reading without explicitly asking them. I didn't want to know what people were buying, necessarily, nor what editors thought they should be reading, but what people were actually reading, actually talking about, and actually engaged by... Here's how All Consuming works. Every hour a Perl script checks Weblogs.com's Recently Updated list for the weblogs that were updated during the last hour. Since the Recently Updated list is available in XML, the script is able to separate the information that it needs from the information it doesn't need very easily. The information it needs is simply a list of URLs, each of which the script then visits and reads. 
When reading each site that it visits, it looks for text that matches a certain pattern that signifies a link to Amazon, Book Sense, Barnes & Noble, or even All Consuming. It doesn't matter to the script which site you link to, since they all use ISBNs (International Standard Book Numbers) as book IDs in their URLs. Upon finding a link it recognizes, it saves the ISBN along with an excerpt of the paragraph or so of text that the link appeared in... The data is out there, stored digitally on weblogs and other sites all over the Internet, just waiting to be looked at. The data is accessible via standard HTML markup and web services like SOAP and XML, making it easily processed and interpreted by simple machines and scripts that anyone can write. Finally, the data is interesting: it gives us a glimpse of the patterns and trends that emerge out of the collective activity of the entire group, bypassing the traditional necessity of trusting a few voices to represent the many. It is truly a model for distributed idea generation and interpretation which is only beginning to be tapped. All Consuming is a tiny filter on top of this vast collection of group activity, aimed solely at finding connections between weblogs and books, but I look forward to the day when hundreds of other views of the data are available to consume and build upon..."
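The link-scanning step described above (the article's script is written in Perl) can be sketched in a few lines: scan a page's markup for bookstore links and pull out the ISBN embedded in the URL. The regex below is a simplified illustration covering only common Amazon-style link shapes, not the actual patterns All Consuming uses:

```python
# Illustrative sketch of ISBN extraction from weblog HTML.
# Amazon links of the era carried the ISBN as the ASIN path segment;
# the alternate patterns here are hypothetical simplifications.
import re

ISBN_PATTERN = re.compile(r"(?:/ASIN/|/isbn/|isbn=)(\d{9}[\dXx])")

def find_isbns(html):
    """Return the unique ISBNs found in a page, in order of appearance."""
    seen = []
    for match in ISBN_PATTERN.finditer(html):
        isbn = match.group(1).upper()
        if isbn not in seen:
            seen.append(isbn)
    return seen

page = ('I just finished <a href='
        '"http://www.amazon.com/exec/obidos/ASIN/0596002920/ref=x">'
        'this book</a> and loved it.')
print(find_isbns(page))  # ['0596002920']
```

As the article observes, because every bookstore it recognizes keys its URLs on ISBNs, one pattern-matching pass is enough to normalize links from different stores into a single book identifier.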

  • [May 27, 2003] "Adobe Ships New Acrobat Versions." By David Becker. From CNET News.com (May 27, 2003). "Software maker Adobe Systems released on Tuesday new versions of its Acrobat electronic-publishing software. As previously reported, version 6 of Acrobat splits the software into several versions, targeting different classes of publishing professionals and regular office workers. The new Acrobat Elements is a light-duty version of the software that's intended to let ordinary office workers easily convert documents into Adobe's widespread Portable Document Format (PDF). Adobe, which had mainly dealt with boxed software, will only sell Acrobat Elements to businesses under volume licensing plans for at least 1,000 licenses, priced at $29 per license. Acrobat Professional is a high-end version of the software for engineers, architects and others who need to produce PDF files from complex documents created with applications such as AutoCAD drafting software. Acrobat Professional sells for $449, or $149 for those upgrading from a previous edition of Acrobat. Acrobat Standard is the basic version of the software, upgraded to include new functions -- based on Extensible Markup Language (XML) -- that turn PDF files into interactive forms that can exchange data with corporate databases. Acrobat Standard sells for $299, or $99 for the upgrade version... Although the Reader software for viewing PDF files is one of the most widely distributed applications in computing, Adobe has only recently begun to try to capitalize on its position. The company last year launched new products that make PDF a presentation layer for viewing and sharing corporate data, and it forged alliances with enterprise software leaders such as SAP to better integrate PDF with existing business processes..." See references in "Adobe Announces XML Architecture for Document Creation, Collaboration, and Process Management."

  • [May 26, 2003] "XML Matters: Kicking Back with RELAX NG, Part 3. Compact Syntax and XML Syntax." By David Mertz, Ph.D. (Facilitator, Gnosis Software, Inc). From IBM developerWorks, XML zone. May 14, 2003. See also Part 1 and Part 2 in the series 'Kicking Back with RELAX NG'. ['The RELAX NG compact syntax provides a much less verbose, and easier to read, format for describing the same semantic constraints as RELAX NG XML syntax. This installment looks at tools for working with and transforming between the two syntax forms.'] "Readers of my earlier installments on RELAX NG will have noticed that I chose to provide many of my examples using compact syntax rather than XML syntax. Both formats are semantically equivalent, but the compact syntax is, in my opinion, far easier to read and write. Moreover, readers of this column in general will have a sense of how little enamored I am of the notion that everything vaguely related to XML technologies must itself use an XML format. XSLT is a prominent example of this XML-everywhere tendency and its pitfalls -- but that is a rant for a different column. Later in this article, I will discuss the format of the RELAX NG compact syntax in more detail than the prior installments allowed... On the downside, since the RELAX NG compact syntax is newer -- and not 100% settled at its edges -- tool support for this syntax is less complete than for the XML syntax. For example, even though the Java tool trang supports conversion between compact and XML syntax, the associated tool jing will only validate against XML syntax schemas. Obviously, it is not overly difficult to generate the XML syntax RELAX NG schema to use for validation, but direct usage of the compact syntax schema would be more convenient. Likewise, the Python tools xvif and 4xml validate only against XML syntax schemas. 
To help remedy the gaps in direct support for compact syntax, I have produced a Python tool for parsing RELAX NG compact schemas, and for outputting them to XML format. While my rnc2rng tool only does what trang does, Eric van der Vlist and Uche Ogbuji have expressed their interest in including rnc2rng in xvif and 4xml, respectively. Ideally, in the near future direct validation against compact syntax schemas will be included in these tools... In some corner cases, rnc2rng differs from trang. For example, both tools force an annotation to occur inside a root element in XML syntax, even if the annotation line occurs before the root element in the compact syntax. Since well-formed XML documents are single-rooted, this is a necessity. But trang also moves comments in a similar manner, while rnc2rng does not. At a minimum, the two tools use whitespace in a slightly different manner. Most likely, a few other variations exist, but ideally none that are semantically important..." Article also in PDF format. See the column listing for other articles in 'XML Matters'. General references in "RELAX NG."
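The equivalence the column discusses is easiest to see side by side. The schema below is an invented two-element example, not one from the article; the compact form is what tools such as trang and rnc2rng read, and the XML form is what jing and the Python validators consume:

```
# RELAX NG compact syntax:
element card {
  element name { text },
  element email { text }
}

# Equivalent RELAX NG XML syntax
# (namespace http://relaxng.org/ns/structure/1.0):
<element name="card" xmlns="http://relaxng.org/ns/structure/1.0">
  <element name="name"><text/></element>
  <element name="email"><text/></element>
</element>
```

Both describe the same constraint, a card element containing a name followed by an email, which is why conversion between the two forms is mechanical, the whitespace and comment-placement differences between converters notwithstanding.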

  • [May 26, 2003] "Government to Set Up ebXML-RosettaNet Link." By Kim Joon-bae. In Korea IT Times (May 20, 2003). "Two next-generation international e-commerce standards, 'ebXML' and 'RosettaNet,' will soon become interoperable, allowing the nation to take a leading role in linking the two mainstay standards on the global level. The Ministry of Commerce, Industry and Energy said on May 18, 2003 that it would make concerted efforts with the Korea Institute for Electronic Commerce, an organization leading ebXML development, and RosettaNet Korea to develop an adapter used to link the two standards. The decision came after a meeting organized by RosettaNet Korea and attended by MOCIE and KIEC, and the project will become a part of the 'RosettaNet-ebXML link plan' proposed by RosettaNet. Once the adapter is completed, firms in the electronics and electrics industries using RosettaNet will be able to engage in e-commerce activities with firms poised to adopt ebXML, which is in a final development stage before commercial use. Given the fact that ebXML is a global e-commerce standard and RosettaNet has established a foothold as an industry standard at home, interoperability between the two standards will contribute significantly to boosting the nation's role in linking them on the global level... The ministry's decision is expected to give a sense of stability to firms using RosettaNet by enabling interoperability with the fast-spreading international standard ebXML. It will also likely serve as a catalyst for adoption of ebXML in the electronics and electrics industries. The government has been providing full support for adoption of ebXML even before a commercial launch, citing its general-purpose use across a full variety of industries... 
'Interoperability will not lead to immediate effects since ebXML is still in a development stage, but the decision carries significance in that the government has begun to provide support for RosettaNet, which is fast spreading into the electronics and electrics areas,' said Prof. Kim Sun-ho at Myungji University..." General references in: (1) "RosettaNet"; (2) "Electronic Business XML Initiative (ebXML)."

  • [May 26, 2003] "APIs, Protocols, And Rogue Plumbers. Who Should Unclog the Web Pipes to Keep Information Flowing?" By Jon Udell. In InfoWorld (May 23, 2003). "My local bank is switching from one online bill-payment system to another. I'm looking forward to the new system, which will be an improvement on the current one, but I wasn't expecting this: IMPORTANT NOTICE FOR CURRENT BILL PAYMENT CUSTOMERS: When we switch to this new system you will be required to re-enter the payee data you have set up under our current system.... Integrating two Web-based systems, if only by brute force, is not only necessary but possible. If I can log in to both systems and drive them interactively, I can write a script to join them programmatically. Every Web application can be tortured into behaving like a Web service, even when its creator never intended it to... Web services exist so we won't have to unclog the pipes this way any more. HTML screen-scraping just feels like the dirty job it always was. The technical superiority of the Web services approach is well understood, but I don't think we fully grasp the political impact...The political hot potato here is a software tool that implements a private API. Use of the API is contingent on access to the tool because, well, that's how we've always done things. But not for much longer, I hope. The inexorable logic of Web services sets aside APIs in favor of protocols. XML messages flowing through the pipes enact those protocols. Anyone with authorized access to that plumbing will be able to monitor and inject messages quite easily, and everyone will know that's true..."

  • [May 26, 2003] "XHTML is the Most Important XML Vocabulary." By Kendall Grant Clark. From XML.com (May 21, 2003). "Taking the long view of recent technology, XHTML may be the most important XML vocabulary ever created. What I mean is not that XHTML will be the most widely deployed XML vocabulary, though if we take the long view, it could be. What I mean is that XHTML puts XML's reputation -- and, by extension, the W3C's reputation -- on the line to a greater degree than any other XML vocabulary... A reasonably computer-literate person can still learn to create XHTML 1.1 documents with reasonable effort and within a reasonable time. Even if it takes a week of evenings to become comfortable with the main features of XHTML, that's a small investment to make for a relatively big return. The Web's success, then, is due in part to the simplicity and generality of HTML. The ongoing success of the Web will be in part a function of maintaining a positive balance between how difficult and how empowering it is to learn XHTML. Some form of HTML, eventually XHTML, will always be the most common type of Web content; people will keep writing it by hand, building user interfaces with it, trying, succeeding, failing to scrape useful information from it, and so on. Any part of the Web's infrastructure with such a long future life cycle deserves careful, attentive, community shepherding... The May 2003 draft displays evidence that community feedback can make a difference to the development of a specification... Perhaps the most welcome development, particularly from the perspective of XML-DEV geeks, is the appearance of a normative RELAX NG schema for XHTML 2.0. This development is welcome because it signals a growing acceptance of RELAX NG -- a non-W3C schema specification language -- within the working groups of the W3C. 
It is also welcome because XHTML is among the most document-centric of all XML vocabularies, and having RELAX NG's fittingness for such vocabularies on display is a good thing... I am very pleased to report that the latest XHTML 2.0 draft contains a provision for a caption element, which may reside within either table or object elements. I applaud this rational, simplifying, and long overdue addition. There is more than enough evidence of the utility and need for exactly this sort of addition. XHTML 2.0 is headed in the right direction, even if you're among those who think that, for example, the style attribute should die a horrible death. Sometimes W3C working groups do not have much of an active user community with which to have dialog about their work. But in those lucky cases where there is such a community, working groups do well to pay careful attention to what they want and say. This general rule is even more important in the case of XHTML. Despite the widespread pessimism about XHTML's deployment, it is far, far too important to be left in the hands of a working group alone..." Note: A fifth public Working Draft of XHTML 2.0 has been published: XHTML 2.0, W3C Working Draft 6-May-2003. This version includes (Appendix B) an early implementation of XHTML 2.0 in RELAX NG.

  • [May 26, 2003] "XHTML 2.0." W3C Working Draft 6-May-2003. Edited by Jonny Axelsson (Opera Software), Beth Epperson (Netscape/AOL), Masayasu Ishikawa (W3C), Shane McCarron (Applied Testing and Technology), Ann Navarro (WebGeek, Inc), and Steven Pemberton (CWI - HTML Working Group Chair). Latest version URL: http://www.w3.org/TR/xhtml2. See also the diff-marked version, single XHTML file, PDF, and XHTML in ZIP archive. "XHTML 2 is a general purpose markup language designed for representing documents for a wide range of purposes across the World Wide Web. To this end it does not attempt to be all things to all people, supplying every possible markup idiom, but to supply a generally useful set of elements. It provides the possibility of extension using the span and div elements in combination with stylesheets... XHTML 2 is a member of the XHTML Family of markup languages. It is an XHTML Host Language as defined in XHTML Modularization. As such, it is made up of a set of XHTML Modules that together describe the elements and attributes of the language, and their content model. XHTML 2 updates many of the modules defined in XHTML Modularization 1.0, and includes the updated versions of all those modules and their semantics. XHTML 2 also uses modules from Ruby, XML Events, and XForms. The modules defined in this specification are largely extensions of the modules defined in XHTML Modularization 1.0. This specification also defines the semantics of the modules it includes. So, that means that unlike earlier versions of XHTML that relied upon the semantics defined in HTML 4, all of the semantics for XHTML 2 are defined either in this specification or in the specifications that it normatively references. Even though the XHTML 2 modules are defined in this specification, they are available for use in other XHTML family markup languages. Over time, it is possible that the modules defined in this specification will migrate into the XHTML Modularization specification... 
This document is the fifth public Working Draft of this specification. It should in no way be considered stable, and should not be normatively referenced for any purposes whatsoever. This version includes an early implementation of XHTML 2.0 in RELAX NG, but does not include the implementations in DTD or XML Schema form. Those will be included in subsequent versions, once the content of this language stabilizes. This version also does not address the issues revolving around the use of XLINK by XHTML 2. Those issues are being worked independent of the evolution of this specification. Those issues should, of course, be resolved as quickly as possible, and the resolution will be reflected in a future draft. Finally, the working group has started to resolve many of the issues that have been submitted by the public. If your particular issue has not yet been addressed, please be patient - there are many issues, and some are more complex than others..." General references in "XHTML and 'XML-Based' HTML Modules."

  • [May 26, 2003] "The XML.com Interview: Steven Pemberton." By Russell Dyer. From XML.com (May 21, 2003). "At the top of the HTML hierarchy stands Steven Pemberton, chair of the HTML working group of the World Wide Web Consortium (W3C). A lover of language, a writer, and an editor, as well as an organizer and a leader in the web community, he has had both subtle and profound influences over the Web, not only in HTML standards, but in concepts that permeate the Web. He has been at the center of the forces that have been guiding the Web for over a decade... I asked Pemberton for his thoughts about the future of HTML: 'XHTML is now being implemented, and implemented well, in many different browsers, and popping up in lots of unexpected places like cellular telephones, televisions, refrigerators and printers. In some ways an impediment to the full adoption of XHTML is IE (Microsoft Internet Explorer), since its HTML and XML engines are not well integrated. However, there are some things coming up that make me think IE will evolve into a shell that won't be doing much processing itself. It'll just be a way of combining different markup languages via plugins,' says Pemberton. As for the Web itself, he says, 'I think that we're only at the early stages of the development of the Web. In a sense, it has disappointed me that it has gone so slowly, but I've learned to live with that slow movement. I believe there are some advantages to it in that people are reviewing these things and we're getting a lot of community buy-in of what's being done.' With leaders like Steven Pemberton, I agree that we are probably in the early stages of the Web and can expect to see spectacular developments in time..."

  • [May 24, 2003] "Demystifying XML in Microsoft Office 2003." By Tim Hickernell, John Brand, Mike Gotta, David Yockelson, Andy Warzecha, and Steve Kleynhans (META Group). META Group News Analysis. May 15, 2003. "As part of its upcoming Microsoft Office 2003 release, Microsoft has announced various XML enhancements to Office applications as well as a new XML document and lightweight forms authoring tool called InfoPath (formerly XDocs). These enhancements, along with the InfoPath application, will be bundled only with the Enterprise Edition of Office 2003 Professional, but InfoPath will also be available as a standalone product. Because sales of Office 2003 are largely a moot point, following the success of Microsoft's new licensing programs in 2002 (approximately two-thirds of companies have Office on maintenance), and these new enhancements are available only in this "pre-sold" enterprise version, we do not view XML enhancements to Office 2003 as an attempt by Microsoft to boost sales of Office 2003. Rather, the changes will prepare the Office suite ahead of time for inevitable XML enhancements in upcoming Microsoft server product releases, specifically under the Yukon, Jupiter, and Longhorn code names... Custom schemas and data can then be saved in XML or written to other applications via Web services or Microsoft's ADO. If additional context is desired, the full form or document structure can be appended in a Microsoft Office XML format (e.g., InfoPath XML, WordML, SpreadsheetML). In fact, XML formats for Office binaries exist in the current version of Office, but they have been misconstrued as XML interchange formats, which has contributed to the confusion about what the real role of XML in Microsoft Office is.
With few XML-enabled enterprise applications deployed that can exploit the XML enhancements to Office 2003, and an expected slow enterprise deployment of Office 2003, we believe the inclusion of these features in Office 2003 is more about Microsoft preparing Office for a future role as an XML-capable front end to any back-office system (Microsoft and non-Microsoft) than about adding immediate value. To date, Microsoft's demonstrations of XML in Office 2003 have lacked sufficient focus to demonstrate ROI and, at times, even suggest that InfoPath forms and documents should be used as replacement user interfaces for existing enterprise applications (e.g., CRM, ERP) without regard to existing investment in and customization of these systems..." General references in "Microsoft Office 11 and InfoPath [XDocs]." [alt URL]

  • [May 24, 2003] "Web Services Internationalization Usage Scenarios." W3C Working Draft 16-May-2003. Edited by Kentaroh Noji (IBM), Martin J. Dürst (W3C), Addison Phillips (webMethods), Takao Suzuki (Microsoft), and Tex Texin (XenCraft). Latest version URL: http://www.w3.org/TR/ws-i18n-scenarios. Also available in non-normative XML format. Produced by the Web Services Internationalization Task Force of the W3C Internationalization Working Group, as part of the W3C Internationalization Activity. "The goal of the Web Services Internationalization Task Force is to ensure that Web Services have robust support for global use, including all of the world's languages and cultures... this document examines the different ways that language, culture, and related issues interact with Web Services architecture and technology. Ultimately this will allow us to develop standards and best practices for those interested in implementing internationalized Web Services. We may also discover latent international considerations in the various Web Services standards and propose solutions to the responsible groups working in these areas. Web Services provide a world-wide distributed environment that uses XML based messaging for access to distributed objects, application integration, data/information exchange, presentation aggregation, and other rich machine-to-machine interaction. The global reach of the Internet requires support for the international creation, publication and discovery of Web Services. Although the technologies and protocols used in Web Services (such as HTTP - RFC2616, XML, XML Schema, and so forth) are generally quite mature as 'international-ready' technologies, Web Services may require additional perspective in order to provide the best internationalized performance, because they represent a way of accessing distributed logic via a URI. 
As a result, this document attempts to describe the different scenarios in which international use of Web Services may require care on the part of the implementer or user or to demonstrate potential issues with Web Services technology. This document describes the following scenarios: (1) Locale neutral vs. locale-sensitive XML messages and data exchange; (2) Interaction between Web services and the underlying software system's international functionality; (3) Message processing in Web Services, e.g., SOAP Fault messages etc..." See "Markup and Multilingualism."
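The distinction in the first scenario — locale-neutral data on the wire versus locale-sensitive presentation — can be sketched in a few lines of Python. This is an illustrative toy, not taken from the W3C draft; the element names (`order`, `orderDate`, `total`) and the `displayLocale` attribute are invented:

```python
from datetime import date
from xml.etree import ElementTree as ET

def build_order_message(order_date: date, total: float, display_locale: str) -> str:
    """Build a toy XML message that keeps wire data locale-neutral.

    The payload carries an ISO 8601 date and a plain decimal number
    ('.' separator), so any consumer can parse it; only an advisory
    attribute records the requester's locale so a consumer MAY
    localize presentation (e.g., render '1234.50' as '1.234,50' for
    de-DE). All names here are hypothetical, for illustration only.
    """
    root = ET.Element("order", {"displayLocale": display_locale})
    # Locale-neutral wire formats: ISO 8601 date, fixed decimal point.
    ET.SubElement(root, "orderDate").text = order_date.isoformat()
    ET.SubElement(root, "total", {"currency": "EUR"}).text = f"{total:.2f}"
    return ET.tostring(root, encoding="unicode")

msg = build_order_message(date(2003, 5, 16), 1234.5, "de-DE")
print(msg)
```

The design point is the one the draft's scenarios turn on: formatting for a locale happens at the endpoints, never in the exchanged message itself.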

  • [May 23, 2003] "DB2 Information Integrator Goes Live." By IT Analysis Staff. In The Register (May 21, 2003). IBM has "published the packaging and pricing of DB2 Information Integrator. The basic concept is predicated upon a federated database approach in which multiple heterogeneous databases appear to the user as if they were a single database. However, Information Integrator is not limited to accessing relational data sources - it can also access XML, flat files, Microsoft Excel, ODBC, Web and other content stores, although updates and replication are limited to relational sources in the first release. Thus the full capabilities of DataJoiner have not been implemented in this release, although beta testing has shown improved performance compared to that product... you can query data wherever it resides, as if it was at a single location, with a single view across all the relevant data sources. The product supports queries by caching query tables across federated sources, while the optimiser will validate the SQL used against the source database and will automatically compensate if the relevant syntax is not supported on the remote database. Other features of the federation capabilities of the product include the ability to publish the results of a query to a message queue and to compose, transform and validate XML documents. In terms of updates and replication, Information Integrator acts as a replication server, initially supporting Oracle, Informix, Microsoft, Sybase and Teradata databases, as well as DB2. Functions are flexible with support for both one-to-many and many-to-one topologies, table-based or transaction-based data movement (which may be dependent on whether you have batch or online requirements), and latency which may be scheduled, interval-based or continuous. Perhaps the most notable feature of the beta trial has been the reports of increased developer productivity. 
This is partly because there is less hand coding required and, more particularly, because SQL queries do not have to be decomposed to act across the various databases involved... The big question is whether the market will warm to the concept of the federated database, which is enabled through DB2 Information Integrator. Microsoft notionally embraces the concept but it has done little to implement it, while Oracle's approach runs directly counter to federalism, with the company espousing consolidation (centralisation) instead. Thus IBM has to market federation on its own..." See the reference following.

  • [May 23, 2003] "IBM DB2 Information Integrator V8.1". IBM Software Announcement Letter. May 20, 2003. 19 pages. Referenced from the IBM DB2 Information Integration website. "IBM DB2 Information Integrator V8.1 is a new product from IBM that provides the foundation for a strategic information integration framework that helps customers to access, manipulate, and integrate diverse and distributed information in real time. This new product enables businesses to abstract a common data model across data and content sources and to access and manipulate them as though they were a single source. DB2 Information Integrator V8.1 supports the predominantly read-access scenarios common to enterprise-wide reporting, knowledge management, business intelligence, portal infrastructures, and customer relationship management. To address customer requirements, this product is offered in five editions... DB2 Information Integrator V8.1 is primarily targeted at the application development community familiar with relational database application development. Applications that use SQL or tools that generate SQL (for example, integrated development environments, reporting and analytical tools) can now access, integrate, and manipulate distributed and diverse data through a federated data server. This product is most appropriate for projects whose primary data sources are relational data augmented by other XML, Web, or content sources. With the Federated Data Server, administrators can configure data source access and define integrated views across diverse and distributed data... Configuration is simplified by built-in functions that help the administrator discover relevant data sources and metadata. XML schema can be automatically mapped into relational schema... Applications can query integrated views across diverse and distributed data sources as if they were a single database: (1) The query is expressed using standard SQL statements. 
(2) Text search semantics can be used within the query. A fast, versatile, and intelligent full text search capability is provided across relational data sources, including data sources that either don't support native text search or don't provide a broad range of text search capability. Numerous search operations are supported (such as Boolean, wildcard, free-text, fuzzy search, proximity search for words within the same sentence or paragraph, or search within XML documents). The query can produce standard SQL answer sets or XML documents. XML documents can be generated from the federated source data to facilitate interchange or automatically validated against DTDs or XML schemas. (3) SQL expressions can be used to transform the data for business analysis or data exchange. XML documents can be transformed using XSL for flexible presentation. Any Web service can be converted into a function call and used as a transformation. For example, a Web service that provides currency conversion can be used inline within the SQL expression. (4) Results can be made available to the rest of the organization by publishing them to a WebSphere MQ message queue using built-in functions. (5) The federated server uses cost-based distributed query optimization to select the best access paths for higher query performance. It leverages intelligence about optimizing access to the data sources provided by the data source wrapper, by database statistics, and optionally by the administrator...." [cache PDF]
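The federated-view idea the announcement describes — wrappers presenting heterogeneous sources as uniform relations so one query can join across them — can be illustrated with a toy Python sketch. Everything here is invented for illustration (DB2 Information Integrator's actual wrappers are native server plug-ins, not Python classes), but the shape of the abstraction is the same:

```python
from typing import Dict, Iterable, List

class Wrapper:
    """Toy analogue of a federated data-source wrapper: each source,
    whatever its native format, yields rows as uniform dicts."""
    def rows(self) -> Iterable[Dict]:
        raise NotImplementedError

class RelationalSource(Wrapper):
    """Stands in for a relational table already in row form."""
    def __init__(self, table: List[Dict]):
        self.table = table
    def rows(self):
        return iter(self.table)

class FlatFileSource(Wrapper):
    """Stands in for a flat file: 'cust_id,region' lines parsed to dicts."""
    def __init__(self, text: str):
        self.text = text
    def rows(self):
        for line in self.text.strip().splitlines():
            cid, region = line.split(",")
            yield {"cust_id": int(cid), "region": region}

def federated_join(orders: Wrapper, customers: Wrapper) -> List[Dict]:
    """Join across two sources as if both lived in one database."""
    regions = {r["cust_id"]: r["region"] for r in customers.rows()}
    return [{**o, "region": regions.get(o["cust_id"], "?")}
            for o in orders.rows()]

db = RelationalSource([{"cust_id": 1, "amount": 250.0},
                       {"cust_id": 2, "amount": 75.5}])
flat = FlatFileSource("1,EMEA\n2,APAC")
print(federated_join(db, flat))
```

The real product does this in SQL behind integrated views, with a cost-based optimizer deciding how much of the work to push down to each source; the sketch only shows why a uniform row interface makes the cross-source join possible at all.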

  • [May 23, 2003] "Microsoft Supporting UNeDocs Through Microsoft Office InfoPath. Smart Client For On-Line And Off-Line Processing of XML Forms." By Zdenek Jiricek (Public Sector Manager, Microsoft Eastern Europe). May 15, 2003. PDF of the 13 slides. From a presentation given at the May 2003 UNeDocs Seminar on electronic trade documents, held during the United Nations Forum on Trade Facilitation. "The Forum welcomed the efforts of the UNECE to develop and implement accessible tools and solutions for administrations and the trade community around the world." Jiricek identified current problems for trading: "(1) Data entry into various client apps multiple times [different client for different applications; unnecessary re-typing and data entry errors]; (2) Custom forms hard to use, inflexible, costly to maintain [Lacking rich editing experience; Static forms limit ability to provide required information; Difficult to modify existing forms and adjust processes]; (3) Hard to reuse data across business processes [Requires significant development work]... InfoPath benefits for UNeDocs: (1) Gather information more efficiently and accurately [Validation: via scripting or web services; Conditional formatting capabilities; Easy to implement digital signing, e.g., UPU web service]; (2) Manage information more flexibly [On-line, Off-line, and e-mail support; Visual creating / modification of forms; Dynamic forms w/ locking, creating optional fields]; (3) Take advantage of existing IT investments and knowledge [Web based solution deployment, supports any XML schema; Familiar Microsoft Office user experience]; (4) Share information across business processes [Connects to XML Web services; Can connect to multiple biz processes via MS BizTalk Server]..." (adapted excerpt) Canonical source .PPT, ZIP. See related information in "UNeDocs and InfoPath" in the news story "Microsoft and ACORD Use InfoPath for Linking Insurance Forms to XML Web Services." 
General references in "Microsoft Office 11 and InfoPath [XDocs]."
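The validation benefit Jiricek lists — catching bad or missing data in an XML form before it enters a business process — can be sketched in Python. The schema, element names, and rules below are invented for illustration; InfoPath itself validates against W3C XML Schema plus script or Web-service calls rather than a hand-written rule table:

```python
from xml.etree import ElementTree as ET

# Hypothetical rule set for a toy trade-document form:
# each required field maps to a predicate over its text value.
RULES = {
    "consignee": lambda v: bool(v and v.strip()),
    "grossWeightKg": lambda v: v is not None and float(v) > 0,
}

def validate_form(xml_text: str) -> list:
    """Return human-readable errors for a toy trade form instance."""
    doc = ET.fromstring(xml_text)
    errors = []
    for field, check in RULES.items():
        value = doc.findtext(field)  # None if the element is absent
        try:
            ok = check(value)
        except (TypeError, ValueError):
            ok = False  # e.g., non-numeric weight
        if not ok:
            errors.append(f"invalid or missing field: {field}")
    return errors

good = ("<tradeDoc><consignee>ACME</consignee>"
        "<grossWeightKg>12.5</grossWeightKg></tradeDoc>")
bad = "<tradeDoc><consignee/></tradeDoc>"
print(validate_form(good))  # []
print(validate_form(bad))
```

The payoff is the one the slides claim: errors are rejected at the form, so data re-typed into downstream systems is already clean.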

  • [May 23, 2003] "UN XML Project Gets Microsoft Support. Aims to Make Electronic Exchange of Business Documents Between Companies Cheaper." By Joris Evers. In InfoWorld (May 16, 2003). "A United Nations project that uses XML to give small and medium-size companies an alternative to paper forms when doing business across borders has won support from Microsoft. The Redmond, Wash., software maker at a UN conference in Geneva on Thursday demonstrated how the product of the XML project called UNeDocs would work in Microsoft's InfoPath application... UNeDocs, or United Nations extensions for aligned electronic trade documents, was started in 2002 by the UN's Economic Commission for Europe. The aim is to use XML to create an electronic equivalent for paper trade documents based on existing EDI (Electronic Data Interchange) standards, according to the UNeDocs Web site. EDI is expensive and use has generally been reserved for large enterprises. XML is going to make electronic exchange of business documents between companies cheaper with tools like InfoPath to support it, said Bobby Moore, product manager for InfoPath at Microsoft. The UN estimates that paper-based trade procedures cost about 10 percent of the value of exchanged goods. In 2000 that would have been 10 percent of $5.5 trillion in international trade, according to the UNeDocs Web site. The UN is drafting the XML electronic trade documents to create documents that can be understood and accepted internationally. Many of the alternatives that have been created by private parties are country or sector-specific, according to the UNeDocs Web site. In addition to the trade documents, the UNeDocs team also created online services for UNeDocs users. One service allows users to get the latest international trade, currency and country codes, for example. Another service converts a UNeDocs XML document into a document that can be viewed through a standard Web Browser. 
InfoPath is a new Microsoft information gathering application that can save data natively in XML. It is part of Microsoft's Office 2003 suite of productivity applications, which is currently in beta and planned to be commercially available in the second half of the year, Microsoft has said..." See related information in "UNeDocs and InfoPath" in the news story "Microsoft and ACORD Use InfoPath for Linking Insurance Forms to XML Web Services."

  • [May 21, 2003] "W3C Makes Patent Ban Final." By Paul Festa and Lisa M. Bowman. In CNET News.com (May 21, 2003). "The Web's leading standards body on Tuesday finalized a patent policy banning the use of most royalty-bearing technology in its technical recommendations, an issue that sparked a clash between open-source advocates and software makers. The Royalty-Free Patent Policy, announced by the Patent Policy Working Group of the World Wide Web Consortium (W3C), has changed little from a draft released eight weeks ago. In shutting fee-bearing patents out of standards development in all but exceptional cases, it marks a compromise between open-source advocates and proprietary software companies. Patents have been a flashpoint in a battle between the open-source community and proprietary software companies. Some proprietary software makers cash in on large patent portfolios by requiring licensing fees and may be reluctant to give away the rights to intellectual property after investing time and money creating the technology. On the other hand, many in the open-source community believe patents impede the development process and can clog the adoption of standards... Bruce Perens, a prominent patent foe and a participant in the W3C's deliberations, applauded the move, while warning that the consortium had left its process vulnerable to 'submarine' patents. 'It's not bulletproof,' Perens said in an interview with CNET News.com Wednesday. 'But it's an improvement.' Perens -- a cofounder of the Open Source Initiative who recently received a $50,000 grant to fund his antipatent activism -- said that while the W3C patent policy represented a victory for patent-free standards and the open-source software development projects that rely on them, it also left the standards-setting process vulnerable in many ways. One such threat is the so-called submarine patent, which is a patent filed, but not granted, at the time a W3C technical recommendation is under construction. 
'(Patent holders) don't tell anyone about (the patent), and it becomes granted, and then it's the first time we can see it,' Perens said. 'We will try to watch out for people's patents. But patent searches are rarely conclusive, because patents are so poorly descriptive.' The major variable in the patent-policy process, according to Perens, is identifying when royalty-encumbered technology has been 'knowingly,' or intentionally, submitted for standards consideration by a company. The W3C working group member representing that company may be ignorant of the royalty -- or may be kept ignorant by design... IBM, in the awkward position of being between the wealth of its patent portfolio and the demands of open-source developers on whom it relies, declined to say how it had voted on the proposal. But an IBM representative said the company supported the finished policy. 'IBM believes that the resulting patent policy is well-developed and defined, and is appropriate to attend to the needs of Web infrastructure vendors and users alike,' representative Scott Cook wrote in an e-mail interview..." See: (1) details in "W3C Approves Patent Policy Supporting Development of Royalty-Free Web Standards"; (2) general references in "Patents and Open Standards."

  • [May 21, 2003] "BEA Gains Ally in Battle With IBM." By Martin LaMonica. In CNET News.com (May 21, 2003). "Software maker BEA Systems on Wednesday got a boost in its ongoing application server battle with IBM through a bundling deal with Hewlett-Packard. HP said it will install BEA's WebLogic Java application server software on all of its hardware servers. Application server software is used to build and run custom business applications, such as corporate Web sites or order management systems. Through the deal, BEA will gain the distribution channel of HP's hardware systems, which could help the software maker ward off competition from IBM. Big Blue took the lead in the Java application server market away from BEA in 2002, according to research firm Gartner Dataquest. Analysts say IBM has an advantage over its rivals because it can bundle its own WebSphere Java software with its hardware servers at a discount. By becoming the default Java application server on HP hardware, BEA greatly expands its distribution on servers ranging from low-cost Linux machines to high-end HP NonStop systems. The latest bundling arrangement extends an existing partnership between HP and BEA, through which HP sells WebLogic with its Unix servers. Under the deal announced Wednesday, HP will also deliver WebLogic with its ProLiant Linux servers and its AlphaServer systems that run the OpenVMS operating system. In June, HP will bundle WebLogic with its HP NonStop servers as well. HP and BEA will provide post-sales support for WebLogic on all of HP's server hardware as well..."

  • [May 21, 2003] "W3C Announces Royalty-Free Patent Policy." By Edd Dumbill. From xmlhack.com (May 21, 2003). "In a significant step to protect the freedom to implement Web standards the W3C has announced the publication of its patent policy. After long debate, the royalty-free policy has been implemented as a result of widespread consensus. In his Director's Decision -- the first time that such a decision has been made public outside of the W3C -- Tim Berners-Lee writes: 'The Policy affirms and strengthens the basic business model that has driven innovation on the Web from its inception. The availability of an interoperable, unencumbered Web infrastructure provides an expanding foundation for innovative applications, profitable commerce, and the free flow of information and ideas on a commercial and non-commercial basis.' The effect of the patent policy is that all who participate in developing a W3C Recommendation must agree to license patents that block interoperability on a royalty-free basis. Certain patent claims may be excluded from the royalty-free commitment by members -- after considerable deliberation, and with substantial consensus among those involved and the W3C membership. These must also be disclosed early in the development process so they do not jeopardize a specification once it is developed. Patent disclosures are required from W3C members, and requested of anyone who sees the technical drafts and has actual knowledge of patents which affect a specification. Tim Berners-Lee announced the release of this patent policy during his keynote address at WWW2003, and was met with applause from the audience... In a press conference given at WWW2003, Daniel Weitzner of the W3C described how the new patent policy finally formalizes what used to be the implicitly accepted principle that web technology should be freely implementable. 
That implicit understanding was challenged five years ago, said Weitzner, particularly in the Platform for Privacy Preferences Working Group, which was held up for a year by patent-related uncertainty. The patent policy is designed to prevent these problems from recurring, including as it does processes for resolving patent-related issues..." See: (1) details in "W3C Approves Patent Policy Supporting Development of Royalty-Free Web Standards"; (2) general references in "Patents and Open Standards."

  • [May 20, 2003] "The Center of the Universe." By Ken North. In Intelligent Enterprise Volume 6, Number 9 (May 31, 2003). ['XML, Web services, analytics, and other hot technologies have the leading relational DBMS providers working overtime to remain the best choice for managing all of your data. Here's a look at what IBM, Microsoft, and Oracle are doing.'] "Whatever form software takes in the next decade, databases will continue as the primary tool for managing data. DBMSs from rivals IBM, Microsoft, and Oracle will provide persistent data management for Web services, embedded applications, Web stores, grid services, and other software. But SQL DBMS products will increasingly be judged on how well they support traditional tasks (such as transaction processing) while evolving to provide new capabilities (such as integrated business analytics). The latest releases of data management software from the big three vendors unite SQL with multidimensional and document-centric (XML) data and grid computing. Whether an organization follows a best-of-breed approach or taps a single vendor to build an IT infrastructure, problems can arise with interoperability, data aggregation, and data and application integration. That's why XML, XML-based messaging, XML-enabled databases, and Web services have become increasingly important. But XML is only one of the fields on which the database software giants are competing. Although there are now fewer SQL database vendors than a decade ago, competition remains fierce among IBM, Oracle, and Microsoft. Each company tries to gain an edge over the others by complementing their database platforms with broad-spectrum software offerings such as vertical market applications and developer tools. The DBMS products from each of these vendors provide parallel processing, extensible servers, online analytic processing (OLAP), tight integration with messaging software, and support for XML and Web services. 
The products diverge when it comes to programming database server plug-ins, querying multidimensional data sets, persisting message queues, orchestrating the flow of Web services, and processing audio, video, and other rich data types. This overview of the different strategies vendors are following sheds light on their plans for developing technologies to extend the SQL DBMS to handle business intelligence (BI), XML, Web services, and grid requirements..." See also Ken North's interviews with: [1] Rob High and Nelson Mattos of IBM; [2] Andrew Mendelsohn of Oracle; [3] Jim Gray and Michael Rys of Microsoft. General references in "XML and Databases."

  • [May 20, 2003] "Data As It Happens." By Mark Madsen. In Intelligent Enterprise Volume 6, Number 9 (May 31, 2003). ['Data in real time, all the time, is what many enterprises want. To make it happen, IT needs to get the big picture -- and not burn out on one-off solutions for single applications.'] "You can deliver data in real time from one application to another without too much effort. It's far more challenging to create a real-time data-delivery infrastructure that allows an enterprise to easily integrate applications on a repeatable basis, and yet is also flexible enough to accommodate change. Unfortunately, IT has historically built or bought applications based on what specific business units required. While the systems often deliver the functionality the business units wanted, a great deal of time is spent tying together the menagerie of systems. Integration starts to take up a greater and greater share of IT's budget... The best way to begin sorting out something as complex as integration infrastructure is to work through some conceptually simple models and use them as analogies. Once you settle on a general model and its components, you're closer to selecting the technologies for implementation... An interconnection model, or topology, is the highest-level model abstraction, offering a view of how systems will connect with each other. This topological view is important because it articulates how developers think about the organization's integration problems, and gives everyone a critical communication tool for discussions with business management... I here discuss three conceptual models. Remember, conceptual model selection isn't about fitting the model with a particular set of applications and technology, which is unlikely to be perfect anyway. The goal is to settle on a model that best fits the objectives of your organization. 
(1) In the point-to-point topology conceptual model, one application locates and requests data from another and sends it directly to that application; (2) In the information pipeline (or 'bus') model, all applications connect to a bus, which is a common pipeline with a standard interface that dictates communication syntax and protocol; (3) In the hub-and-spoke model, data flows from applications at the ends of the spokes into a central hub, where it's routed out to applications that request it... Conceptual and interaction models are the top levels of the architecture. While they describe the design principles and the services provided to an application, they say nothing about the components of the architecture needed to support those services. Infrastructure components are elements from which you build your real-time data architecture. By understanding these, you can make intelligent choices about what technology to use and whether to buy products or build the infrastructure components..." See also "Business Process Management: The Future of Real-Time Data Integration?" -- how business process management fits into the evolution of real-time data integration.
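The third topology can be made concrete with a minimal hub-and-spoke sketch in Python. The class and method names are invented for illustration; real integration hubs add routing rules, transformation, persistence, and delivery guarantees on top of this fan-out core:

```python
from collections import defaultdict
from typing import Callable, Dict, List

class Hub:
    """Minimal hub-and-spoke router: applications publish messages to
    the hub by topic, and the hub fans them out to every subscriber.
    Spokes never talk to each other directly, so wiring in a new
    application costs one connection to the hub instead of the N
    point-to-point links the first topology would require."""
    def __init__(self):
        self._subscribers: Dict[str, List[Callable]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> int:
        """Deliver to all subscribers; return how many received it."""
        for handler in self._subscribers[topic]:
            handler(message)
        return len(self._subscribers[topic])

hub = Hub()
received = []
hub.subscribe("orders", received.append)   # e.g., a CRM spoke
hub.subscribe("orders", lambda m: None)    # e.g., a warehouse spoke
delivered = hub.publish("orders", {"id": 42, "sku": "A-100"})
print(delivered, received)
```

The bus model differs only in where the contract lives: here the hub owns routing, whereas on a bus every application conforms to the pipeline's shared syntax and protocol.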

  • [May 20, 2003] "Microsoft Unwraps New Visio Tool. Office Visio 2003 Improves Users' Access to Server-Based Business Data." By Ed Scannell. In InfoWorld (May 20, 2003). "Microsoft has unveiled a new version of its Visio diagramming tool that aims to enable corporate users to take better advantage of traditional Office desktop applications and also connect desktop users with server-based line-of-business data. The newly named Microsoft Office Visio 2003, which will be a more integral part of Microsoft's Office System, will be aimed not just at the product's core technical users but at IT professionals and business users looking to use data more effectively and reduce the 'manual labor' involved in pulling together information from multiple sources. 'With this release we want to broaden the usage among business users by communicating more clearly to them what the value proposition is. We think it can convince some people to widen its usage and make it a more relevant tool for the enterprise,' said Jason Bunge, Visio's product manager based in Redmond, Wash. Microsoft has four goals with Visio 2003, according to Bunge, including getting customers to use it more as a smart client in order to consume and deliver Web services, as a process management tool that can quickly document and map out how users conduct business, to better leverage a number of IT assets and investments, and to achieve greater worker productivity. For example, whereas an HR manager might use the existing product to create and view a straightforward organizational chart to determine an employee's position and title, with the new version managers can access server-based data, such as who is leaving the company and when, view salaries and performances of individuals, and compare that with real-time data. 
'Through Web services and XML people can now take an org chart and tie it to back-end systems and pull up last month's review scores for a certain division, and maybe color code them so you can see quickly who are the strongest and weakest performers,' Bunge said. As one example of Visio being able to better leverage Office applications, Bunge said users could access a timeline within Microsoft Project and then share it with anyone on their team inside or outside the company or present a progress report to senior management..." See the announcement "Microsoft Office Visio 2003 to Provide Customers With New Tools to Visualize Their Business. Customers and Partners Excited About the Possibilities Microsoft Office Visio 2003 Offers as a Smart Client to Capture Data in Real Time and Enable Organizations to Be More Productive."

  • [May 20, 2003] "DARPA Pads Semantic Web Contract." By Michael Singer. From Internet.com News (May 20, 2003). "Web software developer Teknowledge Tuesday said it has won an extended government contract to help build an evolving version of the World Wide Web that centers on the meaning of words. The Palo Alto, Calif.-based firm said the Defense Advanced Research Projects Agency (DARPA) has added $634,057 to its now $1.7 million budget to help build the DARPA Agent Markup Language (DAML). The contract is centered on developing an Agent Semantic Communication Service, which lets users access relevant data through an XML framework. DARPA calls it the 'Semantic Web'. For example, when you tell a person something, he can combine the new fact with an old one and tell you something new. When you tell a computer something in XML, it may be able to tell you something new in response, but only because of some other software it has that's not part of the XML spec. DARPA is using the new language to create a program that assigns similar semantics to a 'subProperty' tag. DARPA, which was responsible for funding much of the development of the Internet we know today, has been working on the DAML spec since August 2000 as a way to augment the Web and improve data mining. Currently, the agency is working with the W3C through various working groups to implement it. Teknowledge says the open source framework will be one of the tools that drives the Semantic Web from research vision to practical reality... The technology is steeped in Teknowledge's Suggested Upper Merged Ontology (SUMO) and the company says it has mapped it to over 100,000 word senses in the WordNet natural language lexicon. The company says its research helps give technical users a comprehensive language for asking precise questions. 
'Our software allows users to get answers to questions that have been precisely defined, rather than just a sampling of documents that contain relevant keywords, as in most current Web searches,' said Teknowledge Director of Knowledge Systems Adam Pease. 'We can also perform inference during search that allows users to get answers that are not literally on the Web, but must be inferred by the computer software.' Pease said the next step in this process is to provide a simplified language to support non-technical end users..." General references in: (1) "DARPA Agent Mark Up Language (DAML)" and (2) "Markup Languages and Semantics."
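The 'subProperty' mechanism mentioned above comes from RDF Schema, on which DAML builds: declaring one property a specialization of another licenses exactly the kind of inference Pease describes. A minimal illustrative fragment (the property names here are invented for illustration, not taken from the DAML work itself):

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
  <!-- hasMother is declared a specialization of hasParent, so any
       hasMother statement also implies a hasParent statement -->
  <rdf:Property rdf:ID="hasParent"/>
  <rdf:Property rdf:ID="hasMother">
    <rdfs:subPropertyOf rdf:resource="#hasParent"/>
  </rdf:Property>
</rdf:RDF>
```

Given this schema, an agent told only that Alice hasMother Beth can answer a query about Alice's parents — an answer "not literally on the Web, but inferred."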

  • [May 20, 2003] "Web Services Security and More, Part 2: The Global XML Web Services Architecture (GXA)." By Joseph M. Chiusano (Booz Allen Hamilton). From Developer.com (May 20, 2003). In Part 1 Chiusano provided highlights from the majority of the GXA specifications that were released up to (and including) December 2002 [WS-Security, WS-Policy, WS-SecurityPolicy, WS-PolicyAssertions, WS-PolicyAttachment, WS-Trust, WS-Routing, WS-Referral, WS-Transaction]. In Part 2 the author covers the remaining GXA specifications: (1) WS-SecureConversation, which builds on WS-Security to allow the establishment of a security context for a message exchange; (2) WS-Inspection, the Web Services Inspection Language which allows a site to be inspected for service offerings regardless of the format of the description documents for these service offerings; (3) WS-ReliableMessaging for reliable message delivery; (4) WS-Addressing, which defines a transport-neutral mechanism for identifying Web service endpoints and securing end-to-end endpoint identification in messages. "GXA is an application-level protocol framework built on top of XML and SOAP that provides a consistent model for building infrastructure-level protocols for Web services and applications. By doing so, GXA 'fills the gap' in the current Web services stack. The GXA specifications can be grouped in seven main 'concentrations': security, policy/trust, routing, coordination, federation, inspection, and messaging... GXA is poised to play a major role in advancing the adoption of Web services through its robust specification of mechanisms for Web services such as security, policy, coordination, federation, and routing. More specifications will be forthcoming for areas such as privacy, federation, and authorization..."

  • [May 20, 2003] "Business Process with BPEL4WS: Learning BPEL4WS, Part 8. Using Switch, Pick, and Compensate." By Rania Khalaf and Nirmal Mukhi (Software Engineers, IBM TJ Watson Research Center). From IBM developerWorks, Web services. May 2003. ['This article illustrates the use of three more BPEL activities: switch, pick, and compensate. In addition to showing how you can branch on conditionals using <switch>, we show how you can use <pick> to branch based on incoming messages or timeouts. A simple explicit compensation example is also presented to show how committed actions may later be undone.'] "In the previous articles we took a simple flow that can invoke one Web service, and added another partner, control logic through links and conditions, data manipulation using assignment, nested activities and scopes, and finally correlation sets and fault handlers. In this article, we take our loan approval example and present three different scenarios showing the use of the compound activities <switch> and <pick>, as well as a simple compensation handler... In these scenarios, we have brought together most of the main concepts of BPEL4WS. As the last tutorial-type article in this series, it provides you with a better feel for how to bring activities together to create simple as well as complex processes. Feel free to do what we have done here to further your understanding of the language: take the resulting .bpel file and play around with the syntax to try out the different constructs in the language and see how they affect the flow of control as well as the complexity of what you would like to express in your Web services compositions. You can use the Eclipse editor available with BPWS4J on alphaWorks (see Resources), and graphically substitute an activity for another or change its properties without having to worry about the rest of the process. For example, you can take a nested activity and request that it be wrapped in a scope and then add handlers to it..." 
[Note: this article refers to BPEL4WS v1.0, not to v1.1.] Article also available in PDF format. See the previous parts in the series. General references in "Business Process Execution Language for Web Services (BPEL4WS)."
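The <pick> activity the article describes can be sketched roughly as follows. This is an illustrative fragment only: the partner, portType, operation, and container names are invented, namespace declarations are omitted, and the syntax follows BPEL4WS 1.0 (where variables are called containers):

```xml
<!-- Wait for the customer's answer, but give up after one day -->
<pick>
  <onMessage partner="customer" portType="lns:loanPT"
             operation="answer" container="response">
    <reply partner="customer" portType="lns:loanPT"
           operation="answer" container="response"/>
  </onMessage>
  <!-- onAlarm fires if no message arrives within the duration -->
  <onAlarm for="'P1D'">
    <terminate/>
  </onAlarm>
</pick>
```

Whichever branch triggers first — the incoming message or the alarm — determines the path the process takes, which is the timeout-based branching the article demonstrates.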

  • [May 20, 2003] "OEBPS: The Universal Consumer eBook Format?" By Jon Noring. In eBookWeb (May 19, 2003). ['In the article, I outline seven requirements a universal consumer ebook format must fulfill (one of which is compatibility with an XML-based publishing workflow), and show that OEBPS adequately fulfills those requirements. The XML conformance of OEBPS certainly plays a fundamental role in the attractiveness of OEBPS as a universal ebook format, as noted several times in the article. Your thoughts and criticisms are welcome.'] "Looking at the ebook landscape today, I am troubled by the large and growing number of essentially incompatible, proprietary consumer ebook formats and associated ebook reading applications and hardware... Publishers, both large and small, are now overwhelmed by the need to supply their content to end-users in these formats, many of which do not integrate well into their publishing workflow... Likewise, end-users are equally confused by the myriad formats, and chagrined by the incompatibility between them, making it more difficult to use multiple devices, OS, and reading software of their choice. End-users clearly do not wish to be tied to any one hardware or software platform for the ebooks they purchase -- they want their ebooks to be optimally readable on the systems of their choice, now and into the future... Is a single, universal consumer ebook format possible, one which meets nearly all the needs of both publishers and end-users? This article presents a vision for such a universal consumer ebook format, to outline the important requirements, and demonstrate that, yes, there now exists just such a format meeting these requirements: The Open eBook Publication Structure (OEBPS)... 
The OEBPS Specification is maintained by the Open eBook Forum, a non-profit and independent ebook standards and trade organization representing a large number of companies and organizations with quite diverse (and oftentimes competing) interests in the ebook universe... The OEBPS Specification specifies a coherent, ebook-optimized framework for organizing XML documents containing book content into a powerful ebook representation of the work... The word framework is especially important, because without an overarching framework it is not possible to adequately represent the richness and specific intricacies of book publications using a simple collection of independent hypertext-linked XML documents. Three distinct quantities in the OEBPS universe must be defined: OEBPS Publication, OEBPS Package, and OEBPS Document. An OEBPS Publication is the complete set of files comprising an ebook publication conforming to the OEBPS Specification. An OEBPS Publication must include one OEBPS Package document (which is an XML document, not part of the book content itself, describing the Publication's organizational framework), and at least one OEBPS Document (which is an XML document containing part or all of the book's actual content.) Other auxiliary files, such as images, style sheets, etc., may also be present in the OEBPS Publication..." See "Open Ebook Initiative."
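The Package/Document distinction described above can be sketched as a skeletal OEBPS package document. This is a hedged illustration: file names and the identifier are invented, and namespace and DTD declarations are abbreviated, though the element names follow the OEBPS 1.x package structure:

```xml
<package unique-identifier="bookid">
  <metadata>
    <dc-metadata xmlns:dc="http://purl.org/dc/elements/1.0/">
      <dc:Title>An Example Publication</dc:Title>
      <dc:Identifier id="bookid">urn:example-0001</dc:Identifier>
    </dc-metadata>
  </metadata>
  <!-- The manifest lists every file in the Publication -->
  <manifest>
    <item id="chap1" href="chapter1.html"
          media-type="text/x-oeb1-document"/>
  </manifest>
  <!-- The spine gives the default linear reading order -->
  <spine>
    <itemref idref="chap1"/>
  </spine>
</package>
```

The package document is the "organizational framework" Noring emphasizes: it names the content documents (the manifest) and orders them (the spine), rather than carrying book content itself.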

  • [May 20, 2003] "W3C Readies New Tech Patent Policy. It Hopes to Stop Vendors' Patent Claims from Slowing Web Standards Work." By Carol Sliwa. In Computerworld (May 19, 2003). "The World Wide Web Consortium (W3C) is poised to unveil a formal policy for dealing with technology patents that have the potential to block the development of interoperable Web standards. Tim Berners-Lee, director of the Cambridge, Mass.-based W3C, said a decision on the patent policy is due to be announced 'very shortly,' now that the organization's management team has reviewed feedback collected during a public comment period that ended April 30, 2003... A W3C working group has spent more than three years developing a precisely defined patent policy to replace the 'minimalistic' and 'very loose' provisions that currently require members who know of patent claims relevant to ongoing standards work to disclose them, said Daniel Weitzner, chairman of W3C's patent policy working group. Weitzner said the policy drafted by the working group reflects the 'overwhelming goal' of producing standards that can be implemented royalty-free. But the group also included an exception provision that will make it possible for members to consider alternate licensing terms when it's deemed impossible to meet the royalty-free goal, he said... The need to establish a formal policy became apparent as some patent holders started to assert claims to technology being used as part of proposed Web standards. One notable case involved a claim from Seattle-based Intermind Corp., now known as OneName Corp., that the W3C's Platform for Privacy Preferences might infringe on a patent it held. More recently, the W3C's patent policy became a hot topic of discussion among some W3C members who have speculated why IBM, Microsoft Corp. and other vendors have been submitting some key Web services standards proposals to the Organization for the Advancement of Structured Information Standards (OASIS) instead of the W3C... 
Weitzner said the exception provision 'was an effort to address the concerns of a number of members.' He added that the proposed exception procedure was designed to be hard to use 'because the working group didn't want it to be used often'..." See: (1) details in "W3C Approves Patent Policy Supporting Development of Royalty-Free Web Standards"; (2) general references in "Patents and Open Standards."

  • [May 20, 2003] "OASIS OKs UDDI 2. Next Up, An Expanded 3.0 Version. UDDI is Seen As the Basis of a Standards-Based Web Services Architecture." By Stacy Cowley. In Computerworld (May 20, 2003). "The Organization for the Advancement of Structured Information Standards (OASIS) yesterday said it has ratified Version 2.0 of the UDDI Web services specification, an important step toward finalization of the widely awaited, significantly expanded 3.0 version of the standard. Together with XML, SOAP and the Web Services Description Language, UDDI forms the foundation for the standards-based Web services architecture many vendors are endorsing as a way to link disparate data and applications. Of the four standards, UDDI has been the most sparsely adopted. Security issues have held up wide implementation of the specification for publishing, locating and integrating with Web services, but a number of vendors backing UDDI say features planned for Version 3 will help address those concerns..." See: (1) details in the news story "UDDI Version 2 Approved as an OASIS Open Standard"; (2) General references in "Universal Description, Discovery, and Integration (UDDI)."

  • [May 19, 2003] "Introduction: Digital Rights Management and Fair Use by Design." By Deirdre K. Mulligan (Acting Clinical Professor and Director of the Samuelson Law, Technology, and Public Policy Clinic in Boalt Hall at the University of California, Berkeley). In Communications of the ACM (CACM) Volume 46, Number 4 (April 2003), pages 30-33. Introduction to the special issue by the guest editor. "The fair-use exceptions in U.S. copyright law are being undermined by rules programmed into consumer electronics and computers that reflect the exclusive interest of rights holders alone... That privately constructed rules may circumvent or conflict with societal values and public policy is well known and has many manifestations, many predating the Internet and computers. The question of whose rules should govern and the space in which private rules can constrain or contradict democratically instituted social policies is a long-standing one. The use of, for example, property rights, states' rights, and other proxies for private interests has a long legacy in law and social practice. Today, while the law allows average citizens to time-and-device shift music and movies they own, and the First Amendment of the U.S. Constitution allows them to engage in parody, the medium of delivery or device may independently limit their ability to do so. Such default limitations arise in part because the security model underlying DRM architecture is a poor fit for modeling copyright policy. DRM architecture, which is based on binary permit/deny schemas, envisions copyright holders unilaterally setting the terms under which their products are used. Copyright law is, however, multidirectional..." See related articles in "ACM Publishes CACM Special Issue on Digital Rights Management and Fair Use." General references in "XML and Digital Rights Management (DRM)."

  • [May 19, 2003] "Fair Use, DRM, and Trusted Computing." By John S. Erickson (Principal Scientist, Digital Media Systems Program, Hewlett-Packard Laboratories, Norwich, VT). In Communications of the ACM (CACM) Volume 46, Number 4 (April 2003), pages 34-39. With 12 references. ['How can DRM architectures protect historical copyright limitations like fair use while ensuring the security and property interests of copyright owners? In this article, Erickson explores DRM architecture and its relation to trusted computing platforms, as well as the disconnect between the security paradigm from which today's DRM systems originate and the exception-riddled, context-laden nature of copyright law. He suggests a DRM architecture that would provide enough space for the exercise of fair use-like rights.'] "The ability of providers to reliably and deterministically impose rules on the end-user experience raises the question of who sets the rules dictating how users interact with digital information on their personal systems. Will the social policies and common practices that have traditionally influenced the copyright process be replaced by rules privately constructed by content owners and software providers? Will they be privately enforced by operating systems and DRM technologies? Conversely, can these emerging architectures help protect the limitations on copyright owners' exclusive rights, preserving the flexible fair use doctrine? Here, I explore how access-control policies are evaluated, especially in the case of two rights expression languages -- the eXtensible rights Markup Language (XrML; see xrml.org) and the eXtensible Access Control Markup Language (XACML; see www.xacml.org). Since the expression and interpretation of policies is but one layer of the general problem of asserting and protecting copyright with computer code, I emphasize the role of trusted systems in ensuring that computing agents interpret policies in reliable and deterministic ways. 
I also weigh the challenges inherent in expressing and enforcing policies that mimic social policies. Engineers often seek to simplify problems, but when the problem involves implementing legal statutes (such as copyright) with executable code, simplifications might actually do damage, especially if the solution gives either party more power to assert control than the law entitles..." See related articles in "ACM Publishes CACM Special Issue on Digital Rights Management and Fair Use." General references in "XML and Digital Rights Management (DRM)."

  • [May 19, 2003] "DRM {and, or, vs.} the Law." By Pamela Samuelson (Chancellor's Professor of Law and Information Management at the University of California at Berkeley and Director of the Berkeley Center for Law & Technology). In Communications of the ACM (CACM) Volume 46, Number 4 (April 2003), pages 41-45. With 7 references. 'The main purpose of DRM is not to prevent copyright infringement but to change consumer expectations about what they are entitled to do with digital content. Samuelson covers the varied relationships between DRM and the law, explaining that DRM provides potentially far more control to copyright holders than the law provides or permits and that, in its current legal interpretation, the Digital Millennium Copyright Act (DMCA) of 1998 provides nearly unlimited protection to DRM. This special status, she writes, creates a risky environment for those who wish to circumvent DRM to exercise historically protected rights to use information. Warning that DRM, whether through technical standards or congressional mandate, threatens to further erode the public side of the copyright balance, she calls on computing professionals to defend general-purpose computing technologies and support legislative consumer-protection measures related to DRM-protected content.' "DRM is sometimes said to be a mechanism for enforcing copyrights. While DRM systems can certainly prevent illegal copying and public distribution of copyrighted works, they can do far more; they can as easily prevent the copying and distribution of public-domain works as copyrighted works. Moreover, even though copyright law confers on copyright owners the right to control only public performances and displays of these works, DRM systems can also be used to control private performances and displays of digital content. DRM systems can thwart the exercise of fair use rights and other copyright privileges. 
DRM can be used to compel users to view content they would prefer to avoid (such as commercials and FBI warning notices), thus exceeding copyright's bounds..." See related articles in "ACM Publishes CACM Special Issue on Digital Rights Management and Fair Use." General references in "XML and Digital Rights Management (DRM)."

  • [May 19, 2003] "Users Still Bullish on Web Services." By John Fontana. In Network World (May 19, 2003). "Galileo International hit a home run last year with a pilot of four Web services designed to help extend the reach of its travel booking and itinerary services. The number of Galileo customers using the services -- now in production mode -- has risen from one to 35, and those customers are clamoring for more. The company, buoyed by cost savings and new business opportunities associated with the project, last week started rolling out Web services globally... Galileo says there has been a bigger effect internally, where the Web services slashed the cost and development time of a dozen new products last year by up to 50%. It has added up to millions of dollars in savings and additional revenue. Reuse of Web services components and increased flexibility are big themes and have touched off development of an SOA. An SOA consists of application components that live as services on a network and can be assembled in infinite combinations. The company has built an authentication and authorization gateway it calls Expo and a service brokering engine that will form one secure entry point for all Web services calls to the GDS, which handles 350 million messages per day and more than 1 billion fare quotes per year. 'We will have different conversion capabilities to go from XML to Java and back again, different brokering technologies, different service-delivery network technologies,' Wiseman says. 'We will need to validate things like interoperability, where we can do logging and so on.' He expects to have the SOA reference implementation done this year and a production model in 2004... The SOA idea also caught the attention of Eastman last year. The Kingston, Tenn., company's Web service debuted last year and lets customers get product information in real time instead of the old way of copying or screen scraping data. 
That service is relatively unchanged, but it has turned on a new light. 'We started to apply Web services to projects internally, and we recognized the need for Web services management as we started to build more complex applications,' says Carroll Pleasant, Eastman's principal technologies analyst. He says security, monitoring, caching, process workflow, failover and a registry of available services became necessary after the company built a couple of applications, including one called Management Scorecard that integrates 23 internal Web services..." [See "How Galileo Web Services Work."] On Galileo, see "Galileo International Launches Web Services Platform Globally Enabling Clients To Build Customized Applications. Travel Industry's First Web Services Solution Leverages Galileo's GDS, Saves Up to 80% in Development Time and Lowers Cost."

  • [May 19, 2003] "WASP Gives Web Services Development Some Sting. WASP Server and Developer Tools Handle Real-World Web Services Management." By Phillip J. Windley. In (May 16, 2003). "... real-world Web service deployments can get messy and require a serious development effort. Getting the job done requires development tools, a cluster of run-time servers, and management tools to configure and operate the services once they're deployed. WASP Server and WASP Developer from Systinet do a nice job of providing these professional-grade tools for creating and deploying Web services in both Java and C++. WASP Server is a full-featured Web services run-time environment. I tested WASP Server 4.5.1 for Java (the C++ server has the same feature set but differs in how it is deployed). Because both Java and C++ versions are available, it is fairly easy to expose any Java or C++ program as a Web service. A Web services run-time server has two primary parts. First, a SOAP message processor parses SOAP messages, including the serializing and deserializing of parameters. And second, a Web service container wraps the business logic for the Web service and provides services such as security, life-cycle management, and resource management. This architecture enables the WASP Server to completely divorce security issues from the business logic, freeing Web services developers from having to write arcane security code and creating an environment where security is configured through an easily managed administrative console. A big selling point for the WASP Server is performance. The server provides load balancing and clustering, and the container architecture allows the server to manage resources such as threads. Systinet also lays claim to a proprietary XML-processing system that manages XML operations as streams rather than batch jobs. This can reduce system resource demand when processing large blocks of XML..."

  • [May 19, 2003] "Microsoft Courting OMG Again." By Darryl K. Taft. In eWEEK (May 12, 2003). "After years of bitter battles with the Object Management Group, Microsoft Corp. may be poised to rejoin the consortium to make use of the OMG's architecture expertise. Unisys Corp. has been brokering a thaw in the relationship between Microsoft and the Needham, Mass., standards body that had become strained while Microsoft was still a member of the group, said sources. The two sides effectively parted ways in the late 1990s over the OMG's support for Common Object Request Broker Architecture, which competed with Microsoft's Component Object Model for a standard distributed computing model. Sources said Microsoft is now warming up to the OMG because the Redmond, Wash., company is delving more into modeling and architecture work, two areas where the OMG holds key specifications and expertise -- in UML (Unified Modeling Language) and MDA (Model Driven Architecture). Microsoft has sponsored two four-day OMG Web services workshops this year, one in Munich, Germany, and one last month in Philadelphia, where Microsoft representatives gave presentations on services-oriented architectures and MDA issues. Unisys, a member of the OMG and a tight Microsoft integration partner, jointly presented with Microsoft. Their presentation was titled 'Microsoft 'Jupiter' and the Unisys MDA Process'... Microsoft supports UML today in its Visual Studio .Net Enterprise Architect edition, and sources said Microsoft plans to support MDA in Jupiter, the code name for its upcoming e-business suite and collaboration portal software. Another source close to Microsoft said the company is becoming more interested in model architectures because of the growing complexity in enterprise systems. The warming toward models and the OMG does not signal a total burying of the hatchet between the organizations. 
For instance, sources said Microsoft does not agree with everything the OMG does with MDA, such as automatic code generation..." [From the MDA website: 'Key standards that make up the MDA suite of standards include Unified Modeling Language (UML); Meta-Object Facility (MOF); XML Meta-Data Interchange (XMI); and Common Warehouse Meta-Model (CWM).'] See: (1) MDA website; (2) "OMG Model Driven Architecture (MDA)"; (3) "XML Metadata Interchange (XMI)"; (4) "OMG Common Warehouse Metadata Interchange (CWMI) Specification."

  • [May 19, 2003] "SCO Signs Pact With Microsoft, Warns Users on Linux Use." By Jack Vaughan and John K. Waters. In Application Development Trends (May 19, 2003). "The SCO Group today said it has licensed its Unix technology, including patent and source code licenses, to Microsoft Corp. SCO claimed in a statement that 'the licensing deal ensures Microsoft's intellectual property compliance across all Microsoft solutions and will better enable Microsoft to ensure compatibility with Unix and Unix services.' The announcement follows Friday's SCO warning to about 1,500 corporate users that unauthorized use of its Unix source code, which the company claims has been illegally incorporated into Linux, could make them liable for violating intellectual property rights... In the letter to corporate Unix users, SCO president and CEO Darl McBride charged that 'Linux is an unauthorized derivative of the Unix operating system ...' and that '... legal liability for the use of Linux may extend to commercial users.' [...] SCO, which now bills itself as 'the owner of the Unix operating system,' has pursued Unix licensing deals before... SCO maintains that it owns all source code, source documentation, software development contracts, licenses and other intellectual property that pertain to Unix-related business originally licensed by AT&T Bell Labs to all Unix distributors, including HP, IBM, Silicon Graphics, Sun Microsystems and many others..."

  • [May 19, 2003] "Berners-Lee: Standards Groups Are 'Very Different Places'. He Discussed How the W3C and OASIS Tackle Web Services Standards." By Carol Sliwa. In Computerworld (May 19, 2003). ['Tim Berners-Lee, director of the W3C, spoke with Computerworld this month about recent moves by technology vendors to submit Web services standards proposals to OASIS instead of his organization.'] TBL: "OASIS and the W3C are very different places altogether. The rules are very different. At the W3C, a lot of that has to do with getting everybody on board, making sure everything is coordinated and trying to get the standard very widely deployed. We require a demonstration of implementation, of interoperability, before something can become a standard. We have public review. We have requirements that the groups be chartered to liaise with groups which have related technology... For core standards for a big new market area, I think it's very important that they are widely accepted by everybody... For an application-level specification or for something like a programming language, you can have three, four, five, eight of those around and mix them together to a certain extent. So in some areas, it doesn't hurt. But for the foundational specs of the Web services architecture, I feel that it's important to have the W3C... I do have concerns [about what's happening with the BPEL specification within OASIS and the W3C's similar standards efforts on choreographing Web services]. I feel that's an area where it's not obvious how things are going to work out for the best, because there's no mechanism for W3C and OASIS to coordinate..."

  • [May 19, 2003] "Implement Secure .NET Web Services with WS-Security." By Klaus Aschenbrenner (Solvion). From DevX.com (May 12, 2003). ['Implement secure .NET Web services by digitally signing, encrypting, and adding security credentials to SOAP messages.'] "Web Services Enhancements 1.0 for Microsoft .NET (WSE) provides the functionality for Microsoft .NET Framework developers to support the latest Web services capabilities. WSE is the cornerstone of the GXA (Global XML Web-Services Architecture), an architecture of proposed Web services standards that Microsoft, IBM, and other companies have written. This article examines the GXA's WS-Security spec and demonstrates how you can use it to implement secure .NET Web services by digitally signing, encrypting, and adding security credentials to SOAP messages. It gives a brief introduction to the WSE and shows how you can program secure Web services using WS-Security. The current WSE release lets you sign and encrypt Web service requests. Its actual implementation is the cornerstone of the GXA architecture, which provides many more proposed standards for Web services infrastructure issues... WSE is implemented as a SOAP extension and therefore must be registered within the web.config file of your Web service. To accommodate this task, the web.config file contains the <soapExtensionTypes> element. Within this element you can configure all SOAP extensions that should be available to your Web service at runtime..."
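The web.config registration the article describes looks roughly like the sketch below. Treat the details as assumptions: the assembly-qualified type name is abbreviated, and the Version, Culture, and PublicKeyToken parts of the strong name required by the installed WSE release are omitted:

```xml
<configuration>
  <system.web>
    <webServices>
      <!-- Registers the WSE filter pipeline as a SOAP extension so it
           can process security headers on every request and response -->
      <soapExtensionTypes>
        <add type="Microsoft.Web.Services.WebServicesExtension, Microsoft.Web.Services"
             priority="1" group="0"/>
      </soapExtensionTypes>
    </webServices>
  </system.web>
</configuration>
```

Because the extension runs outside the service's own code, signing, encryption, and credential checks can be applied without touching the Web method implementations, which is the separation the article emphasizes.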

  • [May 17, 2003] "XML Group to Help First Responders." By Dibya Sarkar. In Federal Computer Week (May 12, 2003). "A consortium of private- and public-sector organizations, university groups and nonprofit agencies is driving an initiative to create standards for using Extensible Markup Language to help first responders and others communicate and exchange information during emergencies. The group, known as the Emergency Management XML Consortium, expects to submit the first specification to the Organization for the Advancement of Structured Information Standards (OASIS) by year's end. XML eases the exchange of information by tagging data so disparate applications and systems can recognize it. The lack of interoperable equipment has been a concern for many public safety officials for years, but consortium members said the September 11, 2001, terrorist attacks -- when New York City firefighters and police experienced major communications problems with devastating consequences -- spurred them to act... The consortium, launched last October, is composed of 52 members so far. The group includes an 'open tent' executive committee created to provide policy guidance for standards development as well as outreach and education. It also has a technical committee, formally accepted by OASIS, which will actually design and develop the XML schema-based standards. Before being accepted by OASIS, the consortium conducted considerable research about what was being done in the XML arena. Rather than reinventing the wheel, [Matt] Walton said the group wanted to build on what's already been accomplished. Members of the group learned that there is 'not a focused, clearly delineated set of Web standards already under way' for incident management, he said. 'There was actually sufficient justification to look at that as a discrete area within a context of all these other areas.' 
Mark Zimmerman, program manager for the Disaster Management E-Gov Initiative at the Federal Emergency Management Agency, said there's a need to foster communication among parties whether they have a homegrown or commercial product, but any national approach is best done on a voluntary basis and not as a federal mandate to state and local governments... Allen Wyke, the technical committee's leader and chief information officer for Blue292 Inc., a developer of crisis information management software, said he likens the development of emergency management XML standards to building kitchen cabinets... Core areas being researched include unified incident identification, emergency geographic information system data accessibility and usage, notification methods and messaging, situational reporting, source tasking, and asset and resource management. The group will also look at issues surrounding financial tracking and public health as it relates to emergency management. Initially, subcommittees will focus on GIS data, messaging and infrastructure... The consortium has set forth an aggressive timetable with the goal of having the first draft of an XML-based standard by the end of June [2003]..." See: (1) "OASIS Emergency Management TC"; (2) general references in "XML and Emergency Management."

  • [May 16, 2003] "Adding SALT to HTML." By Simon Tang. From XML.com (May 14, 2003). ['Add speech to your applications with SALT, Speech Application Language Tags. The author introduces multimodal XML technology, specifically SALT; using Microsoft's .NET Speech SDK, developers should be able to add SALT elements to HTML web pages.'] "Wireless applications are limited by their small device screens and cumbersome input methods. Consequently, many users are frustrated in their attempts to use these devices. Speech can help overcome these problems, as it is the most natural way for humans to communicate. Speech technologies enable us to communicate with applications by using our voice... Both wireless and speech applications have their benefits but also their limitations. Multimodal technologies attempt to leverage their respective strengths while mitigating their weaknesses. Using multimodal technologies, users can interact with applications in a variety of ways. They can provide input through speech, keyboard, keypad, touch-screen or mouse and receive output in the form of audio, video, text, or graphics... The SALT Forum is a group of vendors creating multimodal specifications. It was formed in 2001 by Cisco, Comverse, Intel, Microsoft, Philips and SpeechWorks. They created the first version of the Speech Application Language Tags (SALT) specification as a standard for developing multimodal applications. In July 2002, the SALT specification was contributed to the W3C's Multimodal Interaction Activity (MMI). W3C MMI has published a number of related drafts, which are available for public review. The main objective of SALT is to create a royalty-free, platform-independent standard for creating multimodal applications. A whitepaper published by the SALT Forum further defines six design principles of SALT. 
(1) Clean integration of speech with web pages; (2) Separation of the speech interface from business logic and data; (3) Power and flexibility of programming model; (4) Reuse existing standards for grammar, speech output, and semantic results; (5) Support a range of devices; (6) Minimal cost of authoring across modes and devices. The first five principles above result in minimizing the cost of developing, deploying and executing SALT applications. A number of vendors, including HeyAnita, Intervoice, MayWeHelp.com, Microsoft, Philips, SandCherry and Kirusa, SpeechWorks, and VoiceWeb Solutions, have announced products, tools, and platforms that support SALT. There is also an open source project, OpenSALT, in the works to develop a SALT 1.0-compliant browser... Before diving into experimenting with HTML and SALT, we need to set up the appropriate development environment. I am going to use Microsoft's .NET Speech SDK 1.0..." General references in "Speech Application Language Tags (SALT)."

  • [May 16, 2003] "Using libxml in Python." By Uche Ogbuji. From XML.com (May 14, 2003). ['Uche Ogbuji introduces libxml and its Python bindings.'] "The GNOME project, an open source umbrella project like Apache and KDE, has spawned several useful subprojects. A few years ago the increase of interest in XML processing in GNOME led to the development of a base XML processing library and, subsequently, an XSLT library, both of which are written in C, the foundational language of GNOME. These libraries, libxml and libxslt, are popular not only with users of C, but also with users of the many other languages for which wrappers have been written, as well as with language-agnostic users who want good command-line tools. libxml and libxslt are popular because of their speed, active development, and coverage of many XML specifications with close attention to conformance. They are also available on many platforms. Daniel Veillard is the lead developer of these libraries as well as their Python bindings. He participates on the XML-SIG and has pledged perpetual support for the Python bindings; however, as the documentation says, 'the Python interface [has] not yet reached the maturity of the C API.' In this article I'll introduce the Python libxml bindings, which I refer to as Python/libxml. In particular I introduce libxml2. I am using Red Hat 9.0, so installation was a simple matter of installing RPMs from the distribution disk or elsewhere. The two pertinent RPMs in my case are libxml2-2.5.4-1 and libxml2-python-2.5.4-1. The libxml web page offers installation instructions for users of other distributions or platforms, including Windows and Mac OS X... libxml offers a SAX API, both through the low-level API and through the bundled drv_libxml2.py, a libxml driver for the SAX framework that comes with Python and PyXML. libxml supports W3C XML Schema, RELAX NG, OASIS catalogs, XInclude, XML Base, and more. There are also extensive features for manipulating XML documents. 
I hope to cover these other features of this rich library in subsequent articles..." General references in "XML and Python."
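The SAX API mentioned above plugs into Python's standard xml.sax framework: drv_libxml2.py is a driver for that framework, so a handler written against the stdlib interface can also be driven by libxml's parser. A minimal sketch, using only the stdlib parser so it runs without libxml2-python installed:

```python
# Sketch: a SAX ContentHandler written against Python's standard xml.sax
# interface. With libxml2-python installed, the same handler could run on
# libxml's drv_libxml2 driver; here the stdlib parser keeps the example
# self-contained.
import xml.sax

class ElementCounter(xml.sax.ContentHandler):
    """Counts how many times each element name occurs."""
    def __init__(self):
        super().__init__()
        self.counts = {}

    def startElement(self, name, attrs):
        self.counts[name] = self.counts.get(name, 0) + 1

def count_elements(xml_text):
    handler = ElementCounter()
    xml.sax.parseString(xml_text.encode("utf-8"), handler)
    return handler.counts

counts = count_elements("<doc><item/><item/><note/></doc>")
print(counts)  # {'doc': 1, 'item': 2, 'note': 1}
```

The same handler class works unchanged with any SAX-compliant driver, which is the point of libxml exposing this API.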

  • [May 16, 2003] "Interactive Web Applications with XQuery." By Ivelin Ivanov. From XML.com (May 14, 2003). ['Ivelin Ivanov on using XQuery to front-end Amazon web services with HTML.'] "In this installment of Practical XQuery I continue to discuss practical uses of XQuery, focusing this time on interactive web page generation. The sample application will display a list of books; based on user input, it will also display detailed information about the selected title. The example exercises an interesting capability of the QEXO XQuery implementation, which is open source software. It allows front-end web applications with custom look and feel to be written very easily, using business logic from remote web services. Amazon.com allows anyone to register and build a collection of favorite titles. The collection can be seen either directly via HTML pages on the web site or accessed via a REST-style web service. In the latter case the XML response contains information about book titles, book image icons, and unique book identifiers. The identifier can be used to access another web service offered by Amazon, which supplies book details, including price, user rating, and so on. To create the example application, I will write two XQuery programs. The first, favbooks.xql, will provide a custom view of my favorite books. The second, bookdetails.xql, will show the details of a chosen book. To get a taste for what is coming, you can play with this example application..." See the open source Qexo: The GNU Kawa implementation of XQuery. General references in "XML and Query Languages."
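The favbooks.xql program described above renders an XML listing as an HTML view. The same transformation can be sketched in Python rather than XQuery; the element names below are invented for illustration and are not Amazon's actual response schema:

```python
# Illustrative only: turn an XML listing of books (hypothetical element
# names, not Amazon's real REST response format) into an HTML list, the
# way favbooks.xql does on the server side. Each entry links to a detail
# page keyed by the book id, as bookdetails.xql would be.
import xml.etree.ElementTree as ET

SAMPLE_RESPONSE = """
<books>
  <book id="0596002920"><title>XML in a Nutshell</title></book>
  <book id="0596002378"><title>Learning XSLT</title></book>
</books>
"""

def books_to_html(xml_text):
    root = ET.fromstring(xml_text)
    items = []
    for book in root.findall("book"):
        items.append('<li><a href="bookdetails.xql?id=%s">%s</a></li>'
                     % (book.get("id"), book.findtext("title")))
    return "<ul>%s</ul>" % "".join(items)

print(books_to_html(SAMPLE_RESPONSE))
```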

  • [May 16, 2003] "Getting Web Services Ready for Business." By Alan Kotok. From WebServices.org (May 12, 2003). ['Report from XML Europe 2003: documents good, interoperable documents better, secure interoperable documents best.'] "In a London bathed in bright sunshine, the XML Europe 2003 conference held 5-8 May 2003 offered its annual snapshot of the XML landscape in Europe, with Web services one of the hotter topics. As in most XML conferences put on by IDEAlliance (and its predecessor Graphic Communications Association), the program offered its fair share of technical discussions and publishing applications. Yet this year's event gave the unmistakable impression that XML in general and Web services in particular were ready for business or at least ready to talk business. But are Web services in 2003 ready to do business? Three presentations at XML Europe discussed various aspects of this question, looking at the emergence of new standards to make electronic business possible for millions of smaller enterprises, tools for building security into business Web services, and a difficult internal integration case study, with an elegant Web services solution... (1) Jon Bosak of Sun Microsystems in his opening keynote said the XML industry has finally started getting the ubiquitous and inexpensive tools for e-business promised since the early days of XML. Bosak co-chaired the W3C committee that developed the first XML specifications. He now chairs OASIS's Universal Business Language (UBL) committee... Up to now, said Bosak, exchanging electronic business documents meant using electronic data interchange (EDI), which may work efficiently but was often out of reach for smaller companies... But three recent developments are about to change the equation for small companies, according to Bosak, and in a big way. First, businesses will soon have a single (and royalty-free) tag set for business documents, provided by UBL. 
(2) Companies looking to make an impact with Web services can begin by applying Web services to the integration of their internal systems. The UN's Food and Agriculture Organization (FAO), as reported at XML Europe, faced an interoperability and business challenge probably matched by few companies or organizations. As described by John Chelsom of CSW Group, technology consultants to FAO, Web services not only made an interoperable solution possible, but an elegant solution at that. (3) Jorgen Thelin of Cape Clear Software discussed the concepts of identity and security in Web services. Thelin argued that in a Web services environment, security needs to be as interoperable as the business context in which it is used, and he talked about various Web services standards and specifications built for this purpose..."

  • [May 16, 2003] "How ASP.NET Web Services Work." By Aaron Skonnard (DevelopMentor). In Microsoft MSDN Library (May 2003). Covers Microsoft ASP.NET Web Services Methods, SOAP Messaging, XML Schema, and HTTP-based Web Services. ['Summary: See how Microsoft ASP.NET Web services methods (WebMethods) provide a high-productivity approach to building Web services. WebMethods can expose traditional Microsoft .NET methods as Web service operations that support HTTP, XML, XML Schema, SOAP, and WSDL. The WebMethod (.asmx) handler automatically dispatches incoming SOAP messages to the appropriate method and automatically serializes the incoming XML elements into corresponding .NET objects.'] "The ASP.NET WebMethods framework provides a high-productivity approach to building Web services. WebMethods make it possible to expose traditional .NET methods as Web service operations that support HTTP, XML, XML Schema, SOAP, and WSDL. The WebMethod (.asmx) handler automatically figures out how to dispatch incoming SOAP messages to the appropriate method, at which point it automatically serializes the incoming XML elements into corresponding .NET objects. And to simplify integrating clients, the .asmx handler also provides automatic support for generating both human-readable (HTML) and machine-readable (WSDL) documentation. Although the WebMethods framework can be somewhat restrictive compared to custom IHttpHandlers, it also provides a powerful extensibility model known as the SOAP extension framework. SOAP extensions allow you to introduce additional functionality beyond what we've discussed here to meet your precise needs. As an example, Microsoft released the Web Services Enhancements 1.0 for Microsoft .NET (WSE), which simply provides a SoapExtension class that introduces support for several GXA specifications to the WebMethods framework..."
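The dispatch behavior described above can be illustrated in miniature: take the first child of the SOAP Body, map its local name to a method, and pass its child elements as arguments. This Python sketch mirrors the idea only, not ASP.NET's actual implementation:

```python
# Conceptual sketch of WebMethod-style dispatch: the local name of the
# first child of the SOAP Body selects the method, and its child elements
# become keyword arguments. All names here are illustrative.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

class Service:
    # Stand-in for a class whose methods are exposed as web service
    # operations (what [WebMethod] marks in ASP.NET).
    def Add(self, a, b):
        return int(a) + int(b)

def dispatch(service, envelope):
    root = ET.fromstring(envelope)
    body = root.find("{%s}Body" % SOAP_NS)
    call = list(body)[0]                  # e.g. the <Add> request element
    method = call.tag.split("}")[-1]      # local name -> method name
    args = {child.tag.split("}")[-1]: child.text for child in call}
    return getattr(service, method)(**args)

envelope = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body><Add><a>2</a><b>3</b></Add></soap:Body>
</soap:Envelope>"""

print(dispatch(Service(), envelope))  # 5
```

The real .asmx handler adds the reverse direction as well, serializing the return value back into a SOAP response, plus the WSDL/HTML documentation generation the article mentions.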

  • [May 16, 2003] "Using the Zip Classes in the J# Class Libraries to Compress Files and Data with C#." By Ianier Munoz (Dokumenta). In Microsoft MSDN Magazine (June 2003). "Zip compression lets you save space and network bandwidth when storing files or sending them over the wire. In addition, you don't lose the directory structure of folders you Zip, which makes it a pretty useful compression scheme. The C# language doesn't have any classes that let you manipulate Zip files, but since .NET-targeted languages can share class implementations, and J# exposes classes in the java.util.zip namespace, you can get to those classes in your C# code. This article explains how to use the Microsoft J# class libraries to create an application in C# that compresses and decompresses Zip files. It also shows other unique parts of the J# runtime you can use from any .NET-compliant language to save some coding... The J# runtime includes many useful classes that you can use from other languages in the .NET Framework. Some of these classes allow you to handle Zip files, perform high-precision mathematical calculations, or call the Windows API. Although most of this functionality can be achieved by using third-party libraries, the J# runtime is fully supported by Microsoft, and it is free..."
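Python's stdlib zipfile module offers a capability analogous to the java.util.zip classes the article describes; a sketch showing that archive entries preserve their directory paths across a round trip:

```python
# Analogous demonstration using Python's stdlib zipfile module (the
# article reaches java.util.zip from C# via the J# runtime): entries keep
# their relative paths, so the directory structure survives a round trip.
import io
import zipfile

def zip_files(files):
    """files: dict mapping archive path -> bytes. Returns the zip as bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path, data in files.items():
            zf.writestr(path, data)
    return buf.getvalue()

def unzip_files(blob):
    with zipfile.ZipFile(io.BytesIO(blob)) as zf:
        return {name: zf.read(name) for name in zf.namelist()}

original = {"docs/readme.txt": b"hello", "src/main.py": b"print('hi')"}
restored = unzip_files(zip_files(original))
print(restored == original)  # True
```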

  • [May 15, 2003] "The Fortune of the Commons. [Survey: The IT Industry.]" By [Staff]. In The Economist Newspaper (May 08, 2003). ['For the first time, the IT industry is widely adopting open standards -- thanks to the internet.'] "Network effects explain why the IT industry in the 1980s already started to move away from completely proprietary technology, the hallmark of the mainframe era. Microsoft, in particular, figured out how to strengthen feedback loops by encouraging other software firms to develop applications for its operating system. This kind of openness made Windows a standard, but users were still locked in... Now it seems that, thanks to the internet, the IT industry has entered a positive feedback loop in favour of open standards... The emergence of web services has concentrated minds wonderfully on developing open standards. Displaying an unprecedented degree of co-operation, the computer industry is developing a host of common technical rules that define these new kinds of online offerings. Hence the proliferation of new computer-related acronyms such as XML, SOAP, UDDI, WSDL and so on. To be sure, standardising web services is not always easy. As standardisation moves into more complex areas, such as security and the co-ordination of different offerings, consensus seems to be harder to achieve. Incumbents in particular have started to play games to give their wares an advantage. They are also trying to lock in customers by adding proprietary extensions to the standards mix. Most worrying, however, is the possibility that software firms will have to pay if they implement web-services standards. Most standards bodies currently allow firms to continue owning the intellectual property they contribute as long as they do not charge for it. But the more involved that standards for web services become, the greater the pressure that firms should be able to charge for the use of the patents they have invested in. 
Smaller web-services firms have already started ringing the alarm bells. The IT industry is at a crossroads, says Eric Newcomer, chief technology officer of Iona Technologies. One road leads to a truly standardised world in which companies are able to reap all the benefits of web services. The other road 'leads back to yesteryear, where proprietary systems ruled the day'. The controversy points to a more general problem with technology standards: where to draw the line between the IT commons and the areas where firms should compete with proprietary technology. If the commons area is too large, there might not be enough incentive to innovate. If it is too small, incompatibilities could keep web services from becoming a standard way for computer systems to communicate... In the long run, says Ken Krechmer, a telecommunications-standards expert, information technology itself will help to reconcile standardisation and innovation, because it will increasingly turn standards into 'etiquettes'... Faxes already work this way. Before transmitting anything, they negotiate over the speed at which they want to communicate. The extensible markup language (XML), the lingua franca underlying most web-services standards, also enables etiquettes. If the computer systems of two companies want to exchange the XML document for an order, they can first come to a common understanding of what the file's information means. Etiquettes thus allow for proprietary innovation while ensuring compatibility..."

  • [May 15, 2003] "Timed Text (TT) Authoring Format 1.0 Use Cases and Requirements." Edited by Glenn Adams (Extensible Formatting Systems, Inc). W3C Working Draft 15-May-2003. Latest version URL: http://www.w3.org/TR/tt-af-1-0-req/. Produced by members of the W3C Timed Text (TT) Working Group as part of the W3C Synchronized Multimedia Activity. "This document specifies usage scenarios and requirements for a timed text authoring format. A timed text authoring format is a content type that represents timed text media for the purpose of interchange among authoring systems. Timed text is textual information that is intrinsically or extrinsically associated with timing information. A principal motivation for the development of a common authoring format for timed text is the lack of a standard content format that supports the representation and interchange of textual information which is synchronized with other media elements or which serves as a synchronization master itself. Popular proprietary multimedia systems and their corresponding player components have defined distinct timed text formats for each proprietary use. As a consequence there is no common authoring interchange format that serves as a portable interchange format between such systems. A goal of the present work is to define such a portable interchange format to ease the burden of authoring tool developers and users as well as enhance portability of timed text content. A side effect of the development and deployment of a common timed text authoring format is that it simplifies the creation and distribution of synchronized text for use with a multitude of devices, such as multimedia players, caption, subtitle, and teletext encoders and decoders, character generators, LED displays, and other text display devices..."

  • [May 15, 2003] "Bosak on Universal Business Language (UBL)." By Simon St.Laurent. From xmlhack.com (May 15, 2003). "At last week's XML Europe, Jon Bosak, the 'father of XML', confessed that 'yes, I have visions' as he explained how he hoped XML might help in 'saving the world', leveling the playing field of global commerce by lowering the cost of doing business. Noting that the 'social agenda of SGML has always been about creator ownership of content,' with vendor, platform, and language neutrality at its core - Bosak now wants to take that social agenda and apply it in a much larger context. While documents and data are often considered separate territories, Bosak emphasized that the two work together: 'Business is built on the concept of standard legally binding documents.' Documents are crucial not only as information exchange, but as a key means of keeping humans in the computing loop. Bosak's vision of social change is as much about human business structures as technical ones. Bosak noted that while global integration is a common theme of advertisements for companies selling computer or transportation services, the reality is quite different. Companies that do business on a global scale (and their intermediaries) prefer to use lower-cost EDI transactions, but the initial costs of joining these systems are substantial, keeping out many possible participants... Bridging the gap between the dreams promoted in the advertisements and the reality is difficult, but Bosak sees XML as a key component. Bosak suggests replacing traditional EDI with a multi-layer package, built on standards at all levels: (1) transport -- the Internet; (2) a document-centric architecture -- XML; (3) royalty-free XML B2B tag set -- UBL; (4) royalty-free B2B infrastructure -- ebXML; (5) royalty-free office productivity format -- OpenOffice; (6) open-source software. Bosak described the combination of open source software and open standards as critical to making this project feasible..." 
General references in "Universal Business Language (UBL)."

  • [May 15, 2003] "Add XML Functionality to Your Flash Movies." By Jean-Luc David. From (May 15, 2003). "In support of the XML standard, Macromedia has added XML functionality to the Flash Player. Why would you want to load XML data into Flash? There are several advantages. First, Flash has the unique ability to process XML on the client side on almost any platform. Typically, most XML transformations are handled on the server side because browser support for XML is sporadic at best. Second, Flash can seamlessly combine XML data with cool animation and sound. The XML object also extends the functionality of Flash. URL-encoded query string variables are traditionally used to bring data into a Flash movie via the Load Variables function... Most browsers are limited to a header size of approximately 256 characters (including query string data). The XML object has no such limitation. This makes it an ideal method for bringing database content into your Flash movies. On top of that, the XML object allows you to import and integrate any XML-formatted data available on the Web into Flash. The Flash XML object gives you all the tools necessary to bring in, parse, manipulate, and export XML-formatted data. The XML object API is well documented and available on the Macromedia Web site..."

  • [May 14, 2003] "Style Stylesheets to Extend XSLT, Part 2. Improving the Trace Generator." By Joseph Kesselman (Advisory Scientist, IBM). From IBM developerWorks, XML zone. ['In Part 1 of the series, Joe demonstrated the basics of using an XSLT stylesheet to enhance another stylesheet. In this installment he develops a more polished version, making the trace generator more detailed, more selective, and more controllable -- and as a bonus, he includes a reusable XPath generator template.'] "My previous article demonstrated the concept of using a stylesheet to compile new features into another stylesheet. Specifically, I showed you how to write a simple execution tracing tool, which automatically modifies a stylesheet so it will generate comments in the output document as it runs, showing which parts of the latter were produced by each template. However, I ended the article by pointing out that the basic version I'd developed was quite limited, and suggesting a number of ways in which it could be improved. In this installment I'll add some of those missing features, and turn this proof-of-concept into a much more useful tool..." With source code. For related resources, see "Extensible Stylesheet Language (XSL/XSLT)."
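The underlying trick, rewriting a stylesheet so that each of its templates emits a trace comment, can be sketched outside XSLT as well. The article's tool is itself written in XSLT; this hedged Python/ElementTree version inserts an xsl:comment instruction at the head of every template:

```python
# Conceptual sketch of the article's technique: rewrite a stylesheet so
# each template emits a trace comment into its output. The real trace
# generator is itself an XSLT stylesheet; here ElementTree inserts an
# <xsl:comment> instruction at the start of every xsl:template.
import xml.etree.ElementTree as ET

XSL = "http://www.w3.org/1999/XSL/Transform"
ET.register_namespace("xsl", XSL)

def add_trace(stylesheet_text):
    root = ET.fromstring(stylesheet_text)
    # Materialize the list first so insertion does not disturb iteration.
    for template in list(root.iter("{%s}template" % XSL)):
        trace = ET.Element("{%s}comment" % XSL)
        trace.text = "trace: template match=%s" % template.get("match")
        template.insert(0, trace)
    return ET.tostring(root, encoding="unicode")

source = """<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/"><html/></xsl:template>
</xsl:stylesheet>"""

print(add_trace(source))
```

When the modified stylesheet runs, each template writes a comment into the result document identifying which template produced that region, which is the behavior the article builds on and then refines.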

  • [May 14, 2003] "A Uniform Resource Name (URN) Namespace for the Web3D Consortium (Web3D)." By Aaron E. Walsh (Mantis Development Corp). IETF Network Working Group. Request for Comments: #3541. May 2003. Category: Informational. "Web3D is a non-profit organization with a mandate to develop and promote open standards to enable 3D for the Internet, Web and broadcast applications. Web3D is responsible for developing, advancing, and maintaining the VRML97 ISO/IEC International Standard (ISO/IEC 14772-1:1997), X3D (the forthcoming official successor to VRML) and related technologies. Web3D maintains and extends its standardization activities through the coordinated efforts of dedicated Working Groups, Task Groups, and professional Liaisons to related standards bodies such as ISO, W3C, and MPEG... Web3D would like to assign unique, permanent, location-independent names based on URNs for some resources it produces or manages... This document describes a Uniform Resource Name (URN) namespace for the Web3D Consortium (Web3D) for naming persistent resources such as technical documents and specifications, Virtual Reality Modeling Language (VRML) and Extensible 3D (X3D) files and resources, Extensible Markup Language (XML) Document Type Definitions (DTDs), XML Schemas, namespaces, style sheets, media assets, and other resources produced or managed by Web3D." Examples [for pedagogical reasons only]: 'urn:web3d:vrml97:node:GeoCoordinate'; 'urn:web3d:vrml97:node:NurbsCurve'; 'urn:web3d:x3d:schemas:2002:compact'; 'urn:web3d:x3d:schemas:2002:compromise'; 'urn:web3d:media:textures:nature:grass'. [cache]
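A urn:web3d name resolves into colon-separated components after the namespace identifier; a small sketch using the RFC's own examples:

```python
# Sketch: split a urn:web3d URN (example names are from RFC 3541's
# pedagogical list) into its namespace-specific components.
def parse_web3d_urn(urn):
    parts = urn.split(":")
    if parts[:2] != ["urn", "web3d"]:
        raise ValueError("not a urn:web3d URN: %r" % urn)
    return parts[2:]

print(parse_web3d_urn("urn:web3d:vrml97:node:GeoCoordinate"))
# ['vrml97', 'node', 'GeoCoordinate']
```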

  • [May 14, 2003] "Requirements and Design for Voucher Trading System (VTS)." By Ko Fujimura (NTT Corporation) and Donald E. Eastlake 3rd (Motorola). IETF Network Working Group. Request for Comments: #3506. Category: Informational. March 2003. "Crediting loyalty points and collecting digital coupons or gift certificates are common functions in purchasing and trading transactions. These activities can be generalized using the concept of a 'voucher', which is a digital representation of the right to claim goods or services. This document presents a Voucher Trading System (VTS) that circulates vouchers securely and its terminology; it lists design principles and requirements for VTS and the Generic Voucher Language (GVL), with which diverse types of vouchers can be described... To achieve consistency with other related standards shown below, the syntax of the language MUST be based on XML... The language specification should be consistent with the following specifications: (1) Internet Open Trading Protocol v1.0; (2) XML-Signature; (3) Extensible Markup Language (XML) Recommendation; (4) ECML Version 2." See related discussion in the news item "IETF Internet Open Trading Protocol Working Group Publishes RFC for Voucher Trading System." [cache]

  • [May 13, 2003] "Thinking XML: The Commons of Creativity. How Machine-Readable Licenses Can Foster Creative Output and Exchange." By Uche Ogbuji (Principal Consultant, Fourthought, Inc). From IBM developerWorks, XML Zone. May 12, 2003. ['Many artists independent of big media concerns seek to collaborate with others and make their work more widely available. They are often willing to offer less restrictive contractual terms than those that consumers have recently been forced to accept. Creative Commons, which Uche Ogbuji introduces in this article, seeks to address this need by providing a way to express copyright license terms that are both human-readable and machine-readable. The machine-readable form uses RDF and thus makes available the network effects that have been covered throughout this column.'] "From independent filmmakers, musicians, and authors to the open source and Weblog communities, there has long been a push to advertise and distribute material on the Internet, and usually at much lower cost and less restrictive terms than more commercial content. This push has had aspects of politics, economics, and technical problem-solving, raising questions like the following: Can the wider spread of less restricted content put pressure on policymakers and big media companies to lessen restrictions on consumers' rights? Would this pressure increase if the variety of rights and restrictions on content were better communicated to consumers? Would independent producers be able to reach a wider audience, and thus earn a better living, if they had a better way to advertise their output and perhaps attract business by declaring less restrictive licenses? Will a technical network of machine-readable metadata help to improve the tools and communications channels between producers and consumers of content? 
To answer these questions and others, a coalition of independent content producers, technologists, and lawyers banded together last year under the umbrella of the Creative Commons project to produce technology for machine-readable copyright licenses. This project employs RDF to describe the various rights and restrictions for a particular license using a vocabulary that was developed by the project's legal wizards. This enabling technology is intended to make it easy for people to find content to share and collaborate on in a peer-to-peer environment. As Creative Commons folk like to say, it allows producers and consumers to skip the intermediaries. In the common scenario, my favorite indie band signs on to a small music label, which then signs a distribution deal with a large label, which then distributes CDs to a large music retailer, which I then visit to make the purchase. I pay a price premium for each layer in that intermediary chain, and I am forced to swallow usage terms that are set by very expensive lawyers who do not have my interests in mind, nor usually those of the original producer. In the Creative Commons world, I go online to see the sort of music I like and who may be producing music that I can enjoy on what I consider fair terms. I can then make my commerce directly with the producer. In this article, I shall introduce the data formats used by Creative Commons... Creative Commons is perhaps the most impressive embodiment of the ideas and potential behind RDF and related technologies. The vocabulary is anchored in authority developed by a team of lawyers -- and lawyers, of course, are the profession most associated with nailing down semantics. More important, the project is showing remarkable growth and providing clear benefits to a very broad variety of artists, developers, lawyers, and consumers..." 
See: (1) "Creative Commons Project"; (2) W3C website "Resource Description Framework (RDF)"; (3) local references in "Resource Description Framework (RDF)."
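The machine-readable side can be illustrated with a short parse. The RDF snippet below approximates the cc vocabulary of the period (terms such as cc:permits and cc:requires follow the Creative Commons schema, but treat the exact snippet as an assumption rather than an authoritative example):

```python
# Illustrative parse of a Creative Commons machine-readable license.
# The RDF below approximates the cc vocabulary (http://web.resource.org/cc/);
# consult the Creative Commons schema for the authoritative form.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
CC = "http://web.resource.org/cc/"

LICENSE_RDF = """<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
                          xmlns:cc="http://web.resource.org/cc/">
  <cc:License rdf:about="http://creativecommons.org/licenses/by/1.0/">
    <cc:permits rdf:resource="http://web.resource.org/cc/Reproduction"/>
    <cc:permits rdf:resource="http://web.resource.org/cc/Distribution"/>
    <cc:requires rdf:resource="http://web.resource.org/cc/Attribution"/>
  </cc:License>
</rdf:RDF>"""

def license_terms(rdf_text):
    """Return the permitted and required terms declared by the license."""
    root = ET.fromstring(rdf_text)
    lic = root.find("{%s}License" % CC)
    def resources(prop):
        return [e.get("{%s}resource" % RDF)
                for e in lic.findall("{%s}%s" % (CC, prop))]
    return {"permits": resources("permits"), "requires": resources("requires")}

terms = license_terms(LICENSE_RDF)
print(terms["requires"])  # ['http://web.resource.org/cc/Attribution']
```

A consumer-side tool can walk such statements to answer "may I copy and redistribute this, and on what conditions?" without a lawyer in the loop, which is the network effect the column describes.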

  • [May 13, 2003] "Dynamic Web Services." By Atul Saini (Fiorano Software). In EAI Journal Volume 5, Number 4 (April 2003), pages 17-19. "The new enterprise architecture is one that is service-based, implemented on top of a fully distributed, peer-to-peer, event-driven system. A services operating platform allows enterprises to create a unified computing infrastructure... An SOP [Services Operating Platform] is a pure infrastructure platform that allows implementation of software services of any kind, including Web services. It differs from existing Web services implementations in that it requires a fully distributed peer-to-peer, message-enabled infrastructure instead of a centralized request/reply infrastructure. In the architecture of a typical SOP node, a daemon based on this architecture runs on each network node, with all the daemons being connected via a message bus. An SOP provides built-in support for event-based messaging, transparent process routing, remote monitoring, tracing and logging, scheduling, presence and availability, security and remote deployment. An SOP facilitates the composition, deployment, and modification of distributed enterprise applications using visual tools. A powerful advantage of an SOP is that it provides a single cohesive platform for a company's EAI, B2B integration, BPM, collaboration, and general distributed computing requirements... Existing approaches to the EAI, BPM, and related problems of enterprises don't fully satisfy the underlying infrastructure requirements these problems demand. Service-based architectures, implemented on top of a fully distributed, peer-to-peer, event-driven system, bring a whole new level of generality to the enterprise platform space. SOPs implemented this way address the needs of BPM at the application and human interaction levels. They address these needs within and across enterprises, and across multiple domains, including some areas that have traditionally been served by distinct solutions. 
An SOP thus allows an enterprise to create a unified computing infrastructure for all its distributed computing requirements, saving time and money, and increasing process efficiencies across the board..."

  • [May 13, 2003] "Web Services Security, Part 3." By Bilal Siddiqui. From O'Reilly WebServices.xml.com (May 13, 2003). "In the first article of this series, I explained why traditional network firewalls are inadequate to provide security to web service applications, which is why we need to implement web service security at the XML messaging layer. In the second article, I discussed signed and encrypted XML messages and a B2B scenario to elaborate the application of XML signature and encryption in web services. At the end of the second article, I introduced Web Services Security (WSS) and explained how WSS applies XML signature and XML encryption to SOAP messages. I also introduced the concept of security tokens and demonstrated the use of digital certificates as security tokens in WSS messages. In this article I discuss XML-based authentication and the sharing of authentication information across different applications, known as Single Sign-On (SSO). The Security Assertion Markup Language (SAML, often pronounced 'sam-ull') from OASIS helps reach this goal by wrapping authentication information in an XML format... For our purposes, authentication means verifying the identity of a user. When you check your e-mail, you enter your username and password to get authenticated. It is assumed that you have kept your password confidential. Therefore the knowledge of your password is used to make sure that you are the one who is trying to check your email. [An X509 certificate can be] a security token (just like a password) that the recipient of the WSS message can use in order to authenticate the user before allowing specially discounted rates for booking. A security token is presented to a gatekeeper in order for a user to get authenticated. Now imagine that the gatekeeper is guarding the main gate of a large building with many offices. Visitors are required to show their ID cards and get authenticated at the main gate. 
The gatekeeper checks the ID card by matching it with his internal record and then allows the visitor to enter the building... A possible solution to allow sharing of authentication information is to issue a temporary identification badge to a visitor at the main gate of the building. The gatekeeper at the main gate will issue a badge to each visitor after successful authentication. The identification badge will have a short expiry. The visitor will show the identification badge while entering each office. The office gatekeeper will check the validity of badge before allowing or disallowing a person to enter the office. Such scenarios are common in Enterprise Application Integration and B2B applications. Whether applications are running within or across the boundaries of an enterprise, the sharing of authentication information forms an important part of application integration effort. Naturally, the sharing of authentication information prevents each application from having to perform the entire authentication process... ... The next article of this series will put the pieces together and demonstrate various possibilities of using them to accomplish the goal of securing web services..." General references in "Security Assertion Markup Language (SAML)."
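The badge-with-a-short-expiry analogy maps directly onto a SAML authentication assertion: the asserting party issues a statement saying who was authenticated and for how long the statement is valid. A minimal, unsigned sketch in Python (element names follow the SAML 1.x assertion vocabulary; the subject, issuer, and validity window are invented for illustration):

```python
import xml.etree.ElementTree as ET

SAML_NS = "urn:oasis:names:tc:SAML:1.0:assertion"

def make_assertion(subject, issuer, not_before, not_on_or_after):
    """Build a bare-bones SAML 1.x authentication assertion (unsigned)."""
    ET.register_namespace("saml", SAML_NS)
    assertion = ET.Element(f"{{{SAML_NS}}}Assertion",
                           Issuer=issuer, MajorVersion="1", MinorVersion="0")
    # The Conditions element plays the role of the badge's expiry date.
    ET.SubElement(assertion, f"{{{SAML_NS}}}Conditions",
                  NotBefore=not_before, NotOnOrAfter=not_on_or_after)
    stmt = ET.SubElement(assertion, f"{{{SAML_NS}}}AuthenticationStatement")
    subj = ET.SubElement(stmt, f"{{{SAML_NS}}}Subject")
    name = ET.SubElement(subj, f"{{{SAML_NS}}}NameIdentifier")
    name.text = subject
    return assertion

a = make_assertion("alice", "https://gatekeeper.example.com",
                   "2003-05-13T09:00:00Z", "2003-05-13T09:05:00Z")
xml_text = ET.tostring(a, encoding="unicode")
```

A real deployment would sign the assertion with XML Signature, as the earlier articles in the series describe; the sketch only shows the shape of the token being shared.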

  • [May 13, 2003] "Using Python, Jython, and Lucene to Search Outlook Email." By Jon Udell. From O'Reilly WebServices.xml.com (May 13, 2003). "Last fall, I wrote about ZOË, an innovative (and free) Java-based local web service that indexes your email in a client-independent fashion, building usefully cross-linked views of messages, attachments, and addresses. I've used it with Outlook on Windows and with Mail and Mozilla on Mac OS X. But I haven't managed to integrate it with the fairly elaborate foldering and filtering that I do in Outlook. As the comments on the ZOË article indicate, there are lots of ways to search email. Several people mentioned Lotus Notes, which indeed has always had an excellent indexed search feature. Nobody mentioned Evolution, which does too. But until Chandler becomes dogfood, Outlook seems likely to remain my primary personal information manager. I like it pretty well as a PIM, actually, and I like it a whole lot more since I started using Mark Hammond's amazing SpamBayes Outlook addin. The problem with using ZOË for search is that it sees my messages before my Outlook filters can move them into folders and before SpamBayes can separate the ham from the spam. Outlook itself, of course, is shockingly bad at search. I never thought I'd find myself digging around in my Outlook message store, but Mark's SpamBayes addin -- which is written in Python -- turns out to be a great Python/MAPI tutorial. Borrowing heavily from his examples, I came up with a script to extract my Outlook mail to a bunch of files that I could feed to a standalone indexer. It relies on the standard MAPI support in PythonWin, and also on the mapi_driver module included with SpamBayes..." See also Jython Essentials, by Samuele Pedroni and Noel Rappin.

  • [May 13, 2003] "Auto Industry Group to Upgrade Supply Chain With XML." By Todd R. Weiss. In ComputerWorld (May 13, 2003). "Auto industry e-services exchange Covisint has launched an IT consortium that aims to replace existing Electronic Data Interchange (EDI) messaging with more flexible -- and less costly -- XML systems. In an announcement yesterday, Southfield, Mich.-based Covisint said the industry consortium includes automakers DaimlerChrysler AG, Ford Motor Co. and General Motors Corp., and parts and technology vendors Johnson Controls Inc., Lear Corp. and Delphi Corp. Covisint spokesman Paul Manns said the group is being launched to help find cheaper and more efficient ways for automakers to communicate directly with vendors about parts availability, engineering concerns and other issues. For years, the auto industry has relied on complex and antiquated EDI technologies for inter-company communications, he said, which companies have been hesitant to abandon because of past investments. The new system, which the consortium hopes to have in place by the end of the year, will use XML and a combination of off-the-shelf products and custom coding, Manns said. 'It's a well recognized problem,' he said of the aging EDI infrastructure. 'It's a tremendous headache in the industry' in terms of controlling costs and keeping in touch. The expected savings from the new XML-based system ranges between 30% and 50%, he said... The new system will also allow Covisint customers to send and receive electronic data to applications behind each other's firewalls, as well as to Covisint applications. It will support the latest Internet-based protocols, XML standard messages and will be compatible with the millions of EDI messages exchanged daily in the automotive business..." See details in the announcement: "Covisint and Consortium of Automotive Leaders to Develop Data Messaging Solution. Product Will Be a Cost Effective Alternative to EDI."

  • [May 13, 2003] "Auto Consortium Aims For Lower-Cost Data Messaging." By Rick Whiting. In InformationWeek (May 12, 2003). ['Covisint and a group of major automakers and suppliers are preparing to test a system based on XML and other standards as an alternative to EDI.'] "Covisint LLC, the online exchange for the auto industry, and a consortium of the largest automakers and their suppliers are developing a data-messaging system based on XML and other standards that Covisint says will be a cheaper alternative to costly EDI systems. The project was unveiled Monday [2003-05-12], although Covisint executives say development work has been under way since early this year. Covisint expects to pilot-test the new trading hub this summer, and the six companies assisting with the project are scheduled to begin using it for high-volume messaging in November. The system will be expanded to second- and third-tier suppliers next year, says Bob Paul, Covisint's sales and marketing executive VP. DaimlerChrysler, Delphi, Ford, General Motors, Johnson Controls, and Lear are contributing personnel and an undisclosed amount of money to the effort..." See details in the announcement: "Covisint and Consortium of Automotive Leaders to Develop Data Messaging Solution. Product Will Be a Cost Effective Alternative to EDI."

  • [May 13, 2003] "Transparently Cache XSL Transformations with JAXP. Boost Performance and Retain Usability by Implementing Implicit Caching Inside Transformer Factories." By Alexey Valikov. In JavaWorld (May 02, 2003). ['When using XSLT (Extensible Stylesheet Language Transformations) in Web environments, where numerous concurrent threads repeatedly use transformations, implementing a stylesheet cache drastically boosts performance. However, using a stylesheet cache on top of normal JAXP (Java API for XML Parsing) is often not convenient or suitable for pure JAXP users. This article introduces the idea of pushing cache functionality inside transformer factory implementations, making cache usage absolutely transparent.'] "No doubt, XSLT (Extensible Stylesheet Language Transformations) is a powerful technology that has many applications in the XML world. In particular, numerous Web developers can take advantage of XSLT at the presentation layer to gain convenience and flexibility. However, the price of these advantages is higher memory and CPU load, which makes developers more attentive to optimization and caching techniques when using XSLT. Caching is even more important in Web environments, where numerous threads share stylesheets. In these cases, proper transformation caching proves vital for performance. A usual recommendation when using the Java API for XML Processing (JAXP) is to load transformations into a Templates object and then use this object to produce a Transformer rather than instantiate a Transformer object directly from the factory. That way, a Templates object may be reused to produce more transformers later and save time on stylesheet parsing and compilation... [However,] although this technique positively influences performance... it is not convenient for the developer... 
you must take care of tracking the date of the last stylesheet modification, reloading outdated transformations, providing safe and efficient multithreaded access to the stylesheet cache, and many other small details. Even a natural move -- encapsulating all the required functionality into a standalone transformer cache implementation -- will not save a developer from third-party modules, which use standard JAXP routines without any caching... There is, however, a simple and elegant solution to this problem. As long as JAXP allows us to replace the TransformerFactory implementation in use, why don't we simply write a factory that would have intrinsic caching capabilities? [With this factory implementation] you have one less headache: you no longer have to worry about loading, caching, and reloading stylesheets. You are guaranteed that third-party libraries that use standard JAXP will use caching as well. You can be sure of no concurrent cache access conflicts, and the cache will not be a bottleneck. There are, however, several disadvantages... First, this factory caches only those stylesheets loaded from files. The reason is that, while we can easily check the timestamp of the file's last modification, this is not always possible for other sources. Another problem remains with stylesheets that import or include other stylesheets. Modification of the imported or included stylesheet will not cause the main stylesheet to be reloaded. Finally, extending an existing factory implementation binds you to a certain XSLT processor (unless you write a caching extension for every factory you might use). Happily, in most cases, these issues are not crucial, and we can take advantage of factory-based caching: transparency, convenience, and performance..." See: (1) Sun website Java API for XML Processing (JAXP); (2) local references in "Java API for XML Parsing (JAXP)."
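The essential moves of the caching factory are language-independent: key the cache by stylesheet path, record the file's last-modified time at compile time, and recompile only when the timestamp changes. A schematic Python version of that logic (the compile function here is a stand-in for Templates creation in JAXP; all names are invented for illustration):

```python
import os
import tempfile

class CachingTemplateFactory:
    """Compile-once template cache keyed by path, invalidated on mtime change."""
    def __init__(self, compile_fn):
        self._compile = compile_fn   # stand-in for TransformerFactory.newTemplates()
        self._cache = {}             # path -> (mtime at compile time, compiled object)

    def get(self, path):
        mtime = os.path.getmtime(path)
        cached = self._cache.get(path)
        if cached is not None and cached[0] == mtime:
            return cached[1]             # cache hit: reuse the compiled stylesheet
        compiled = self._compile(path)   # miss or stale timestamp: recompile
        self._cache[path] = (mtime, compiled)
        return compiled

# Demonstration with a counting "compiler" instead of a real XSLT processor.
calls = []
def fake_compile(path):
    calls.append(path)
    return ("compiled", path)

with tempfile.NamedTemporaryFile(suffix=".xsl", delete=False) as f:
    f.write(b"<xsl:stylesheet/>")
    sheet = f.name

factory = CachingTemplateFactory(fake_compile)
first = factory.get(sheet)
second = factory.get(sheet)   # served from cache; fake_compile is not called again
```

Note that this sketch shares the first limitation the article describes: it only works for sources whose modification time can be checked, i.e., files.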

  • [May 13, 2003] "Jini Starter Kit 2.0 Tightens Jini's Security Framework. Introducing Jini's New Security Features." By Frank Sommers. In JavaWorld (May 09, 2003). ['Security for distributed systems based on mobile Java code is the theme of Sun Microsystems' new Jini Starter Kit, JSK 2.0. JSK 2.0 incorporates three new specifications: a new programming model and infrastructure for Jini services, a new implementation of Java RMI (Remote Method Invocation), and several changes to existing Jini tools and utilities.'] "Java's creators have often highlighted the JVM's three biggest benefits: automatic garbage collection, built-in security, and the ability to load classes from any network location. Automatic garbage collection is now taken for granted in most modern programming environments. However, network-mobile classloading and the Java security infrastructure have evolved on separate, parallel paths. As a result, completely satisfying the security needs of distributed systems based on mobile Java code has thus far been impossible. Jini's security model extends both Java's mobile code and security features, and combines their strengths, enabling secure distributed computing with mobile code... Perhaps the best news about the Jini security framework is that it requires few changes to existing Jini services. Instead, it defines security as a deployment-time option. If you have an existing Jini service, you can take advantage of utilities in the new JSK to deploy that service securely. Jini security, therefore, shares that similarity with J2EE (Java 2 Platform, Enterprise Edition) security -- it primarily belongs to the domains of system architecture and administration. The basic unit of a Jini service remains the service proxy you register in lookup services. 
When clients discover that proxy -- based on the interfaces the proxy implements -- any code for the proxy object not already available at the client is downloaded into the client following RMI (Remote Method Invocation) mobile code semantics. Following that, the client invokes methods on the proxy. As a client, you don't care or know how the proxy implements those methods: some proxy methods may involve network communication with a remote service implementation, whereas others might execute locally to the proxy. The Jini security model adds three steps to that simple programming model..." See also the Davis Project at Jini.org. The Davis project "is an effort by the Jini technology project team at Sun Microsystems to provide the new programming models and infrastructure needed for a Jini technology security architecture. The Davis project has produced a Beta release containing the new facilities, integrated with the services and utilities that will make up the 2.0 release of the Jini Technology Starter Kit from Sun, and three Proposals covering the associated specifications, submitted to the Jini Community for approval under the Jini Community Decision Process (JDP)..."
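The smart-proxy idea is easy to picture outside of Jini: the object the client holds implements the service interface, answers some calls from local state, and forwards others over the wire. A conceptual analogy in Python, not Jini's actual API (the transport is a plain function standing in for an RMI stub; all names are invented):

```python
class PrinterProxy:
    """Smart proxy: some methods run locally, others go to the remote service."""
    def __init__(self, transport, model):
        self._transport = transport   # stand-in for an RMI call to the service
        self._model = model           # local state shipped along with the proxy

    def model_name(self):
        return self._model            # answered locally, no network traffic

    def print_document(self, doc):
        return self._transport("print", doc)   # forwarded to the remote service

# A fake "remote end" so the sketch is runnable.
log = []
def fake_transport(op, payload):
    log.append((op, payload))
    return "ok"

proxy = PrinterProxy(fake_transport, "LaserWriter")
local_answer = proxy.model_name()
remote_answer = proxy.print_document("report.xml")
```

The Jini security framework's added steps are precisely about deciding whether such downloaded proxy code, and the endpoints it talks to, can be trusted.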

  • [May 13, 2003] "XML Features of Oracle 8i and 9i." By Simon Hume. From WebReference.com (May 10, 2003). ['Database specialist Simon Hume chips in with an introduction to the XML features of the most recent Oracle database versions.'] "XML and relational databases are both technologies for structuring, cataloguing and processing data. If data has a regular and atomic structure, it is more appropriate and efficient to use a database than XML. In this case, why would you wish to go to the trouble of converting such data from a database into XML and vice versa? Reasons include [...] On the other hand text documents, which are usually irregularly structured, cannot be effectively stored in a relational database. In this case, you can only store them in the database as BLOBs, which cannot be searched or processed in the normal way. However, there are cases where this is still desirable: very large repositories of pre-existing XML/SGML documents. The database can be used for cataloguing and searching, and documents once extracted can be processed further using XSL/XSLT. So there are two possibilities: perform backwards and forwards conversion of data between the database and XML, or store complete XML documents inside the database. Oracle terminology calls the former 'generated XML', and the latter 'authored XML'. And there are essentially two reasons for using XML and databases together: The first Oracle calls 'content and document management' where the requirement is for data to be presented differently as required, the second 'B2B messaging' where XML is used to communicate between different applications/sites/companies (e.g., EDI)... The XML tools provided with Oracle are mainly to simplify conversion between the database and XML, and a standard XML parser for the usual XSL/XSL-T/DOM/SAX operations..."

• [May 12, 2003] "Business Process with BPEL4WS: Learning BPEL4WS, Part 7. Adding Correlation and Fault Handling to a Process." By Rania Khalaf and William A. Nagy (Software Engineers, IBM TJ Watson Research Center). From IBM developerWorks, Web services. April 22, 2003. ['In the previous article we examined correlation and fault handling in BPEL4WS. Now, we are going to extend the simple BPEL4WS process that we have been working with in the previous articles by adding the ability to communicate with a pre-existing process instance and to capture faults which may occur during its execution.'] "Having explained correlation and fault handling in BPEL4WS in the previous column of this series, we now add these capabilities to the loan approval sample we have been working with to illustrate their use. In the scenario shown here, if a client is approved for a loan he may send a request to actually obtain that loan. Correlation is used to make sure that the second request goes to the correct process instance. Fault handling will be added to catch an explicitly thrown error that signals that the client is trying to obtain a loan for an amount larger than the one that was approved. If such a fault is caught, a reply is sent back describing the problem. We also show the effect of error handling with nested scopes and its effects on the links that leave a scope that has handled its error, as well as one that was not able to do so and had to rethrow the fault to its enclosing scope... In the next article in this [BPEL4WS] series, we will look at the switch and pick activities, and at compensation..." ['Note that this article refers to version 1.0 of the BPEL4WS specification. The latest version, BPEL4WS 1.1, is now available, and an article describing the key differences between the two specifications will be available shortly.'] Also in PDF format. See: (1) BPEL4WS 1.1 2003-05-05; (2) the news item "OASIS Forms Web Services Business Process Execution Language TC (WSBPEL)."

  • [May 12, 2003] "DB Updates Ease Web Services." By Lisa Vaas. In eWEEK (May 12, 2003). "Best-of-breed XML database developers Ipedo Inc., Sonic Software Corp. and Sleepycat Software Inc. are enhancing their respective native XML database software to make it easier for enterprises to use and manage XML data in Web services environments... Ipedo, for example, late this month will release Version 3.3 of its XML Information Hub, which boasts three new components. The first, called content conversion, automatically converts PDF, Microsoft Corp. Word and other non-XML documents into XML. The auto-organization component organizes, merges and transforms inbound content according to business rules, said Ipedo officials, in Redwood City, Calif. The third new piece, a universal XML Query engine, provides local and remote content and data source searching and updating using the XQuery standard. Some of the Ipedo upgrade's new features are compelling for user Thor Anderson, who is manager of program development at Collegis Inc. Anderson is working with Texas A&M University to take online digital library resources and put them into a repository with additional, educationally specific metadata. 'The more that [XQuery] engine is improved and sped up and usable, that's important,' said Anderson... Separately, Sonic late this summer will roll out a suite of integration products, called Sonic Business Integration Suite, that includes Sonic XML Server, a renamed and enhanced version of the Excelon XIS (Extensible Information Server) native XML database that the company acquired last fall. Enhancements to the XML database include a Web services-style interface laid over the XML processing and storage engine within XIS, said Sonic officials, in Bedford, Mass... Sleepycat next month will release Version 4.2 of Berkeley DB, its open-source embedded database..." See: "Ipedo Enhancements Boost Award-Winning XML Information Hub. 
Content Conversion, Auto-Organization, Universal XQuery, Web Services Views Reduce Cost and Complexity of Information Delivery." General references in "XML and Databases."

• [May 12, 2003] "Berkeley DB XML: An Embedded XML Database." By Paul Ford. From XML.com (May 07, 2003). ['Paul Ford introduces the embeddable Berkeley DB XML database. For many years the open source Berkeley DB libraries have been a popular choice for embedded database applications. It has been so ubiquitously used that chances are, you rely on some software product that embeds Berkeley DB. It is therefore pretty exciting when SleepyCat, the maintainers of Berkeley DB, announce that they will be releasing an XML-aware version of their database software.'] "Berkeley DB XML is an open source, embedded XML database created by Sleepycat Software. It's built on top of Berkeley DB, a 'key-value' database which provides record storage and transaction management. Unlike relational databases, which store data in relational tables, Berkeley DB XML is designed to store arbitrary trees of XML data. These can then be matched and retrieved, either as complete documents or as fragments, via the XML query language XPath. Berkeley DB XML is written in C++; APIs for Berkeley DB XML exist for C/C++, Java, Perl, Python, and TCL, and more language interfaces are currently under development... An XML database has several advantages over key-value, relational, and object-oriented databases: (1) XML data is dropped straight into the database; it does not need to be manipulated or extracted from a document in order to be stored. (2) When inserted into the database, most (in Berkeley DB XML, all) aspects of an XML document, including white space, are maintained exactly. (3) Queries return XML documents or fragments, which means that the hierarchical structure of XML information is maintained... Berkeley DB XML, even in beta, is a promising solution for XML storage and retrieval. According to [John] Merrells, it is being evaluated by "several serious commercial enterprises." 
Based on Berkeley DB, it has a well-proven foundation for data storage, and SleepyCat's prior releases have proven the company to be a reliable provider of well-documented open source tools for data storage. SleepyCat allows for commercial licensing of their open source tools, which may make this solution attractive for corporations that are skittish about open source. It is also worth noting that Berkeley DB XML users essentially get Berkeley DB "for free" with the product. In other words, it's easy to mix and match regular DB data sources with XML data sources. This combination may provide a strong alternative to relational and object-oriented databases... Since any data storage technology requires a significant investment in time and effort, this strong level of community and corporate support is encouraging; Berkeley DB XML, currently in its infancy, seems likely to be around for a long time, and by offering a standard embedded interface it may provide a very useful tool for programmers in need of robust data storage who want to avoid the overhead of a relational database. The tool has some growing to do, but even in its current form many programmers will find it a useful tool with a logical, powerful interface..." General references in "XML and Databases."
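The store-whole-documents, query-with-XPath model is easy to demonstrate in miniature. A toy container in Python, using ElementTree's limited XPath subset in place of Berkeley DB XML's query engine (the document contents are invented for illustration):

```python
import xml.etree.ElementTree as ET

class ToyXmlContainer:
    """Keep whole XML documents; return matching fragments via XPath-style paths."""
    def __init__(self):
        self._docs = {}    # document name -> parsed document root

    def put(self, name, xml_text):
        self._docs[name] = ET.fromstring(xml_text)   # stored as a tree, not a blob

    def query(self, path):
        hits = []
        for root in self._docs.values():
            hits.extend(root.findall(path))   # ElementTree supports a small XPath subset
        return hits

c = ToyXmlContainer()
c.put("b1", "<book><title>XML in a Nutshell</title><year>2002</year></book>")
c.put("b2", "<book><title>Effective Java</title><year>2001</year></book>")
titles = sorted(el.text for el in c.query("title"))
```

The point of the sketch is the contrast with BLOB storage: because documents are kept as trees, a query can return fragments rather than forcing the application to re-parse whole documents.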

  • [May 12, 2003] "XSLT 2 and Delimited Lists." By Bob DuCharme. From XML.com (May 07, 2003). ['The author looks at the upcoming new version of XSLT, XSLT 2.0, and in particular, its support for tokenizing strings. Using the development branch of the popular XSLT engine Saxon, Bob shows how an SVG polygon path can be processed with XSLT 2.0.'] "As part of his work as the editor of the XSLT 2.0 specification, Michael Kay has been prototyping the new features of XSLT 2.0 and XPath 2.0 in a separate development branch of his well-known Saxon XSLT processor. As I write this, his most recent prototype release is 7.4. His recommended stable implementation of XSLT 1.0 is at release 6.5.2; see the project homepage for details on the progress of these two branches. Release 7.4 lets us play with many of XSLT 2.0's new features... The XSLT 2.0 specification is still a Working Draft, so you don't want to build production code around it, but it's still fun to try out some of the new features offered by the next generation of XSLT and XPath. In the next few columns, I'll look at some of these features. Most functions have been separated into their own specification, separate from the XPath 2.0 spec, because they're shared with XQuery: XQuery 1.0 and XPath 2.0 Functions and Operators... The combination of tokenize(), item-at(), and index-of() let you take advantage of something that's always been around in XML 1.0, but that you couldn't do much with before: attributes of type NMTOKENS. You could always declare an attribute to be of this type and then store multiple values in it separated by spaces, but splitting up these lists required either the Perl split function, its equivalent in another language, or lots of code to split it up when using a language that didn't offer such a function, like XSLT 1.0. 
Now a single function can split it for us, another can check the list for a particular value, and another can pull out a particular item from the list based on its order in the list. "
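The three functions DuCharme describes have direct one-line counterparts in most general-purpose languages, which is a good way to see what each one does. In Python terms (XPath sequence positions are 1-based, hence the adjustments; the sample value is an SVG-polygon-style points list):

```python
# An NMTOKENS-style attribute value: items separated by whitespace.
points = "350,75 379,161 469,161"

tokens = points.split()        # roughly tokenize($points, '\s+')

def item_at(seq, pos):
    """Like XPath item-at(): 1-based positional lookup."""
    return seq[pos - 1]

def index_of(seq, value):
    """Like XPath index-of(): 1-based positions of every match."""
    return [i + 1 for i, v in enumerate(seq) if v == value]

second = item_at(tokens, 2)
where = index_of(tokens, "379,161")
```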

• [May 12, 2003] "Internationalizing the URI." By Kendall Grant Clark. From XML.com (May 07, 2003). ['It appears that development of the core XML specification itself is held up by the need to incorporate Internationalized Resource Identifiers, which are destined to supersede URIs. Kendall Clark also comments on the recent release of new XQuery specifications from the W3C.'] "... As Paul Grosso said at the end of April, the progress of the XML 1.1 and Namespaces 1.1 recommendations may be slowed, if not stopped altogether, because of issues raised by the future of URIs. That is to say, because the future, in the form of IRIs, isn't here yet. The W3C's Technical Architecture Group has been unable to reach consensus on its Issue 27, which asks whether, when, and how to integrate IRIs into the core recommendations of the Web. One of the problems is that IRIs aren't finished yet, and it's notoriously tricky to rely on a formal concept or standard which, in some strict sense, doesn't yet exist... an IRI is an Internationalized (hereafter, I18N) Resource Identifier, which is like a URI, only different. According to RFC 2396 ('Uniform Resource Identifiers (URI): Generic Syntax'), a URI 'is a compact string of characters for identifying an abstract or physical resource', and by 'string of characters', it means 'string of characters formed by a subset of US-ASCII'; or, as the latest IRI Internet Draft puts it, 'a sequence of characters chosen from a limited subset of the repertoire of US-ASCII characters'... Whatever the eventual outcome of the IRI design and standardization process, there's no perfectly obvious strategy for dealing with the time delay. One could argue that, given the relative lateness of the internationalization of URIs, the W3C and the IETF should hurry or should take their time. Rick Jelliffe suggests that the IETF in particular is playing catch up: 'IETF has had a problem coming to grips with non-ASCII characters in protocols. 
Internationalized domain names are at least five years too late. Ultimately, IETF has to go UTF-8 throughout'..."
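The crux of the IRI proposal is a deterministic mapping down to today's ASCII-only URIs: encode each non-ASCII character as UTF-8 and percent-escape the resulting bytes. Python's standard library can show the idea (the example path is invented):

```python
from urllib.parse import quote, unquote

def iri_path_to_uri_path(path):
    """Map an IRI path to a URI path: UTF-8 encode, then percent-escape."""
    return quote(path, safe="/")   # quote() uses UTF-8 by default

iri_path = "/résumé/Dürst"
uri_path = iri_path_to_uri_path(iri_path)   # pure US-ASCII result
round_trip = unquote(uri_path)              # and the mapping is reversible
```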

  • [May 12, 2003] "Implementation Strategies for WS-ReliableMessaging. How WS-ReliableMessaging Can Interact With Other Middleware Communications Systems." By Christopher Ferris, John Ibbotson, and Tony Storey (IBM). From IBM developerWorks, Web services. ['This paper discusses considerations for realizing robustness, integrity, and performance required for reliable messaging implementations using the recently released WS-ReliableMessaging specification and describes the role of message oriented middleware to address these.'] "The goal of reliable messaging in Web services is to allow applications to send and receive messages simply, reliably, and efficiently even in the face of application, platform, or network failure. The WS-ReliableMessaging specification, recently published by BEA, IBM, Microsoft, and Tibco defines a protocol and a set of mechanisms that allow developers of Web services to ensure that messages are delivered reliably between two endpoints, and supports a broad spectrum of delivery assurance and robustness characteristics. However, the WS-ReliableMessaging specification alone does not address a number of important considerations that must be taken into account to realize levels of robustness, integrity, and performance that are often required by applications. This paper discusses some of these considerations and suggests middleware as the most likely and often most suitable implementation strategy for the WS-ReliableMessaging protocol. ... We believe that WS-ReliableMessaging represents the next step in bringing Web services to the enterprise. It provides an open protocol allowing enterprise messaging infrastructures to communicate with business partners in a reliable and recoverable manner. 
The WS-ReliableMessaging protocol will complement message-oriented middleware offerings and we look forward to working together with others as we take the next steps in delivering on the promise of interoperable application integration through Web services technologies..." See: "New Web Services Specifications for Reliable Messaging and Addressing."
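The delivery assurances a reliable-messaging protocol offers all rest on the same basic machinery: number the messages in a sequence, acknowledge what arrived, and retransmit what was not acknowledged, with the receiver discarding duplicates by number. A schematic sketch of that loop in Python, not the WS-ReliableMessaging wire protocol itself (the lossy channel and payloads are invented):

```python
def deliver_exactly_once(messages, channel, receiver_seen, max_rounds=10):
    """Retransmit unacknowledged messages; receiver drops duplicates by number."""
    pending = dict(enumerate(messages, start=1))   # message number -> payload
    delivered = {}
    for _ in range(max_rounds):
        if not pending:
            break                                  # everything acknowledged
        acks = set()
        for num, payload in list(pending.items()):
            if channel(num):                       # True means the send got through
                if num not in receiver_seen:       # duplicate detection at receiver
                    receiver_seen.add(num)
                    delivered[num] = payload
                acks.add(num)                      # ack covers duplicates too
        for num in acks:
            del pending[num]                       # stop retransmitting acked messages
    return delivered

# A deterministic "lossy" channel: drops the first send of message 2.
drops = {2: 1}
def flaky_channel(num):
    if drops.get(num, 0) > 0:
        drops[num] -= 1
        return False
    return True

result = deliver_exactly_once(["a", "b", "c"], flaky_channel, set())
```

The paper's point is that production-grade versions of this loop, with persistent state that survives crashes, are exactly what message-oriented middleware already provides.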

  • [May 07, 2003] "Address Data Content Standard Public Review Draft." Subcommittee on Cultural and Demographic Data, [US] Federal Geographic Data Committee (FGDC). Version 2. April 17, 2003. 41 pages. "Addresses provide a means of locating people, structures and other spatial objects. More specifically, addresses are used to reference and uniquely identify particular points of interest, to access and deliver to specific locations, and as a means for positioning geographic data based on location. Most organizations maintain address lists or have databases or datasets that contain addresses. In many organizations, the primary purpose for creating and maintaining address lists and address information is mail delivery. Organizations often have detailed specifications about the structure of their address information without defining the content, i.e., the elements that constitute an address within their system. Knowledge of both structure and content is required to successfully share information in a digital environment. The purpose of this standard is to facilitate the exchange of address information. The Address Data Content Standard (the Standard) simplifies the address data exchange process by providing a method for documenting the content of address information... The objective of the Standard is to provide a method for documenting the content of address information. As a data usability standard, the Standard describes a way to express the content, applicability, data quality and accuracy of a dataset or data element. The Standard additionally codifies some commonly used discrete units of address information, referred to as descriptive elements. It provides standardized terminology and definitions to alleviate inconsistencies in the use of descriptive elements and to simplify the documentation process. The Standard establishes the requirements for documenting the content of addresses. It is applicable to addresses of entities having a spatial component. 
The Standard does not apply to addresses of entities lacking a spatial component and specifically excludes electronic addresses, such as e-mail addresses. The Standard is to be used only in the exchange of addresses. The Standard places no requirement on internal organization of use or structure of address data. However, the principles of the Standard can be extended to all addresses, including addresses maintained within an organization, even if they are not shared..." See: (1) details in the announcement "U.S. Federal Geographic Data Committee (FGDC) Draft Address Data Content Standard for Public Review"; (2) "Draft Proposal for a National Spatial Data Infrastructure Standards Project"; (3) general references in "Markup Languages for Names and Addresses." [source PDF, also in HTML and .DOC format]

  • [May 07, 2003] "Registering Web Services in an ebXML Registry." By Joseph M. Chiusano (Booz Allen Hamilton) and Farrukh Najmi (Sun Microsystems). Prepared for the OASIS ebXML Registry Technical Committee. Technical Note. Version 1.0. 12-March-2003, updated May 7, 2003. 16 pages. A posting from TC Chair Kathryn Breininger (Boeing Library Services) declares that this document is an approved Technical Note. "This document describes the current best practice for registering Web services in an ebXML Registry. It conforms to the following specifications: [1] OASIS/ebXML Registry Information Model (ebRIM) v3.0, release pending; [2] OASIS/ebXML Registry Services Specification (ebRS), v3.0, release pending... An ebXML Registry provides a stable store where information submitted by a Submitting Organization is made persistent. Such information is used to facilitate ebXML-based Business to Business (B2B) partnerships and transactions. Submitted content may be XML schema and documents, process descriptions, Web services, ebXML Core Components, context descriptions, UML models, information about parties and even software components. The purpose of this document is to provide a Best Practice for registering Web services and their associated entities in an ebXML Registry... The characteristics, capabilities, and requirements of a Web service can be described and published, thereby enabling automatic discovery and invocation of the service. One mechanism by which these descriptions can be published is in an ebXML Registry. The most common mechanism for describing Web services today is the Web Services Description Language, or WSDL; however, the Service description that is registered can be in any format such as OASIS/ebXML Collaboration-Protocol Profile and Agreement (CPP/A) or the emerging DAML-S. A Web service can be represented in an ebXML Registry through several Registry Information Model classes: Service, ServiceBinding, and SpecificationLink. 
Service instances are RegistryEntry instances that provide information on services (e.g. Web services)... ServiceBinding instances are RegistryObject instances that represent technical information on a specific way to access a specific interface offered by a Service instance. A Service has a collection of ServiceBindings... A SpecificationLink provides the linkage between a ServiceBinding and one of its technical specifications that describes how to use the service with that ServiceBinding. For example, a ServiceBinding may have a specific SpecificationLink instance that describes how to access the service using a technical specification (such as a WSDL document)..." XML Schemas and sample XML instances are provided for these classes. Extended scenarios are discussed in section 5; they include Versioning of Web Services, Associating a Web Service with an Organization, Associating a Web Service with an Access Control Policy, Registering a Service Description that is External to the Registry, Web Service Redirection, and Customizing Metadata Using Slots. General references in "Electronic Business XML Initiative (ebXML)." [source .DOC]
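The Service/ServiceBinding/SpecificationLink relationship described above can be sketched in code. This is an illustrative sketch only: the element and attribute names below approximate the ebRIM classes named in the Technical Note, not the normative ebRIM v3.0 schema, and the URNs and URLs are hypothetical.

```python
import xml.etree.ElementTree as ET

# Hypothetical registry entry: a Service owns a ServiceBinding, and a
# SpecificationLink ties that binding to the technical document (e.g., a
# WSDL file) that describes how to use the service via the binding.
service = ET.Element("Service", id="urn:example:po-service")
ET.SubElement(service, "Name").text = "PurchaseOrderService"

binding = ET.SubElement(service, "ServiceBinding",
                        accessURI="http://example.com/po/soap")

ET.SubElement(binding, "SpecificationLink",
              specificationObject="urn:example:po-service-wsdl")

xml_text = ET.tostring(service, encoding="unicode")
print(xml_text)
```

The nesting mirrors the prose: one Service, a collection of ServiceBindings, each with SpecificationLinks pointing at WSDL, CPP/A, or other description formats.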

  • [May 07, 2003] "W3C Cleans Up SOAP 1.2 Specification. Latest Version Awaits Final Review." By Scarlet Pruitt. In InfoWorld (May 07, 2003). "The World Wide Web Consortium (W3C) released the proposed recommendation for the SOAP (Simple Object Access Protocol) 1.2 specification Wednesday, saying that the protocol is cleaned up and in a strong position for final review. The group called the latest version of the XML (Extensible Markup Language) protocol for exchanging structured information in a decentralized environment, such as the Web, 'lightweight' and flexible. Over 400 issues have been resolved, including 150 from SOAP 1.1, according to the group, and the protocol has now been sent on to the W3C membership for final review, which closes June 7. SOAP 1.2 consists of the SOAP 1.2 Messaging Framework, SOAP 1.2 Adjuncts, and a Primer. The Messaging Framework provides a message construct, a processing model and an extensibility framework, whereas the Adjuncts defines a set of adjuncts such as rules for encoding SOAP messages. Additionally, the cleaned up protocol integrates core XML technologies, the group said, and works with W3C XML schemas... A spokeswoman for the group in France added that SOAP 1.2 is 'in good standing,' and that it has already been implemented for 7 projects..." See details and references in the news story "W3C Publishes SOAP Version 1.2 as a Proposed Recommendation." General references in "Simple Object Access Protocol (SOAP)."

  • [May 07, 2003] "W3C Brushes Up SOAP Standard." By Martin LaMonica. In CNET News.com (May 07, 2003). "Standards body the World Wide Web Consortium said Wednesday that it is close to finalizing an upgrade to an important Web services protocol called SOAP. SOAP (Simple Object Access Protocol) acts as a transport mechanism to send data between applications or from applications to people. SOAP, along with Extensible Markup Language (XML) and the Web Services Description Language (WSDL), is considered to be the foundation of Web services, a series of standards that makes it easier to share information between disparate systems. Software companies incorporate the latest Web services standards into their products to ensure that applications have an agreed-upon method for sending data within a company or between business partners. The World Wide Web Consortium (W3C) said it has a proposed recommendation for SOAP 1.2, which puts this version in prime position to become an official standard... Version 1.2 of SOAP includes enhancements designed to simplify Web services development with SOAP toolkits, the W3C said. SOAP 1.2 introduces a 'processing model' that allows a developer to establish rules on how SOAP messages are handled. It also has a number of XML-oriented enhancements that will make it easier to manipulate data formatted as XML documents, the W3C said... 'Starting today, developers who may have hesitated to pick up SOAP 1.2 should take a look,' Tim Berners-Lee, director of the W3C, said in a statement. The W3C stewards a number of critical Web services standards, including XML, SOAP and WSDL..." See details and references in the news story "W3C Publishes SOAP Version 1.2 as a Proposed Recommendation." General references in "Simple Object Access Protocol (SOAP)."

  • [May 06, 2003] "XML Key Management (XKMS 2.0) Requirements." Edited by Frederick Hirsch (Nokia) and Mike Just (Treasury Board of Canada Secretariat, TBS). W3C Note 05-May-2003. Version URL: http://www.w3.org/TR/2003/NOTE-xkms2-req-20030505 Latest version URL: http://www.w3.org/TR/xkms2-req. Previous version URL: http://www.w3.org/TR/2002/WD-xkms2-req-20020318. "This document lists the design principles, scope and requirements for XML Key Management specifications and trust server key management implementations. It includes requirements as they relate to the key management syntax, processing, security and coordination with other standards activities... XML-based public key management should be designed to meet two general goals. The first is to support a simple client's ability to make use of sophisticated key management functionality. This simple client is not concerned with the details of the infrastructure required to support the public key management but may choose to work with X.509 certificates if able to manage the details. The second goal is to provide public key management support to XML applications that is consistent with the XML architectural approach. In particular, it is a goal of XML key management to support the public key management requirements of XML Encryption, XML Digital Signature, and to be consistent with the Security Assertion Markup Language (SAML). This specification provides requirements for XML key management consistent with these goals, including requirements on the XML key management specification, server implementations and the protocol messages. XML key management services will primarily be of interest to clients that intend to communicate using XML-based protocols bound to SOAP. It may be that such clients will not have sufficient ASN.1 capabilities in which case the benefits of offloading the parsing of certificates and other traditional PKI structures (e.g., CRLs or OCSP responses) is clear.
Clients which possess such capabilities and have no preference to work with XML-based protocols may opt to use non-XML-based protocols defined by PKIX, for example..." See: (1) the news story of April 21, 2003: "Last Call Working Drafts for W3C XML Key Management Specifications (XKMS)"; (2) general references in "XML Key Management Specification (XKMS)."

  • [May 06, 2003] "Style Stylesheets to Extend XSLT, Part 1. Use XSLT as a Macro Preprocessor." By Joseph Kesselman (Advisory Scientist, IBM). From IBM developerWorks, XML zone. May 6, 2003. ['XSLT isn't just about styling documents for presentation. It's actually a very general-purpose document transformation processor. And as Joe demonstrates in this two-part series, stylesheets are themselves documents, so XSLT can be used as a portable preprocessor to automatically enhance the behavior of a stylesheet.'] Joe's note to the XSL list (xsl-list@lists.mulberrytech.com): "The first installment of my article on using XSLT stylesheets to annotate/enhance other XSLT stylesheets just went live on developerWorks. This part's largely motivation and context-setting, though it does introduce the basic tricks which are needed to generate new stylesheet behaviors. The second part (already written, just needs final polishing) will start with this limited prototype and polish it into a more robust and useful tool." From the introduction: "As one of the contributors to Apache's open-source Xalan processor, I've been impressed by the wide range of applications folks are finding for XSLT. Stylesheets have established themselves as a very general-purpose tool, not just for rendering XML documents for display, but for automatically generating new documents... XSLT does have the concept of extensions, which provide an architected way to enhance the stylesheet language. With the extensions, Xalan developers can provide some additional features without conflicting with the standard. But we really can't afford to build every requested extension directly into the processor; we'd wind up with a huge library of infrequently-used features. Xalan does let you write and plug in your own extensions. But extensions are usually limited to defining new stylesheet operations rather than altering existing ones, and usually require that someone write code in a traditional programming language. 
Future versions of XSLT may let you write extensions in the XSLT language. Also, user-written extensions aren't supported by all XSLT processors, and the details of writing, accessing, and invoking them vary, so this isn't a very portable solution. For example, an extension written for the Java-based Xalan-J processor can't be invoked from the C++ version, Xalan-C, nor vice versa... In this pair of articles, I'll show you another way to enhance XSLT stylesheets, which can do some things extensions can't and which will work in any XSLT processor: write a stylesheet that compiles custom features into other stylesheets! Essentially, we can leverage the fact that an XSLT stylesheet is itself an XML document and automatically apply a set of modifications to add or modify its behavior..." With source files.
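The core trick Kesselman describes is that a stylesheet is itself an XML document and can therefore be rewritten mechanically. His articles do the rewriting in XSLT itself; as a sketch of the same idea under different tooling, the fragment below uses Python's stdlib to inject a trace `xsl:message` into every template of a toy stylesheet, the kind of behavior enhancement the preprocessor approach enables.

```python
import xml.etree.ElementTree as ET

XSLT_NS = "http://www.w3.org/1999/XSL/Transform"
ET.register_namespace("xsl", XSLT_NS)

# A toy input stylesheet: because XSLT is XML, it can itself be transformed.
stylesheet = ET.fromstring(
    f'<xsl:stylesheet xmlns:xsl="{XSLT_NS}" version="1.0">'
    f'<xsl:template match="para"><p/></xsl:template>'
    f'</xsl:stylesheet>'
)

# Enhance every template with a trace message (the article performs this
# same rewrite with a second, "preprocessor" stylesheet).
for template in stylesheet.iter(f"{{{XSLT_NS}}}template"):
    msg = ET.Element(f"{{{XSLT_NS}}}message")
    msg.text = "matched: " + template.get("match", "?")
    template.insert(0, msg)

enhanced = ET.tostring(stylesheet, encoding="unicode")
print(enhanced)
```

The output is still a valid stylesheet, runnable in any XSLT processor, which is exactly the portability advantage the article claims over processor-specific extensions.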

  • [May 06, 2003] "Arbortext 5 Leapfrogs Competitors." By [Seybold Bulletin Staff]. In The Bulletin: Seybold News and Views On Electronic Publishing Volume 8, Number 31 (May 7, 2003). "At its user-group meeting this week, Arbortext previewed Arbortext 5, a new version of its XML editing and cross-media publishing software. The new version updates Arbortext's Epic editor and E3 publishing engine and introduces three new products, two of which break new ground in the XML publishing arena. In addition, the upgrade pushes the envelope in its support for the forthcoming version of Microsoft Word. Arbortext opted to pre-announce Arbortext 5 well before it was ready in order to discuss with customers how it relates to the forthcoming version of Microsoft Word. Most of Arbortext 5 won't be readily available to customers until early 2004. The three new products include a browser-based XML editor called Contribute; a style-sheet design tool called Styler; and a database application for managing links and dynamic document assembly... The most ambitious new component of Arbortext 5 is its Dynamic Content Assembly Manager (DCAM), an Oracle-based link manager that will operate as an optional add-on to E3. The purpose of DCAM is to manage and resolve cross-references within the context of complex documents assembled dynamically from small, XML-tagged components. Arbortext will provide an authoring interface for creating and managing the links, and it is working with some of its leading users to define the product's UI and functionality. A unique aspect of this product is its support for multiple linking syntaxes. Customers will be able to use SGML or XML (XLink, Ulink, etc.), and Arbortext says DCAM will even be able to apply transforms when links are resolved...
[The announcement] hints at the influence Microsoft's entrance into the true XML editing market will have: forcing the small cadre of vendors that once had this market to themselves to reposition and adapt their products, and, in Arbortext's case, forcing it to reveal its product plans well before its next release is ready for the field... As the leading supplier of professional XML authoring tools, Arbortext hopes to leapfrog its competitors in its next release. In offering schema support and a browser-based editor, it may be playing catch-up, but its Styler and DCAM products go beyond the competitive offerings on the market..." See also the announcement "Arbortext 5 Simplifies Multichannel Publishing for Dynamic Content. New Products Reduce Hidden Costs and Inefficiencies, Improve Content Creation and Publishing."

  • [May 06, 2003] "Corel Introduces Smart Graphics Studio." By [Seybold Bulletin Staff]. In The Bulletin: Seybold News and Views On Electronic Publishing Volume 8, Number 31 (May 7, 2003). "Corel announced last week the release of its Smart Graphics Studio, a suite of server and client applications for building dynamic vector graphics using the Scalable Vector Graphics (SVG) standard. The studio is an outgrowth of technology Corel purchased two years ago when it acquired Micrografx and is part of the company's attempt to introduce a line of enterprise-oriented products... The developer tools enable a Web developer with XML expertise to turn a base SVG graphic into a template for dynamic graphics driven by variable data. Studio is a developer product intended to help companies reduce the costs and time associated with building systems capable of delivering real-time information in graphical form. Among the target applications are graphical representations of complex machinery and mapping. Alternative vector-graphics formats, such as CGM, can be used to build dynamic graphics, but product manager Rob Williamson pointed out that CGM is more difficult to script, and can't be manipulated with standard XML-processing tools. For those who do have CGM graphics, Corel offers tools that perform real-time CGM-to-SVG conversion... The Smart Graphics Studio shows some of the potential that SVG holds for graphic presentation of complex data on the Web. In complex industrial applications, the fact that SVG is not yet a popular graphics format is far outweighed by its technical advantages, and Smart Graphics Studio exploits those advantages by providing a strong set of tools for building dynamic-graphic applications driven from XML-based data sources. Despite Adobe's (and Corel's) best efforts, SVG has not displaced Flash as the mainstream Web vector-graphics format..." See references in the news story "Corel Smart Graphics Studio Uses SVG for Graphical Applications Development."

  • [May 06, 2003] "XML Certification Quizzer. Validating Your XML Skills." By Joel Amoussou (XMLMentor.Net). In XML Journal Volume 4, Issue 5 (May 2003). "This column has two objectives. The first is to help you prepare for IBM Test 141 on XML and related technologies. The second is to help you learn XML by offering tips for designing and optimizing XML solutions... Why should you get XML certified? There are three simple answers to that question. First, exam preparation brings discipline and rigor to the process of learning XML and the very large collection of related specifications. Second, there is no doubt that being XML certified will make you more competitive in today's tight IT job market. For example, some government organizations make certification a prerequisite for obtaining grants and contracts for new projects. If you already have a job, certification could result in a promotion, salary increase, or better job security. Finally, the ultimate goal of learning is performance. Preparing for the exam and becoming certified will arm you with the knowledge and skills you need to architect and implement the right solution for your next XML project... The format of the [IBM Test 141] exam is multiple choice... The IBM XML certification is not product specific. It covers W3C specifications and other vendor-neutral specifications like SAX and UDDI. The knowledge and skills acquired during exam preparation can be used with a variety of products and in different XML project scenarios... The IBM exam contains approximately 57 questions. The objectives are grouped into five different categories: (1) Architecture: This objective represents about 19% of the exam questions. Some questions are scenario-based. Based on various business and technical considerations you will be asked to select the appropriate technologies for an XML-based solution. The choices will include XML Schema, DTDs, XSLT, CSS, DOM, SAX, Namespaces, and others. 
You should understand the roles of SOAP, WSDL, and UDDI in Web services architecture and how these three technologies complement each other and can be used together... (2) Information Modeling: This objective represents about 26% of the exam questions. You will be asked to translate modeling requirements into DTD and XML Schema constructs. A strong knowledge of the syntax of DTDs and XML Schemas is a requirement. This exam objective also covers the proper use of Namespaces, XLink, and XPointer. (3) XML Processing: This objective represents about 33% of the exam questions. You should familiarize yourself with DOM and SAX interfaces and the methods defined by those interfaces. You should be able to write correct XPath expressions and functions to select nodes in an XML document... (4) XML Rendering: This objective represents about 11% of the exam questions. It covers the use of CSS, XSLT, and XSL Formatting Objects specifications in rendering XML data... (5) Testing and Tuning: This objective represents about 11% of the exam questions. You will be tested on how to optimize XML solutions using various techniques like stylesheet and schema modularization, uniqueness and referential constraints, and other constructs..." [alt URL]
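The XML Processing objective above asks for correct XPath expressions that select nodes. As a small practice sketch (the catalog document and values here are hypothetical), Python's stdlib `ElementTree` supports a limited subset of XPath 1.0, which is enough to exercise path steps and attribute predicates of the kind the exam covers:

```python
import xml.etree.ElementTree as ET

# Hypothetical document for practicing node selection. ElementTree supports
# only a subset of XPath 1.0, so the expressions stay within that subset.
doc = ET.fromstring(
    "<catalog>"
    "<book genre='xml'><title>Learning XSLT</title></book>"
    "<book genre='db'><title>SQL Basics</title></book>"
    "</catalog>"
)

titles = [t.text for t in doc.findall("book/title")]  # location path: child steps
xml_books = doc.findall("book[@genre='xml']")         # predicate on an attribute
print(titles, len(xml_books))
```

A full XPath engine (as tested on the exam) adds axes, functions like `count()` and `position()`, and more general predicates beyond what this subset offers.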

  • [May 06, 2003] "Introduction to LBXML Operator." By Bing Li (Arizona State University). In XML Journal Volume 4, Issue 5 (May 2003). With source code. "LBXML Operator is a Java API-based XML tool that supports insertion, modification, searching, and removal on XML files... The major difference between LBXML Operator and existing Java-based XML tools is the approach to specifying manipulation conditions. Most existing tools define such conditions using XQuery, XPath, or their own script languages. However, LBXML Operator depicts conditions through APIs, i.e., a particular API handles a specific case to manipulate XML files. One major advantage to specifying manipulation conditions using XQuery, XPath, or script languages is that those descriptions have standard specifications and cover all situations of XML searching - basic query, range query, Max/Min, and so on. LBXML Operator searching includes basic query only. Furthermore, existing tools search XML based not only on tags but also on attributes. Searching with the latest version of LBXML Operator is based on tags only... LBXML Operator has two important features. First, LBXML Operator regards XML files as hierarchical structure-based plain text files. Each XML file consists of a series of similarly structured sub-XML files and has its own keys. This architecture looks like that of a relational database. Based on this understanding, LBXML Operator provides a collection of APIs that manipulate XML just like SQL statements for tables in databases. The second important feature is the goal of LBXML Operator -- to provide convenient XML tools for programmers that are compatible with a particular programming language, i.e., Java. Programmers can specify operation conditions through Java data structures, manipulate XML with Java APIs, and get results in Java data structures... Tag and value, updating tag, parent tag, key tag, and sibling tag -- all these are defined when processing XML files using LBXML Operator.
They describe conditions that determine which tag's value should be accessed in an XML file, working like navigators to help LBXML Operator find the appropriate tag and then operate on it. I've been using LBXML Operator for more than a year, and it has worked well. One big advantage is that the tool is based totally on APIs, so developers don't have to study particular XML query languages such as XQuery and other query scripts. Through these APIs, developers specify operation conditions according to XML structures. Furthermore, XML operations are regarded much like SQL statements, which makes the tool more acceptable since SQL statements have been widely used for some time. Some APIs are simple -- they don't provide parameters to figure out tags. Some methods are complex since a particular tag is accessed through rich parameters. In general, simple methods are used to operate many tags' values at once. Complex methods are fit for accessing the value of a particular tag. Although it's possible to design more complicated APIs, these methods can deal with most cases in XML operations..." [alt URL]

  • [May 06, 2003] "Database-Driven Charting Using XSLT and SVG. Controlling Finer Details of Data-Driven Scalable Vector Graphics." By Avinash Moharil and Rajesh Zade. In XML Journal Volume 4, Issue 5 (May 2003). With source code. "Scalable Vector Graphics is a markup language for describing two-dimensional graphics in XML. It is a language for presenting rich graphical content, and it allows you to create graphics from XML data using XSLT. Most of the modern devices are raster-oriented, so it comes down to where graphics are rasterized -- at the client or server level. SVG is rasterized at the client level, giving more flexibility for presenting graphics. SVG gives the user full control over the rasterization process. SVG documents define graphics as vector graphics rather than a bitmap so you can scale and zoom in and out without losing any detail or blurring the image. SVG uses a 'painters model' of rendering, meaning that paint is applied in successive operations to the output device such that each operation paints over some area of the output device. When the area overlaps a previously painted area, the new paint partially or completely obscures the old... With the help of XSLT, XML data can be transformed into SVG graphics. SVG drawings can be dynamic and interactive, which gives tremendous flexibility when building data-dependent graphics such as charts. The Document Object Model (DOM) for SVG, which includes the full XML DOM, allows for straightforward and efficient vector graphics animation. A set of event handlers such as OnMouseover and OnClick can be assigned to any SVG graphical object. Because of the compatibility of SVG with other technologies, features like scripting can be built on SVG elements and other XML elements from different namespaces simultaneously within the same Web page... SVG supports the ability to change vector graphics over time. Animation has been very popular in electronic media. 
SVG's animation elements were developed in collaboration with the W3C Synchronized Multimedia (SYMM) Working Group, developers of the Synchronized Multimedia Integration Language (SMIL) 1.0 Specification... SVG can also be used in XSL-FO (Formatting Objects) to use graphics effectively in print media... The effective use of XSLT and SVG can help build dynamic and scalable graphics for data presentation. These technologies can be used to present raw XML data in more visual forms like charts and graphs for Web and print media. Since XML documents can be easily generated based on enterprise data, SVG becomes a truly dynamic graphic based on enterprise data..." General references in "W3C Scalable Vector Graphics (SVG)." [alt URL]
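The data-to-graphics mapping the article describes (XML values in, `<rect>` elements out) is easy to see in miniature. The authors do this with an XSLT stylesheet; the sketch below shows the same idea with Python's stdlib instead, and the chart geometry (bar width, scale factor) is arbitrary illustration, not taken from the article.

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"

def bar_chart(values, bar_width=40, scale=2):
    """Map a list of numbers to an SVG bar chart, one <rect> per value."""
    height = max(values) * scale
    svg = ET.Element("svg", xmlns=SVG_NS,
                     width=str(bar_width * len(values)), height=str(height))
    for i, v in enumerate(values):
        # Anchor each bar at the bottom edge; taller values reach higher
        # (SVG's y axis grows downward, hence the subtraction).
        ET.SubElement(svg, "rect",
                      x=str(i * bar_width), y=str(height - v * scale),
                      width=str(bar_width - 4), height=str(v * scale),
                      fill="steelblue")
    return ET.tostring(svg, encoding="unicode")

chart = bar_chart([10, 25, 17])
print(chart)
```

Because the result is vector markup rather than a bitmap, it scales and zooms without blurring, and event handlers or SMIL animation elements could be attached to each `rect` exactly as the article describes.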

  • [May 06, 2003] "Statements on Demand Using XSL-FO. Online Presentation that Offers Better Quality." By Douglas Lovell (IBM T. J. Watson Research Center). In XML Journal Volume 4, Issue 5 (May 2003). "The problem with printing locally is that statements produced as HTML don't print well. The page breaks don't occur in the right place: table footers appear at the top of the page following where they should appear, and lines of text split from one page to the next. The edges of the reports get truncated on the printer such that all of the account detail on the right-hand side is lost. A much better format for presenting statements online, from the perspective of print, is PDF, Adobe's document format that has been widely adopted for typeset-quality presentation on the Web. Implementations of the XSL standards make it relatively simple to produce online account statements on demand as PDFs, with quality equaling that of statements that are printed and sent via mail. The benefit to vendors is that they may produce statements using the same technology for print and for online delivery. IBM now sells a product for producing AFP, a print format that drives the high-speed printers typically used to print statements for mass mailing, from FO. Using a single transform for both online and print presentation reduces development and maintenance costs and ensures consistency. Customers benefit by receiving PDF documents that they can store for later review or print with quality approaching that of statements they're accustomed to receiving in the mail. This article demonstrates the capability for generating statements online as PDF by implementing a real-world example. The example statement is an investor account summary from an IRA. As they used to say on 'Perry Mason,' the names have been changed to protect the innocent. XSL provides a potent vehicle for presenting statements and reports online and on demand. 
It enables separation of the report query logic from the presentation logic. It provides a unified mechanism for presenting the statements online and in print form. The online presentation provided by XSL-FO surpasses the quality enabled by HTML and equals that afforded by printed copies delivered in the mail, which should help customers feel more comfortable about forgoing mailed statements for those available online..." [alt URL]
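The page-control properties that make XSL-FO print better than HTML live directly in the formatting-object markup. The skeleton below is a minimal sketch only (page sizes, master name, and text are invented, and producing an actual PDF would require a formatter such as Apache FOP); it shows where a statement transform would pin page geometry and keep account-detail blocks from splitting across pages.

```python
import xml.etree.ElementTree as ET

FO = "http://www.w3.org/1999/XSL/Format"
ET.register_namespace("fo", FO)

# Minimal XSL-FO skeleton: explicit page geometry is exactly what
# HTML-based statement printing lacks.
root = ET.Element(f"{{{FO}}}root")
layout = ET.SubElement(root, f"{{{FO}}}layout-master-set")
master = ET.SubElement(layout, f"{{{FO}}}simple-page-master",
                       {"master-name": "statement",
                        "page-width": "8.5in", "page-height": "11in",
                        "margin": "0.75in"})
ET.SubElement(master, f"{{{FO}}}region-body")

seq = ET.SubElement(root, f"{{{FO}}}page-sequence",
                    {"master-reference": "statement"})
flow = ET.SubElement(seq, f"{{{FO}}}flow", {"flow-name": "xsl-region-body"})

# The keep property addresses the article's complaint about lines and
# table footers splitting from one page to the next.
block = ET.SubElement(flow, f"{{{FO}}}block",
                      {"keep-together.within-page": "always"})
block.text = "Account detail lines that must not split across pages."

fo_doc = ET.tostring(root, encoding="unicode")
print(fo_doc)
```

One transform emitting this markup can feed both the PDF renderer for online delivery and an AFP formatter for mass-mailed print, which is the cost argument the article makes.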

  • [May 06, 2003] "What's New in Windows Server 2003." By Shawn Wildermuth. From O'Reilly ONDotnet.com (May 05, 2003). "I wasn't convinced that Microsoft would ever get it done, but they've finally released Windows Server 2003. Sure, they did a ninth-inning renaming of the operating system from Windows .NET Server to Windows Server 2003. But there are still many features that the .NET developer should be salivating over. In this article, I will count down the top ten features that you should know about... Active Directory Application Mode: Though not officially part of Windows Server 2003, Active Directory Application Mode (ADAM) represents a better way for developers to use directory stores. In ADAM, you can install an Active Directory instance that is not tied to a domain controller. No longer are you required to intermingle the IT department's Active Directory instance with your application data... XML-Based IIS 6.0 Metabase: Gone are the days of having to use convoluted APIs to add virtual directories and sites to the IIS Metabase. The new IIS 6.0 Metabase is just an XML file. In addition, the new metabase can be set so that manual edits to the XML file are automatically reflected in the running instance of IIS 6.0..." See, from the MSDN Library: "What is the Metabase?" - "The metabase is a hierarchical store of configuration information and schema that are used to configure IIS. The metabase configuration and schema for IIS 4.0 and IIS 5.0 were stored in a binary file, which was not easily readable or editable. IIS 6.0 replaces the single binary file (MetaBase.bin), with plain text, Extensible Markup Language (XML)-formatted files named MetaBase.xml and MBSchema.xml... Only users who are members of the Administrators group can view and modify these files... In physical terms, the metabase is a combination of the MetaBase.xml and MBSchema.xml files and the in-memory metabase. 
The IIS configuration information is stored in the MetaBase.xml file, while the metabase schema is stored in the MBSchema.xml file. When IIS is started, these files are read by the storage layer and then written to the in-memory metabase through Admin Base Objects (ABO)... The metabase configuration file can be easily read and edited using common text editors, such as Microsoft Notepad. Editing the metabase configuration file directly is primarily for administrators who do not want to use scripts or code to administer IIS. Also, editing the metabase configuration file directly is faster than using IIS Manager when administering IIS over a slow network connection..."
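The practical upshot of a plain-XML metabase is that site configuration becomes scriptable with ordinary XML tools. The fragment below is a hypothetical sketch: the element and attribute names are loosely modeled on the described MetaBase.xml format, not copied from an actual IIS 6.0 installation, and a real edit would go through the file (and IIS's edit-while-running support) rather than an in-memory string.

```python
import xml.etree.ElementTree as ET

# Hypothetical metabase fragment (illustrative names, not the real schema).
metabase = ET.fromstring(
    '<configuration>'
    '<IIsWebServer Location="/LM/W3SVC/1" ServerComment="Default Web Site"/>'
    '</configuration>'
)

# With the binary MetaBase.bin of IIS 4.0/5.0 this change required the
# admin APIs; against XML it is a one-line attribute edit.
site = metabase.find("IIsWebServer[@Location='/LM/W3SVC/1']")
site.set("ServerComment", "Intranet Portal")

print(ET.tostring(metabase, encoding="unicode"))
```

The same edit could of course be made in Notepad, which is the administration scenario the MSDN excerpt highlights for slow network connections.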

  • [May 06, 2003] "Orchestration Promise Meets Reality. [Tools and Technologies.]" By Richard Adhikari. In Application Development Trends Volume 10, Number 5 (May, 2003), pages 46-50. Note: this article appears to have been written before the WSBPEL announcement. ['Business process flows face mounting roadblocks as standards fights escalate; but emerging tools and business demands could force a resolution.'] "Major vendors such as IBM are throwing their weight behind this concept of orchestrating services to create applications, but the majority of players are small, fast-moving software companies... The concept of orchestration applies to a meta-view of services, explained Ismael Ghalimi, co-founder and chief strategy officer at San Mateo, Calif.-based Intalio Inc. 'A service to manage and confirm purchase orders, for example, is where the notion of orchestration comes in, not a service to exchange the purchase orders,' he said. All types of services -- SOAP, WSDL, UDDI, CORBA, IIOP, RMI and RPC -- can be orchestrated... BPML was created by BPMI.org, the Business Process Management Initiative, two years ago to define a standard language for modeling business processes... BPML provides an abstracted execution model for collaborative and transactional business processes, and splits e-business processes into two parts: A common public interface and private implementations. As in cryptography, with its private and public keys, the public implementation is open in that it can be described as ebXML business processes, RosettaNet Partner Interface Processes, or pretty much anything else, independently of the private implementations, which are particular to each end-user corporation. For IBM there are three key standards in service choreography, said the firm's Van Overtveldt. One is BPEL4WS. 
The second is Web Service Transactions, or WS-TX, which lets users define an interaction between a Web service requester application and a Web service provider application as being transactional. This helps confirm that a request is executed. 'All Web services are message-based, but that doesn't help you to be sure that a request was executed, or that there was a coinciding request that caused both to fail,' Van Overtveldt said. 'The messaging protocol sits at Level 5 in the OSI 7-layer model, while the transactional aspect sits at the application level. So, when you have a programmer declare an application request as a transaction, you want confirmation at the application level, not at the message level.' WS-TX is conceptually an extension to the SOAP standard, he noted. The third standard is Web Services Coordination, or WS-C. 'If you have a Web service request or application that interacts with multiple providers, you need transactional integrity across that entire interaction,' said Van Overtveldt. The Web service request can coordinate those transactions through implementing WS-C. If a corporation's customer goes onto its Web site and changes their address, for example, the corporation's Web service can go across its CRM, ERP and other databases and update them automatically through a WS-C implementation..."

  • [May 06, 2003] "XML Group Cooks Up World Wide Database." By Paul Festa. In CNET News.com (May 06, 2003). "The leading Web standards group has released ten draft XML specifications intended to make the Web perform more like a database. The World Wide Web Consortium (W3C) published updates to a group of interlinking specifications that recommend uniform ways to retrieve information from XML (Extensible Markup Language) documents. The publications include two 'last-call' drafts and a brand new one. The updates were published last week in the run-up to the XML Europe 2003 Conference & Exposition, which opened Monday in London. The show brings together representatives from high-tech companies, from standards bodies and from users' groups interested in XML... The W3C recommends XML for structuring data, and the task of making XML behave more like a relational database falls to the organization's XML Query working group. 'How do you make traditional database languages like SQL (Structured Query Language) work with XML?' asked W3C spokeswoman Janet Daly in an interview with CNET News.com. 'The XML Query working group has been putting together a framework of documents that provide the technical answer to that question, so that XML documents can start to look like parts of one massive database.' Members of the W3C's XML Query working group include Microsoft, Oracle, IBM and DataDirect. The ten drafts address various related W3C projects, including XML Query (XQuery), which establishes how to search XML documents; XML Path Language (XPath), which shows how to label discrete parts of an XML document; and Extensible Stylesheet Language Transformations (XSLT), which allows for the translation of one kind of XML document into another, or into a non-XML document..." See additional references in the news item "W3C Releases Ten Working Drafts for XQuery, XSLT, and XPath."

  • [May 06, 2003] "Microsoft Office Word 2003 XML: Memo Styles Sample." By Frank C. Rice (Microsoft Corporation). From Microsoft MSDN Library (April 2003). ['Microsoft Office Word 2003 has added a number of features related to working with XML. These features are integrated in new task panes, menu options, and additions to the object model. In this article, we will examine some of these features from the perspective of the user interface and programmatically.'] "Microsoft Office Word 2003 includes several new XML-related features and improvements to existing features. For example, Word now supports a XML file format called Word XML that support round-tripping. Round-tripping means that the XML that is exported by Word can also be imported and understood by Word. Microsoft Office 2003 supports a new Word XML file format that can be easily read, transformed, and manipulated by XML tools. Word 2003 will also make it easy for you to attach a schema to a document and then assign elements from the schema to parts of the document... A number of new properties, methods and objects have also been added to the Word Visual Basic for Applications (VBA) object model to account for the new XML features and to help make Word the XML authoring platform of choice. The object model is derived from the existing widely accepted standard: the document object model (DOM), as implemented by Microsoft XML Core Services (MSXML) Version 5.0, so that existing DOM programmers will find the Word environment familiar and professionally designed. In this article, we will look at working with XML in Word 2003. We will do this by examining some common XML-related tasks using the menus, task panes, and other parts of the user interface. We will also look at performing many of these same tasks programmatically. We examine the features both from the perspective of the user interface and programmatically. 
We will use a plain memo template, XML file, and an XML Schema Document (XSD) file, all of which are available as a download from this article..." Covers: Attaching the Schema; Displaying the XML Structure Pane; Add an Element to the Document; The XML Structure Pane; Adding Additional Elements; Saving the XML Document; Saving as Word XML; Performing Schema Validation; Setting Up a Template; Protecting the Document; Opening the Saved XML Document; Adding XSL Transforms to the Schema Library. See: "Microsoft Office 11 and InfoPath [XDocs]."
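The round-tripping idea described above, where XML exported by an application can be re-imported with nothing lost, can be sketched with any XML toolchain. The minimal Python sketch below uses an invented memo vocabulary (not Word's actual WordprocessingML elements) to serialize a document and parse it back unchanged:

```python
import xml.etree.ElementTree as ET

# Build a small memo document (hypothetical elements, not Word's schema).
memo = ET.Element("memo")
ET.SubElement(memo, "to").text = "All Staff"
ET.SubElement(memo, "subject").text = "Quarterly review"
ET.SubElement(memo, "body").text = "Please submit reports by Friday."

# Export: serialize to an XML string, as an application exports its XML format.
exported = ET.tostring(memo, encoding="unicode")

# Import: parse the exported string back; no information is lost.
reimported = ET.fromstring(exported)
assert reimported.find("subject").text == "Quarterly review"
print(exported)
```

The same property is what makes the Word XML format usable by external XML tools: any conforming parser can consume what Word emits, transform it, and hand it back.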

  • [May 06, 2003] "Interoperability Strategy: Concepts, Challenges, and Recommendations." By Industry Advisory Council (IAC) Enterprise Architecture SIG. Concept Level White Paper Developed for the Federal Enterprise Architecture Program Management Office (FEA-PMO). April 03, 2003. 31 pages. "The purpose of this paper is to provide some background on the issues underlying the interoperability challenges, to shed some light on potential approaches to dealing with the problem, and to offer some specific recommendations, based on industry experience, that government at all levels can implement to rapidly address this challenge. The Industry Advisory Council (IAC) brings an industry perspective to the issues facing government and offers solutions that have succeeded in commercial settings that may be useful in addressing the issues facing government. Most of the underpinnings for interoperability come from the field of communication. Successful communication relies on three principles: Common Syntax (the structure of the message), Common Mechanism, and Common Semantics (the meaning of something). XML is becoming a common standard syntax, an alternative to ASN.1 and other syntaxes. The common mechanism for communicating between systems has become TCP, supporting higher-level communication mechanisms such as HTTP over TCP/IP. But having a common syntax and common mechanism is not enough. XML is not enough. Interoperability requires that the systems have a common definition of what is to be shared or communicated. We need an infrastructure to support semantic alignment. In Appendix E we show a Homeland Security example which leverages the work at OASIS (Organization for the Advancement of Structured Information Standards) on Content Assembly Mechanism (CAM), which uses XML templating to allow users to construct information exchanges... 
In short, much of the e-Government movement is the evolution from static, undocumented, rigid stovepipe systems to dynamic, metadata-driven and navigated agile business lines comprised of reusable components residing in a Service-Oriented Architecture (SOA). The SOA allows the redeploying of legacy applications as XML-encapsulated, trusted components and solutions, with native XML logic providing the encapsulation and componentization. The move to e-Government has improved at all government levels: Federal, State, and Local. Citizens will increase their usage of online interaction with the government in line with IT investment. This investment, particularly in the areas of interoperability, will result in significant taxpayer savings despite the challenge of changing work practices and political wrangling..." Note: (1) Appendix C: 'Business-Centric Methodology', and see (2) "OASIS Forms Business-Centric Methodology Technical Committee." [cache]

  • [May 06, 2003] "Business Process Execution Language for Web Services. [BPEL4WS.]" By Tony Andrews (Microsoft), Francisco Curbera (IBM), Hitesh Dholakia (Siebel Systems), Yaron Goland (BEA), Johannes Klein (Microsoft), Frank Leymann (IBM), Kevin Liu (SAP), Dieter Roller (IBM), Doug Smith (Siebel Systems), Satish Thatte (Microsoft - Editor), Ivana Trickovic (SAP), and Sanjiva Weerawarana (IBM). Version 1.1. 05-May-2003. 136 pages. APPENDIX D (pages 116-136) supplies three XML schemas: BPEL4WS Schema, Partner Link Type Schema, and Message Properties Schema. Copyright (c) 2002, 2003 BEA Systems, International Business Machines Corporation, Microsoft Corporation, SAP AG, and Siebel Systems. Updates the version 1.1 release of March 31, 2003. "This document defines a notation for specifying business process behavior based on Web Services. This notation is called Business Process Execution Language for Web Services (abbreviated to BPEL4WS in the rest of this document). Processes in BPEL4WS export and import functionality by using Web Service interfaces exclusively. Business processes can be described in two ways. Executable business processes model actual behavior of a participant in a business interaction. Business protocols, in contrast, use process descriptions that specify the mutually visible message exchange behavior of each of the parties involved in the protocol, without revealing their internal behavior. The process descriptions for business protocols are called abstract processes. BPEL4WS is meant to be used to model the behavior of both executable and abstract processes. BPEL4WS provides a language for the formal specification of business processes and business interaction protocols. By doing so, it extends the Web Services interaction model and enables it to support business transactions. 
BPEL4WS defines an interoperable integration model that should facilitate the expansion of automated process integration in both the intra-corporate and the business-to-business spaces..." Note: This version of the BPEL4WS specification was made available by OASIS WSBPEL Technical committee co-chairs Diane Jordan and John Evdemon in a 2003-05-06 posting by Diane Jordan to the wsbpel@lists.oasis-open.org mailing list [Subject: Information for the OASIS Open WS BPEL Technical Committee]. The version is identified as "a copy of the final version of the BPEL4WS V1.1 specification which the authors plan to submit at the first meeting [of the WSBPEL TC] on May 16, [2003]; please note that this version will not be active on the authors' websites until May 12, [2003]..." The document's copyright notice reads (in part): "BEA, IBM, Microsoft, SAP AG and Siebel Systems (collectively, the 'Authors') agree to grant you a royalty-free license, under reasonable, non-discriminatory terms and conditions, to patents that they deem necessary to implement the Business Process Execution Language for Web Services Specification." See: "OASIS Forms Web Services Business Process Execution Language TC (WSBPEL)."
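As a rough illustration of the notation, an executable BPEL4WS 1.1 process is an XML document that composes Web Service interfaces. The skeleton below is a sketch only (the service names, operation, and `lns` namespace are invented for the example; the default namespace is the BPEL4WS 1.1 namespace): it receives a message from a partner and replies on the same WSDL operation.

```xml
<process name="orderProcess"
         targetNamespace="http://example.com/order"
         xmlns="http://schemas.xmlsoap.org/ws/2003/03/business-process/"
         xmlns:lns="http://example.com/order/wsdl">
  <!-- The partner this process interacts with, typed by a WSDL partner link type. -->
  <partnerLinks>
    <partnerLink name="customer" partnerLinkType="lns:orderLT" myRole="orderService"/>
  </partnerLinks>
  <variables>
    <variable name="order" messageType="lns:orderMessage"/>
    <variable name="ack" messageType="lns:ackMessage"/>
  </variables>
  <sequence>
    <!-- createInstance="yes": this receive starts a new process instance. -->
    <receive partnerLink="customer" portType="lns:orderPT" operation="submitOrder"
             variable="order" createInstance="yes"/>
    <reply partnerLink="customer" portType="lns:orderPT" operation="submitOrder"
           variable="ack"/>
  </sequence>
</process>
```

An abstract process would use the same vocabulary but describe only the externally visible exchange, omitting internal data handling.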

  • [May 06, 2003] "Sun Joins Rivals' Web Services Program." By Gavin Clarke. From Computer Business Review Online (May 06, 2003). "... Sun told ComputerWire it is joining the Web Services Business Process Execution Language (WSBPEL) technical committee and plans to attend the group's kick-off meeting on May 16, 2003. The technical committee plans a standard based on Business Process Execution Language for Web Services (BPEL4WS)... Sun's decision comes as it also emerged that vendors driving a separate World Wide Web Consortium (W3C) web services choreography initiative, WS-Choreography, have extended an invitation to meet with representatives of the WSBPEL technical committee, organized by the Organization for the Advancement of Structured Information Standards (OASIS). Martin Chapman, WS-Choreography chairman, has invited IBM and Microsoft to attend his group's next meeting in June, to talk about ways for the two to work together. Chapman believes the groups can explore requirements in underlying specifications such as Web Services Description Language (WSDL) 1.2... Chapman's invitation comes as vendors driving WS-Choreography increasingly participate in the WSBPEL technical committee. Eight vendors participating in WS-Choreography have signed up to WSBPEL -- BEA Systems Inc, EDS Corp, Intalio Inc, Novell Inc, Oracle Corp, Tibco Software Inc, SAP AG, and SeeBeyond Technology Corp... Oracle, which has called for unity in web services choreography standards, said it remains committed to WS-Choreography. A company spokesperson said: 'Oracle feels it needs to participate in both groups to encourage co-operation between them.' Sun said it is also committed to WS-Choreography and wished to achieve alignment between the work of the W3C and OASIS..." See the news story: "OASIS Forms Web Services Business Process Execution Language TC (WSBPEL)."

  • [May 06, 2003] "OASIS Unit to Promote Business Process Specification." By Darryl K. Taft. In eWEEK (May 05, 2003). OASIS has "formally announced the formation of a technical committee to promote a standard for interoperable business processes and business process execution for Web services orchestration... BPEL4WS is an XML-based specification that deals with Web services-based business processes. IBM, Microsoft, BEA, Siebel and SAP will officially submit BPEL4WS Version 1.1 to OASIS under royalty-free terms on May 16, 2003 when the technical committee meets to consider submissions of related technology or standards efforts, OASIS officials said. Other companies that are involved with BPEL4WS and are members of the WSBPEL technical committee include Commerce One Operations Inc., E2open LLC, Electronic Data Systems Corp., Intalio Inc., NEC Corp., Novell Inc., SeeBeyond Technology Corp., Sybase Inc., TIBCO Software Inc., Vignette Corp. and Waveset Technologies Inc., among others. Derick Townsend, chief operating officer at OpenStorm Inc., said his Austin, Texas, company already has a Web service orchestration product, called ChoreoServer, that supports the BPEL4WS specification. 'We obviously believe that BPEL will become the de facto Web service orchestration standard,' Townsend said. Intalio, a San Mateo, Calif., provider of business process management systems, said it provides full support for BPEL4WS 1.0 and the Business Process Modeling Language specification. Company officials said native support for BPEL4WS 1.1 will be included in the next release of Intalio's core product..." See details in the news story: "OASIS Forms Web Services Business Process Execution Language TC (WSBPEL)."

  • [May 06, 2003] "Applications: XML, Web Services Pave the Way." By Shawn P. McCarthy. In Government Computer News Volume 22, Number 10 (May 05, 2003). "Application development is still in flux. But what is emerging is a picture of apps broken apart as developers migrate to building a series of remote calls and services -- applications that interact with multiple data resources throughout the government. Such services may live anywhere from your server to some place far outside your network. For the short term, this technology path leads to use of Extensible Markup Language for information sharing. One example is the Navy's use of XML as it designs its new Navy-Marine Corps Intranet portal. Longer term, the trend is toward ever broader use of Web services. Experienced code jockeys will tell you the Web services idea is nothing new. The main concept, a remote-procedure call within an application, has been around for nearly 20 years. There are also more efficient ways to place such requests than the text-based protocols used by Web services. So what's changed? For the first time, a majority of the tech community has agreed on a unified way to call functions remotely on multiple systems. It's a vendor-neutral approach that can reach across manifold operating systems, databases and apps. But the transition is complicated enough, and the standards -- beyond XML anyway -- are hazy enough, that the migration will not happen tomorrow or the next day. Talk to federal IT architects and you'll hear that most of them are experimenting a bit with XML and have Web services on their radar screens. But without a standardized government tag set for marking XML data, or a government-wide approach framing an official way to implement Web services, it's risky for most to jump into the fray too soon. Avoidance of that risk will delay e-government and widespread interagency IT sharing... 
'At the Bureau of Engraving and Printing we had success using an XML repository for enterprise architecture,' said Susan Warshaw, who recently moved from the bureau to become the IT enterprise architect for the Department of Health and Human Services. She said they used Metis, an enterprise architecture modeling tool, from Computas Inc. of Sammamish, Wash. Warshaw said one of the benefits of modeling the enterprise architecture is to know the extent of your enterprise and to establish communication with the application stakeholders as future plans are made... Ira Grossman, IT architect at the National Oceanic and Atmospheric Administration, said his organization is taking the same approach, outlining the business model of the enterprise first, and making decisions for the applications infrastructure based on that model. 'For most people, Web services are still on the horizon,' said Bill Wright, president and chief executive officer of Computas. His company is working with several agencies, including the Treasury and Commerce departments and some intelligence offices, to model their current enterprises before pressing ahead..."

  • [May 06, 2003] "XQuery Marks the Spot." By Jack Vaughan. In Application Development Trends (May 05, 2003). "XML has emerged in only five years as a startlingly powerful means of handling data. It has also been accompanied by a slew of 'X-centric' helper tools, APIs and standards such as XSLT, XPath and, of late, XQuery. Generally, XML must exist within the context of other software system languages, and so Java developers, .NET developers, Cobol developers, SQL developers and others have had to learn something about this markup language that grew out of SGML, a relatively obscure document-related language. With more developers encountering XML all the time, some will soon look at XQuery as an option for system building. According to the W3C group that is working to specify XQuery, the markup language will build on the idea that a query language using the structure of XML intelligently can express queries across all kinds of data. To cast some light on this technology, we spoke recently with Jonathan Robie, XML program manager at DataDirect Technologies as well as a member of the W3C's XML Query Working Group that is at work on XQuery. Robie is an editor of the XQuery specification and several related specifications. 'In many development environments, people have to work with relational data, XML and data found in objects. These have three very different data models -- three very different ways of representing and manipulating data,' said Robie. 'For XML and relational data, XQuery allows you to work in one common model, an XML-based model that can also be used with XML views of relational data.' Thus, the data architect on a team may soon begin to look at some type of XML model to handle diverse needs. XML is, among other things, hierarchical in structure, and some modelers may seek to exploit this and other attributes. Of course, some critics suggest one model may not turn out to be the answer. 
But some XML-centric apps may do well with an XML-centric model, Robie suggests. 'Most Web sites have some connection to a database. Many are using XML to transfer data from the database to the Web site. When you want to present hierarchical information to site users, you don't give them a series of tables and ask them to do joins in their mind,' Robie jokes. 'You create a hierarchy on screen as an outline or graphical representation that shows the hierarchy. All the relational databases can give you is a table,' he said..." See references and summaries in the news story "W3C Releases Ten Working Drafts for XQuery, XSLT, and XPath."
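Robie's table-versus-hierarchy point is easy to demonstrate: a relational join hands back flat rows, and an XML view of that data regroups them into the hierarchy a user actually wants to see. A small Python sketch (the table and element names are invented, and this illustrates the idea of an XML view of relational data rather than XQuery itself):

```python
import xml.etree.ElementTree as ET

# Flat, relational-style rows: (department, employee) pairs,
# as a join of two tables might return them.
rows = [
    ("Sales", "Alice"),
    ("Sales", "Bob"),
    ("Engineering", "Carol"),
]

# Build an XML view that restores the hierarchy the join flattened.
company = ET.Element("company")
depts = {}
for dept, emp in rows:
    if dept not in depts:
        depts[dept] = ET.SubElement(company, "department", name=dept)
    ET.SubElement(depts[dept], "employee").text = emp

# Two <department> elements, each nesting its <employee> children.
print(ET.tostring(company, encoding="unicode"))
```

XQuery works against exactly this kind of model, letting one query range over native XML and XML views of relational tables alike.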

  • [May 05, 2003] "RSS Pushes Old Concept With New Technology." By Richard Karpinski. In BtoB-Online (May 05, 2003). "Savvy marketers are beginning to tap a promising new one-to-one channel called RSS (Rich Site Summary or Really Simple Syndication, depending on whom you ask). RSS is beginning to draw the attention of the b-to-b marketing community. What is RSS? It is an XML-formatted version of Web site content to which readers 'subscribe,' i.e., opt-in. The subscription sends the content to a desktop, where it is viewed with a new class of software application called a news aggregator. The aggregator lets the reader assemble the content into a personalized, multisource news feed. Typically, an RSS 'feed' consists of a headline, brief summary and a link that, when clicked, brings the reader to the content -- say a news story or a press release -- in its full, HTML-formatted glory. In many ways, RSS fulfills the promise of push by providing content creators with an affordable way to syndicate their data to readers -- recipients who can be current customers or potential prospects. Thousands of RSS feeds are already available, many created from content derived from increasingly viewed Weblogs. But recently, corporate giants have begun distributing content via RSS as well, putting this new technology more squarely in the b-to-b marketing mainstream. Some of the big names now using RSS include Apple Computer Inc., Microsoft Corp., Cisco Systems Inc. and IBM Corp., as well as b-to-b publishers such as Fawcette Technical Publications, IDG and Ziff Davis Media Inc... Companies such as Apple and Cisco have begun to use RSS to deliver corporate press releases, and publications like eWeek and InfoWorld are offering RSS feeds of their content from their home pages..." See Content Syndication with RSS, by Ben Hammersley. General references in "RDF Site Summary (RSS)."
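The anatomy described above -- headline, brief summary, and a click-through link -- maps directly onto the item elements of an RSS 2.0 feed. This minimal Python sketch (the feed content is invented) does what a news aggregator does at its core:

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed: each item carries the headline, summary,
# and link described in the article.
feed = """<rss version="2.0"><channel>
  <title>Example News</title>
  <item>
    <title>Widget 2.0 Released</title>
    <description>A short summary of the announcement.</description>
    <link>http://example.com/widget-2.0</link>
  </item>
</channel></rss>"""

# An aggregator pulls out each item's headline, summary, and link,
# then merges items from many feeds into one personalized view.
for item in ET.fromstring(feed).iter("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```

Because the format is this simple, publishing a feed costs a content creator little more than emitting one extra XML file alongside the HTML pages.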

  • [May 05, 2003] "IRS Makes XML Schema Its Business." By Patricia Daukantas. In Government Computer News Volume 22, Number 9 (April 28, 2003), page 16. "Extensible Markup Language is making its way into the IRS through some of the most common tax forms filed by businesses. The tax agency's first XML product is the 94x XML, which refers to Form 940 (for federal unemployment taxes), Form 941 (for employers' quarterly federal taxes) and other forms in that series. The IRS released its final XML schema for those forms in January. To expand electronic filing, IRS officials realized that they had to go beyond the forms for individual taxpayers and develop e-filing for business customers, said Pat Clarke, a senior manager in the Development Services Division of the IRS' Electronic Tax Administration. 'We realized we were going to have to do something a bit different to accommodate some of the large business returns and to also give us the flexibility to add on the features that our customers were describing that they needed,' Clarke said. Switching to XML allows the IRS to adopt single data formats for many different types of business tax forms and schedules, said Chuck Piekarski, acting director of the Development Services Division. A larger goal for the Modernized E-File program is converting the 1120 (corporate income taxes) and 990 (tax-exempt organizations) families of tax forms to XML. January 2004 is the agency's target start date for releasing those XML schemas, which could eventually encompass almost 100 forms and schedules, Piekarski said. Because forms 940 and 941 are shorter than the 1120s, it was easier for the IRS to learn how to use XML with them, Clarke said. The Form 1120 filings of small corporations are only about 10 pages long, but for large companies the page count can run into the thousands. 
IRS programmers did the Form 94x development work in-house with help from the Tax Information Group for Electronic-Commerce Requirements Standardization, an interagency effort of the Federation of Tax Administrators... The use of XML brings many new opportunities for IRS and the tax preparation software industry, Piekarski said. Paid preparers handle 85 percent to 90 percent of business tax returns, and the companies that provide tax-prep applications will be able to update their programs simply by downloading new schemas from the agency's software developer partners page on www.irs.gov, Clarke said. Although the end users of such software may not notice XML directly, they may reap other benefits. For example, the IRS will be able to send back e-filing error messages in English instead of obscure numerical codes, Piekarski said... Modernized E-File will accept attachments in Adobe Portable Document Format, allowing future business taxpayers to scan and attach signed documents, such as real estate and art appraisals, to their e-filings... XML advocates within the IRS are even starting to look beyond U.S. borders. One IRS official last year helped launch an effort to devise an international XML standard for exchanging tax data. Gregory Carson, director of electronic tax administration modernization for the IRS' Wage and Investment Operating Division, organized the first meeting of the Tax XML Technical Committee last December at the XML 2002 conference in Baltimore. Carson served as the group's interim chairman until a Dutch tax official took over as permanent chairman..." See: (1) "IRS Modernized e-File Team Releases New XML Schemas for Corporate Income Tax"; (2) "US Internal Revenue Service and SGML/XML for Tax Filing"; (3) "XML Markup Languages for Tax Information."

  • [May 05, 2003] "Office 2003 is XML-Friendly." By Carlos A. Soto. In Government Computer News Volume 22, Number 9 (April 28, 2003), pages 1, 12. "With Office 2003, Microsoft Corp. finally gets it: Workers value ease of use and Internet-friendliness more than fancy bells and whistles. Unlike its earlier suites, in which Office applications differed slightly from app to app, Microsoft deliberately has given the new Word, Excel, PowerPoint, Outlook and Access an identical look for easier switching among programs. The GCN Lab recently tried out the first and second beta versions of the suite, which Microsoft plans to release this summer... Office 2003 also has more compatibility with SharePoint Team Services, in anticipation of increased project collaboration. For example, users can now share each other's calendars, and multiple calendars can remain open simultaneously. The most notable difference in the new suite's versions of Excel and Access is Extensible Markup Language compatibility. Users can make what Microsoft calls workbook models to swap data to and from XML data sources. Excel 2003 can export to or receive XML data from user-specified databases or schemas. Access 2002 placed strict limits on altering XML data when importing or exporting; files had to follow Microsoft's schema. But Microsoft has made these operations less finicky in Access 2003. Users can customize Extensible Stylesheet Language files when importing or exporting data... Although the new Office was impressive, I did find a bad bug in both beta versions. On the GCN Lab network, with one system running Office 2003 and the others running Office 2002, Outlook 2003 deleted incoming e-mail and even existing messages from the in-boxes of the Outlook 2002 machines. It seemed like virus behavior, but the lab's security software reported none. Furthermore, the deletions began every time I opened Outlook 2003 and ended as soon as I closed it..." See: "Microsoft Office 11 and InfoPath [XDocs]."

  • [May 05, 2003] "Intelligence and Military Look to XML to Share Data." By Wilson P. Dizard III. In Government Computer News Volume 22, Number 9 (April 28, 2003), page 8. "The [US] Defense Department and the government's intelligence agencies are turning to Extensible Markup Language tags, registries and schemas as a way to share data across disparate systems. The main focus is the Intelligence Community System for Information Sharing, a backbone network shared by intelligence agencies. 'We are not inventing a new system but taking what is there and building on it,' said John Brantley, director of the Intelink Management Office, at the recent Secure E-Biz conference in Arlington, VA. Intelligence community officials have said they consider metadata tags critical to the process of rights management, so that agencies can control how subsequent users access information as well as copy and distribute it. The government's 14 intelligence agencies have launched a Metadata Working Group to coordinate their efforts... The Defense Information Systems Agency [DISA], meanwhile, has created an extensive XML registry, said Alesia Jones-Harewood, a DISA program manager... DISA's data emporium includes a mature XML registry that has more than 16,000 data elements and 139 XML schema, she said. Data elements are individual items of data content, and schema describe how tags and elements can be combined..." See also: (1) "Feds Expect XML to Ease Info Exchanges."; (2) "DII Common Operating Environment (COE) XML Registry."

  • [May 05, 2003] "Plain Text and XML: A Conversation with Andy Hunt and Dave Thomas, Part X." By Bill Venners. In Artima.com Interviews (April 30, 2003). ['Pragmatic Programmers Andy Hunt and Dave Thomas talk with Bill Venners about the value of storing persistent data in plain text and the ways they feel XML is being misused.'] Dave Thomas: '...One of the reasons we advocate using plain text is so information doesn't get lost when the program goes away. Even though a program has gone away, you can still extract information from a plain text document. You may not be able to make the information look like the original program would, but you can get the information out. The process is made even easier if the format of the plain text file is self-describing, such that you have metadata inside the file that you can use to extract out the actual semantic meaning of the data in the file. XML is not a particularly good way to do this, but it's currently the plain text transmission medium du jour. Another reason for using plain text is it allows you to write individual chunks of code that cooperate with each other. One of the classic examples of this is the Unix toolset: a set of small sharp tools that you can join together. You join them by feeding the plain text output of one into the plain text input of the next... [But] XML sucks, because it's being used wrongly. It is being used by people who view it as being an encapsulation of semantics and data, and it's not. XML is purely a way of structuring files, and as such, really doesn't add much to the overall picture. XML came from a document preparation tradition. First there was GML, a document preparation system, then SGML, a document preparation system, then HTML, a document preparation system, and now XML. All were designed as ways humans could structure documents. Now we've gotten to the point where XML has become so obscure and so complex to write, that it can no longer be written by people. 
If you talk to people in Sun about their libraries that generate XML, they say humans cannot read this. It's not designed for human consumption. Yet we're carrying around all the baggage that's in there, because it's designed for humans to read. So XML is a remarkably inefficient encoding system. It's a remarkably difficult to use encoding system, considering what it does. And yet it's become the lingua franca for talking between applications, and that strikes me as crazy.' Andy Hunt: 'It's sort of become the worst of both worlds'..."
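Thomas's survivability argument is concrete: if a file is plain text and self-describing, a few lines of fresh code can recover the data long after the program that wrote it is gone. A sketch (the record format is invented):

```python
# The program that wrote this record no longer exists, but because the
# format is plain text and carries its own field names, the data survives.
record = """name: Ada Lovelace
role: analyst
joined: 1842-09-01"""

# One line of new code suffices to extract the semantics.
fields = dict(line.split(": ", 1) for line in record.splitlines())
print(fields["role"])  # analyst
```

The same property is what makes Unix pipelines work: each small tool emits plain text that the next tool can consume without sharing any code with its predecessor.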

  • [May 05, 2003] "Developing E-Business Interactions with JAXM." By Nikhil Patil (Cysive, Inc). From the O'Reilly Network ONJava.com (April 30, 2003). "JAXM, the Java API for XML Messaging, defines a lightweight Java API for composing, processing, sending, and receiving XML documents. The goal of JAXM is to provide a rich set of interfaces for document-style web services. Document-style web services enable the exchange of XML documents between two parties, as opposed to RPC-style web services, which expose software functions as web services. JAXM-based web services can be used effectively in the following scenarios: (1) When interactions between two parties are asynchronous in nature. JAXM provides support for both synchronous and asynchronous exchange of XML documents. (2) When two parties want to exchange data using XML documents that are bounded using well-defined XML schemas instead of invoking software functions (Java objects, C procedures, etc.) exposed as RPC-style web services... JAXM provides a good starting point for developing document-style web services that can promote the exchange of information between enterprises in a loosely coupled fashion through context-sensitive documents. Using JAXM, developers can build applications that are a combination of synchronous and asynchronous interactions..." See: (1) JAXM website; (2) JSR 67: Java APIs for XML Messaging 1.0; (3) local references in "Java XML-Based Messaging System (JAXM)"
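The document-style versus RPC-style distinction JAXM targets is about the shape of the payload rather than the transport. The sketch below (element names are invented, and it is written in Python for illustration rather than using the Java JAXM API) contrasts the two kinds of SOAP body content:

```python
import xml.etree.ElementTree as ET

# RPC-style payload: the XML names a software function and its arguments.
rpc = "<getQuote><symbol>IBM</symbol></getQuote>"

# Document-style payload (what JAXM targets): the XML is a business
# document in its own right, constrained by a schema, not a function call.
doc = """<purchaseOrder>
  <item sku="X-100" quantity="2"/>
  <shipTo>10 Main St</shipTo>
</purchaseOrder>"""

# Either way the receiver just parses XML; the difference is intent:
# invoking code versus exchanging a schema-governed document.
for payload in (rpc, doc):
    print(ET.fromstring(payload).tag)
```

Document-style exchange is what makes the loose coupling possible: the receiver commits only to understanding the document's schema, not to any particular function signature on the sender's side.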

  • [May 05, 2003] "Creating Richer Hyperlinks with JSP Custom Tags." By Amit Goel [WWW]. From the O'Reilly Network ONJava.com (April 30, 2003). "The linking mechanism currently supported by HTML (<a href="destination.html">) allows us to create hyperlinks that can have only one destination. To add context-related destinations to a hyperlink, we either need to supply them as links within parentheses (e.g., 'download PDF version,' 'download Word version,' etc.), or make the reader scroll to a different part of the page (e.g. a 'Resources' section), sometimes causing the reader to lose context. If we could somehow embed these additional destinations within a hyperlink, we could greatly enhance the usefulness of our pages. These embedded links could point to additional resources, allowing us to pack more content into the same amount of screen space while keeping related information close at hand... What is required [for multi-destination links] is a simple, declarative, tag-like syntax that is intuitive and easy to write, just like the two hypothetical constructs described above. This would allow the developer to focus on generating content instead of worrying about the programming. Before I describe my solution, I must mention that efforts are already underway to enable such sophisticated linking models for the Web. XML and its accompanying linking standards, such as XLink and XPointer -- both currently in W3C Recommendation status -- promise support for richer hyperlinking. But these advanced linking standards are still not completely supported by the popular browsers (Internet Explorer currently does not support XLink). Besides, XLink is quite complex, and can be overkill for the average web site... This article presents a simple approach to achieving multi-destination hyperlinks using a combination of JavaServer Pages (JSP) custom tags and XML... Multi-destination link functionality is new to most web users. 
Given the resistance among users to learn new concepts, training users on a totally new navigation metaphor can be a challenge. Therefore, the solution presented builds on an existing and familiar navigation model -- the menu. In a menu system, users can make several choices under one main topic. Depending on the choice they make, they will reach different destinations. Multi-destination links give users the choice of where they want to go, as opposed to single-destination links. This reduces the amount of time and traffic caused by searching through unrelated links. Also, storing the embedded link information separately from the HTML content document enables page creators to update these links independently of the HTML document. This opens up interesting possibilities like adaptive hypertexts -- hypertexts that change according to the user's profile, for example -- enabling the page creators to provide links that are more relevant to the user... Finally, the role of the [proposed] drop-down icon is particularly important. It visually indicates that the associated hyperlink contains additional hyperlinks, thus differentiating multi-destination links from regular single-destination links. In addition, the optional onmouseover capability eliminates the need to click on the image, reducing the need for one extra mouse click by the user... Due to the growing amount of information available on the Web, there is a great need to navigate the Web more easily. Enhancements like these cost very little and can add a lot of richness to the Web..." See also "XML Linking Language."
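
The article's separation of link data from page content can be sketched outside JSP as well. Assuming a hypothetical link-definition vocabulary (`<links>`/`<link>`/`<destination>`, not the article's actual format), a renderer might expand one multi-destination link into an HTML menu:

```python
import xml.etree.ElementTree as ET

# Hypothetical link-definition document, maintained separately from the HTML page
LINKS_XML = """
<links>
  <link id="xml-intro">
    <destination href="intro.html">Overview</destination>
    <destination href="intro.pdf">PDF version</destination>
    <destination href="intro.doc">Word version</destination>
  </link>
</links>
"""

def render_multilink(link_id: str, label: str, links_doc: str) -> str:
    """Render one multi-destination link as an HTML menu of destinations."""
    root = ET.fromstring(links_doc)
    link = root.find(f"link[@id='{link_id}']")
    items = "".join(
        f'<li><a href="{d.get("href")}">{d.text}</a></li>'
        for d in link.findall("destination")
    )
    return f'<span class="multilink">{label}<ul>{items}</ul></span>'

html = render_multilink("xml-intro", "XML Introduction", LINKS_XML)
```

Because the destinations live in their own document, editors can add or retarget them without touching the page markup, which is the adaptive-hypertext property the article highlights.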

  • [May 01, 2003] "DSDL Interoperability Framework." By Eric van der Vlist. From XML.com (April 30, 2003). ['While W3C XML Schema has had rapid uptake in many web services and data-oriented XML applications, another set of technologies, ISO DSDL, has been under development by the self-proclaimed "document-heads." The needs of document validation are different, and markedly more pluralist, than those of the users of W3C XML Schema, and an ISO Working Group has started work on a standard to address those needs. Our main feature this week, from XML.com's resident schema expert Eric van der Vlist, briefly introduces ISO DSDL as well as the Document Schema Definition Languages, and gives an overview of the work underway to create the DSDL Interoperability Framework. This framework is essentially the glue that will join together the multiple schema languages and transformation steps supported by DSDL.'] "... Why does DSDL need an Interoperability Framework? The quick answer is that the Interoperability Framework is the glue between all the pieces of DSDL. The chief design principle of DSDL is to split the issue of describing and validating documents into simpler issues: grammar based validation, rule based validation, content selection, datatypes, and so on. Different types of validations and transformations, defined inside or outside the DSDL project, often need to be associated with each other. The framework allows for the integration of these validations and transformations. Examples of such mixing include localization of numeric or date formats, prevalidation canonicalization to simplify the expression of a schema, independent content separated into different documents to be validated independently, aggregation of complex content into a single text node, separation of structured simple content into a set of elements, and so on... 
The two initial proposals (Schemachine and XVIF) were presented to the ISO DSDL working group in Baltimore (December 2002); although they were considered a valuable input, both were rejected, for different reasons: (1) Schemachine was considered 'too procedural': its focus is on defining pipes, that is, defining the algorithm used to validate a document, while it would be more appropriate to focus on defining the rules to meet to consider that a document is valid. (2) XVIF was considered too intrusive: to fully support XVIF, the semantics of the different schema languages must be extended and the schema validators need to be upgraded. An interoperability framework should work with existing schema languages and processors without requiring any update. To take these two requirements into account, a new proposal has been made which builds upon ideas from Schemachine and XVIF, but also from XSLT and Schematron. This proposal has been named 'XVIF/Outie', after a joke from Rick Jelliffe. A description of XVIF/Outie can be found online and a prototype implementation is available... Xvif/Outie or something derived from it should become an ISO DIS. I am also committed to develop Xvif and its micro pipes. When Outie becomes more stable, I will make sure to find a convergence between the two Xvif flavors..." See: (1) "Document Schema Definition Languages (DSDL)"; (2) "XML Schemas."

  • [May 01, 2003] "The Extensible Rule Markup Language." By Jae Kyu Lee (Professor of E-Commerce and Management Information Systems, Graduate School of Management at the Korea Advanced Institute of Science and Technology, Seoul) and Mye M. Sohn (Associate Research Fellow, Korea Institute for Defense Analyses, Seoul). In Communications of the ACM (CACM) Volume 46, Issue 5 (May 2003), pages 59-64. ISSN: 0001-0782. "XRML explicates the rules implicitly embedded in Web pages, enabling software agents to process the rules automatically... the implicit rules embedded in Web pages must be identifiable, interchangeable with structured-format rule-based systems, and accessible by applications. Thus XRML requires three components: (1) Rule Identification Markup Language (RIML). The meta-knowledge expressed in RIML should be able to identify the existence of implicit rules in the hypertexts on the Web; the formal association with the explicitly represented structured rules should also be identified. (2) Rule Structure Markup Language (RSML). The rules in knowledge-based systems (KBSs) must be represented in a formal structure so they can be processed with inference engines. The identified implicit rules are transformed into the formal rule structure of RSML. However, since there is no clue for linking two representations directly, we need an intermediate representation -- RSML. The rules represented in RSML should be transformed automatically into structured rules, while RSML needs the support of generation and maintenance from RIML in hypertext. (3) Rule Triggering Markup Language (RTML). RTML defines the conditions that trigger certain rules. RTML is embedded in KBSs, as well as in software agents (such as the forms in workflow management systems). ... Commercial-scale KBS/KMS convergence is inevitable because knowledge should be sharable by humans and software agents, and this is precisely the goal XRML researchers pursue today. 
The necessity of maintaining consistency between the hypertext knowledge in KMS and the structured rules in KBS is a key research issue in XRML development. XRML is thus a framework for integrating KBSs and KMSs. Generating RSML rules from hypertext can be regarded as a process of knowledge extraction; generating meta-knowledge of the relationships between hypertext and RSML rules (regarding which hypertext is related to which RSML rules and vice versa) is a process of meta-knowledge extraction. Knowledge acquisition from a variety of sources is generally very expensive, but knowledge extraction from existing hypertexts is less a social issue than a technical issue and thus can be cost-effective... XRML is thus a framework for integrating knowledge-based systems and knowledge-management systems... A sea of hypertext knowledge is already coded in markup language form on the Internet, so the cost of XRML applications is readily justified while yielding enormous benefit. XRML is not only the next step for KBSs and KMSs but also the direction rule markup language researchers worldwide are pursuing for the Semantic Web. During the past two years, we designed XRML version 1.0 and developed its prototype -- called Form/XRML -- an automated form processing system for disbursing research funds at the Korea Advanced Institute of Science and Technology in Seoul. Since XRML allows humans, as well as software agents, to use the rules implicitly embedded in Web pages, the potential for its application in knowledge management is promising. XRML can also contribute to the progress of Semantic Web platforms, making knowledge management and e-commerce more intelligent. Since many research groups and vendors are investigating these issues, we can expect to see XRML in commercial products within a few years. Meanwhile, mature XRML applications may change the way information and knowledge systems are designed and used..." 
See: (1) XRML website; (2) local reference collection "Extensible Rule Markup Language (XRML)"; (3) "Korean Research Institute Develops the Extensible Rule Markup Language (XRML)." See related markup vocabularies in "Markup Languages and Semantics." [sub. DOI link]

  • [May 01, 2003] "Web-based XML Editing with W3C XML Schema and XSLT." By Ali Mesbah. From XML.com (April 30, 2003). ['This feature focuses on schema technology, in particular, using W3C XML Schema documents to generate HTML forms-based user interfaces for XML document editing. Ali Mesbah presents the thinking behind the creation of "MetaXSL GUI," an XSLT stylesheet for creating forms from schemas.'] "The article describes a technique in which an XML instance document can be edited through an automatically created form-based GUI, based on the schema of the instance document. The whole cycle of GUI creation (using XSLT), editing, and updating (using XUpdate) XML instances is presented here; the concept is actually worked out for a web-based GUI, so a servlet could be used to receive and process the input from the user... The schema is transformed through a processor into a GUI. The processor makes an input field for every simple element that it encounters in the XSD document. A possible approach for this step was introduced in Transforming XML Schemas in which the XSD could be transformed into a form-based GUI using XSLT. After the schema is processed the user can fill in the data into the input fields and the data is sent to the servlet. The servlet will then use this incoming data to make an XML document, which would be valid against the original schema. The limitation of this approach is that it works only for the creation of a new XML instance document, as the processor can only transform the schema into a GUI. An existing XML instance cannot be edited, as the processor has no knowledge of transforming an XML instance document into a form-based GUI... There are different approaches possible for the way XSL-GUI can create a form-based GUI based on the instance document. The first approach would be using XForms, which is currently a W3C Candidate Recommendation. The XSL-GUI should then produce the right XForms XML document based on the XML instance document... 
A second approach would be Apache Cocoon XMLForm. The main difference between Cocoon XMLForm and W3C XForms is that XMLForm can be used with any browser for any client device as it does not require that the clients understand the XForms markup ... A possible third approach, presented in this article, is a combination of XPath, XUpdate, and W3C XML Schema validation. The reason we have chosen this approach is because it is not dependent on the browser, as browsers do not yet support XForms, nor on using Cocoon..."

  • [May 01, 2003] "RSS on the Client." By John E. Simpson. From XML.com (April 30, 2003). ['RSS, a simple format for syndicating web site metadata, continues in its widespread adoption. This week John Simpson reviews which client-side applications are available for viewing RSS files.'] "... RSS (an acronym for RDF Site Summary) is just another XML vocabulary. An RSS feed is simply an XML document conforming to the rules of that vocabulary and served up to some client program. For example, an RSS document has a root rss element, which has numerous subordinate elements, all of which may have various attributes. If you're interested in the details of the RSS vocabulary, an excellent introduction is Mark Pilgrim's XML.com feature of a few months ago; Part 1 is especially helpful for newcomers and Part 2 deals primarily with using Python to process RSS feeds... simply aiming your Web browser at an RSS feed is not the way to go. Browsers, even the late-model ones which tout their support for XML, can't handle everything delivered to them. They can handle 'XML' all right, using one-size-fits-all rules of thumb (like elements nested within other elements, and so on); it's the specific vocabularies, like RSS, which they don't know enough about to process in any meaningful way. So when you open an RSS document, your browser treats it the same way it does any other XML document: dumps the raw source to your screen. For now, the solution is to acquire a separate client program, called an RSS aggregator, RSS client, or RSS reader, to collect and process your feeds. RSS reading features are sometimes bundled into Usenet newsreader programs. While the underlying technology is not the same, from a user's perspective 'subscribing' to an RSS feed feels an awful lot like 'subscribing' to a newsgroup..." General references in "RDF Site Summary (RSS)."
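
The core of an RSS reader is small, because a feed is just an XML document. A minimal sketch over an RSS 2.0-style feed (real aggregators must also cope with the RDF-based RSS 1.0 dialect and other variants):

```python
import xml.etree.ElementTree as ET

FEED = """
<rss version="2.0">
  <channel>
    <title>Example Feed</title>
    <item><title>First post</title><link>http://example.org/1</link></item>
    <item><title>Second post</title><link>http://example.org/2</link></item>
  </channel>
</rss>
"""

def read_feed(xml_text: str):
    """Return (channel_title, [(item_title, item_link), ...]) from a feed."""
    root = ET.fromstring(xml_text)
    channel = root.find("channel")
    items = [(i.findtext("title"), i.findtext("link"))
             for i in channel.findall("item")]
    return channel.findtext("title"), items

title, items = read_feed(FEED)
```

This is exactly what a browser of the time would not do for you: interpret the vocabulary instead of dumping raw source.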

  • [May 01, 2003] "Public Key Cryptography Demystified." By Robert J. Brentrup. In Syllabus Magazine Volume 16, Number 10 (May 2003), pages 29-31, 41. "As the technology of computing has become more integrated into our daily lives, information security is becoming an increasing challenge. More and more confidential personal information, legal documents, commercial transactions, and sensitive data are being transmitted over campus networks and the Internet every day. At the same time, the network environment is becoming more hostile and vulnerable to attack. Public key technology has an important role to play in helping us protect our information and to be able to rely on the network to handle transactions of increasing value. Public key systems enable separate parties to conduct a trusted exchange of information even if they have never met or shared any secrets beforehand... PKI is the acronym for Public Key Infrastructure. The technology is called Public Key because unlike earlier forms of cryptography, it works with a pair of keys. One of the two keys may be used to encrypt information, which can only be decrypted with the other key. One key is made public and the other is kept secret. The secret key is usually called the private key. Since anyone may obtain the public key, users may initiate secure communications without having to previously share a secret through some other medium with their correspondent. The Infrastructure part of PKI is the underlying systems needed to issue keys and certificates and to publish the public information. Public Key Certificates: A public key needs to be associated with the name of its owner. 
This is done using a public key certificate, which is a data structure containing the owner's name, their public key and e-mail address, validity dates for the certificate, the location of revocation information, the location of the issuer's policies, and possibly other information, such as their affiliation with the certificate issuer (often an employer or institution). The certificate data structure is signed with the private key of the issuer so that a recipient can verify the identity of the signer and prove that data in the certificate has not been altered. Public key certificates are then published, often in an institutional LDAP directory, so that users of the PKI can locate the certificate for an individual with whom they wish to communicate securely..." General references: OASIS PKI Member Section.
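
The two-key property described above can be illustrated with textbook RSA arithmetic. This toy example uses tiny primes purely to show that what one key encrypts only the other key decrypts; real PKI uses keys of 1024 bits and up, padding schemes, and vetted libraries (the three-argument `pow` for the modular inverse needs Python 3.8+):

```python
# Toy RSA key pair -- illustration only, never use parameters this small.
p, q = 61, 53
n = p * q                  # public modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

def apply_key(value: int, exponent: int) -> int:
    """Encrypt or decrypt: the same modular exponentiation either way."""
    return pow(value, exponent, n)

message = 42
ciphertext = apply_key(message, e)   # anyone can do this with the public key
recovered = apply_key(ciphertext, d) # only the private-key holder can undo it
```

Running the private key first and the public key second gives a signature instead of an encryption, which is how the certificate signing described next works.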

  • [May 01, 2003] "Is XQuery an Omni-Tool?" By Uche Ogbuji. In Application Development Trends (May 01, 2003). "Most builders would scoff at the idea of replacing all of their specialized implements with an omni-tool, but is XML a different story? After all, with XQuery approaching recommendation status, one could argue that XML is about to get its very own omni-tool. XQuery can be a simple XML node access language, largely XPath 1.0 with tweaks and more core functions. It offers SQL-like primitives to process large XML data sets, and is a strongly typed system that offers static and dynamic typing (controversial features I've noted in the past). It also has primitives for generating output XML -- and the lack of separation of input and output facilities is questionable. XQuery does not yet allow XML database updating, but update language proposals are under consideration, and XQuery should soon add them. XQuery is a very important development for several reasons. For one, it comes with a very sophisticated formal model and semantics developed by some of the finest minds in the business. This may seem to be the least-functional attachment to the tool, but it is actually like the oil that keeps the motor from seizing. Given the formal model, a lot of questions about the effectiveness and operation of queries can be answered deterministically. This is important for consistent implementations and advanced techniques. The data model of XQuery is designed to be friendly to optimizers, which should allow for fast implementations. For users of W3C XML Schema Language, XQuery provides a very rich interface to the Post Schema Validation Infoset. XQuery syntax builds on the XPath expression language and adds a few SQL-like primitives. There is also an experimental XML-based syntax for the language, but it has been put aside for the moment. But in covering so many bases, XQuery attempts to set an overall standard for an XML processing model and mechanism prematurely. 
With its sprawling and ambitious requirements/use cases, XQuery claims territory in almost every task typical for developers using XML... Despite this, XQuery offers a formalism for thinking about XML processing that can be selectively adopted by specialized tools. The danger is that too many will come to regard XQuery as the heart of XML processing, obscuring superior solutions for each case..." See: (1) W3C XML Query working group; (2) general references in "XML and Query Languages."
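
Since XQuery builds on the XPath expression language, part of its flavor can be shown with Python's `xml.etree.ElementTree`, which implements a small XPath 1.0 subset. This is not XQuery, only the node-access layer it extends, with a FLWOR-like filter written as a comprehension:

```python
import xml.etree.ElementTree as ET

DOC = """
<catalog>
  <book price="12"><title>XML Basics</title></book>
  <book price="45"><title>XQuery in Depth</title></book>
</catalog>
"""

root = ET.fromstring(DOC)

# XPath-style node access: every title of every book in the catalog
titles = [t.text for t in root.findall("book/title")]

# Roughly what an XQuery FLWOR expression adds on top of node access:
# for $b in /catalog/book where $b/@price < 20 return $b/title
cheap = [b.findtext("title") for b in root.findall("book")
         if int(b.get("price")) < 20]
```

The SQL-like primitives the article mentions (sorting, joining, constructing output elements) are exactly what XQuery layers over this kind of path selection.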

  • [May 01, 2003] "XNS Addressing Specification v1.1." Submission by XNSORG of the XNS Addressing Specification v1.1 to the OASIS Extensible Resource Identifier Technical Committee. Document posted 2003-05-01 by Drummond Reed to the OASIS Extensible Resource Identifier TC document repository. Edited by Dave McAlpin (Epok Inc.) and Drummond Reed (OneName Corporation). March 26, 2003. 21 pages. Contributors: Mike Lindelsee (Visa International), Gabe Wachob (Visa International), and Loren West (Epok Inc.) "W3C XPath 1.0 Recommendation establishes a standard syntax for addressing the nodes of a structured XML document. Since XNS Addressing provides addresses for a network of linked XML documents, it has a similar need for a standardized syntax. However unlike XPath, which was designed primarily for programmatic use (and includes many additional functions for querying data sets within an XML document), XNS addressing must fulfill requirements for machine efficiency, human usability, and identity persistence. For a complete discussion of those requirements, please see the XNSORG white paper 'Name Service to Identity Service: How XNS builds on the DNS Model'. This specification provides the normative rules for XNS address validity. It includes four rule sets: (1) The EBNF definition of XNS addressing syntax; (2) The EBNF definition for URI encoding of XNS addresses and XNS service invocations; (3) XNS ID normalization rules; (4) XNS Name normalization rules... Three of the most fundamental requirements of XNS addressing are the ability to: [1] Provide an abstraction layer capable of representing the identity of any network actor or entity -- machine, network location, application, user, business, taxonomy category, etc. [2] Enable this identity to persist for the lifetime of the resource it represents, and [3] Enable this identity abstraction layer to be federated across any number of communities for fully decentralized, delegated identity management. 
To meet these requirements XNS addressing follows the architectural principle of semantic abstraction -- separating non-persistent semantic identifiers (names) from persistent abstract identifiers (IDs). In most computer naming systems, a name is resolved directly to the physical location of a resource -- a file on a disk, a host machine on a network, a record in a database. In XNS addressing a name is normally resolved to an XNS ID, which in turn resolves to the network location of the identity document or a node within it. This network location is expressed as a Uniform Resource Identifier (URI) [4]. Since URIs do not require persistence of the address, an XNS ID meets the higher persistence requirements of a Uniform Resource Name (URN). Since the address of an identity may use a name, an ID, or both, XNS addressing supports all three concepts: IDs are persistent addressing values intended primarily for machine use; Names are non-persistent addressing values intended primarily for human use; Addresses are a composite addressing type that can consist of either an XNS ID or an XNS name or a combination of both. In the latter case the XNS ID is authoritative and the XNS name always serves as a human-readable comment..." [source .DOC format]
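
The name/ID/location indirection can be sketched as two lookup tables. All identifier values below are invented for illustration and do not follow real XNS syntax: a non-persistent name resolves to a persistent ID, and the ID resolves to the current, changeable URI:

```python
# Toy two-step resolution: name -> persistent ID -> current network location.
names = {"=drummond": "xns:1234-5678"}                     # hypothetical name
locations = {"xns:1234-5678": "http://example.org/identity/1234-5678"}

def resolve(address: str) -> str:
    """Resolve either a name or an ID to a network location (URI)."""
    xns_id = names.get(address, address)  # names map to IDs; IDs pass through
    return locations[xns_id]
```

Relocating the identity document means updating only `locations`; every stored name and ID keeps resolving, which is the persistence property the specification requires of IDs but not of names.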

  • [May 01, 2003] "The (Data) Medium is the Message." By Simon St. Laurent. From O'Reilly Developer Weblogs (April 30, 2003). "...While the developer's view of information politics is usually more local and better understood than mass media politics and the FCC, the differences between media persist. Relational databases are all about linked tables and structured atomic data and the possibilities that opens. Object stores and serializations are generally about flexible hierarchies, with relatively direct linkages to particular processing expectations. XML is about labeled hierarchically-structured containers, with a general separation between content and processing expectations... RDF is about directed graphs, keeping away specific processing expectations regarding their content but with a well-defined general model for manipulating the graphs. Plain text, of course, offers a sequence which may or may not contain identifiable repetitive structures. Perhaps the most important thing to recognize about all of these forms is that they are different. There are, of course, cases where relationally-modeled information can be represented as objects, XML or RDF, and there are cases where object stores or RDF triple stores use relational databases as back-ends, but these all involve tinkering and compromises. There is no general way for an XML document to serve as an efficient foundation for relational queries, nor is RDF much good at modeling XML's mixed content. While it may be convenient in some cases to serialize objects to XML, it requires lots of metadata if the object needs to be reconstituted in the same form, and the XML produced by serializations often looks alien to people who actually care to work with XML itself. At the same time, these different approaches do particular tasks very well. The relational model allows the efficient processing of vast quantities of information stored in unordered rows in related tables. 
Object stores let developers put objects somewhere without having to spend time creating pathways between their existing model and a different model. XML comes from the document world, and most of its functionality is aimed at creating labeled structured content that both humans and computers can process. RDF is about assertions and how they combine to make statements, and while humans frequently have a hard time making sense of URI chains, some programmers find they solve classification and other problems easily. XML currently carries the unfortunate burden of being the medium the other forms think they understand. Object serializations in particular produce an enormous amount of lousy markup. Technically, it's XML, but its creators plainly cared about their program and not much about XML, or how anyone else might want to process the XML they create... RDF creates similar problems for XML, as lately there's been a flurry of proposals for 'fixing' XML with RDF tools and structures..." On XML normalization, see the following reference and the XML-DEV list thread.

  • [May 01, 2003] "A Normal Form for XML Documents." By Li-Yan Yuan (Professor, Department of Computing Science, University of Alberta, Canada). 40 pages. Reading reference for the course "Modern Database Management Systems" (Winter Term, 2003); "this course covers research topics in advanced database management systems as well as emerging database technologies, with emphasis on XML data and XML support for object-oriented database management systems... Given a DTD and a set F of FDs, (D, F) is in XML normal form (XNF) if and only if for every nontrivial FD of the form S --> p.@l or S --> p.S, it is the case that S --> p is implied by F. The presentation references the paper "A Normal Form for XML Documents", by M. Arenas and L. Libkin, published in the Proceedings of ACM PODS02. General references in "XML and Databases." [cache]
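
The quoted condition can be typeset compactly. Writing $(D,F)^{+}$ for the set of functional dependencies implied by $(D,F)$ (an assumption of standard FD notation, matching "implied by F" above):

```latex
(D, F) \in \mathrm{XNF}
\;\Longleftrightarrow\;
\text{for every nontrivial } S \to p.@l \ \text{or}\ S \to p.S \ \text{in } (D,F)^{+},
\quad S \to p \in (D,F)^{+}
```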

  • [May 01, 2003] "An Information-Theoretic Approach to Normal Forms for Relational and XML Data." By Marcelo Arenas and Leonid Libkin (University of Toronto). Paper for presentation at the 22nd ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems (PODS 2003), San Diego, USA, [June 9-12] 2003. "Normalization as a way of producing good database designs is a well understood topic. However, the same problem of distinguishing well designed databases from poorly designed ones arises in other data models, in particular, XML. While in the relational world the criteria for being well designed are usually very intuitive and clear to state, they become more obscure when one moves to more complex data models. Our goal is to provide a set of tools for testing when a condition on a database design, specified by a normal form, corresponds to a good design. We use techniques of information theory, and define a measure of information content of elements in a database with respect to a set of constraints. We first test this measure in the relational context, providing information theoretic justification for familiar normal forms such as BCNF, 4NF, PJ/NF, 5NFR, DK/NF. We then show that the same measure applies in the XML context, which gives us a characterization of a recently introduced XML normal form called XNF. Finally, we look at information theoretic criteria for justifying normalization algorithms. Several [other] papers attempted a more formal evaluation of normal forms, by relating it to the elimination of update anomalies. Another criterion is the existence of algorithms that produce good designs: for example, we know that every database scheme can be losslessly decomposed into one in BCNF, but some constraints may be lost along the way... Our [research] goal was to find criteria for good data design, based on the intrinsic properties of a data model rather than tools built on top of it, such as query and update languages. 
We were motivated by the justification of normal forms for XML, where usual criteria based on update anomalies or existence of lossless decompositions are not applicable until we have standard and universally acceptable query and update languages. We proposed to use techniques from information theory, and measure the information content of elements in a database with respect to a set of constraints. We tested this approach in the relational case and showed that it works: that is, it characterizes the familiar normal forms such as BCNF and 4NF as precisely those corresponding to good designs, and justifies others, more complicated ones, involving join dependencies. We then showed that the approach straightforwardly extends to the XML setting, and for the case of constraints given by functional dependencies, equates the normal form XNF of ["A Normal Form for XML Documents", by M. Arenas and L. Libkin, published in the Proceedings of ACM PODS02] with good designs. In general, the approach is very robust: although we do not show it here due to space limitations, it can be easily adapted to the nested relational model, where it justifies a normal form NNF..." General references in "XML and Databases." [cache]

  • [May 01, 2003] "JXTA and Peer-to-Peer Networks." By Sing Li. In Dr. Dobb's Journal #349 Volume 28, Issue 6 (June 2003), pages 30-34. "JXTA is an open-source development project for creating a P2P substrate that's applicable to any hardware or software platform. In this article I examine the difficulty in creating a generic presence solution, then present a workable solution for a P2P chat application on a JXTA P2P network. The presence solution I present is built on TINI, a Java-based embedded controller from Maxim/Dallas Semiconductors..." See also the listings and source code. Compare "P2P JXTA: Not Your Father's Client/Server," by Daniel Brookshier, Sing Li, and Brendon J. Wilson.

  • [May 01, 2003] "An Embeddable Lightweight XML-RPC Server." By M. Tim Jones. In Dr. Dobb's Journal #349 Volume 28, Issue 6 (June 2003), pages 60-67. Embedded Systems. "In this article I'll examine the XML-RPC protocol for providing network-based RPCs, present a lightweight server for embedded designs, and take a look at two XML-RPC clients written in C and Python that communicate with the lightweight XML-RPC server... XML-RPC is an interesting protocol for the development of distributed systems, which permits dynamically linking applications developed in a variety of languages on a variety of different processor architectures. XML-RPC is also very easy to debug since all of the messages are immediately readable by the developer..." See also the listings and source code. General references in "XML-RPC."
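
The article's server is written in C for embedded targets, but the protocol's simplicity is easy to demonstrate because Python's standard library ships both halves. A sketch that serves one registered procedure over HTTP (port chosen by the OS) and invokes it from a client:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Server side: register a procedure under the name "add"
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
host, port = server.server_address

# Serve exactly one request in the background; a real server
# would call serve_forever() instead.
threading.Thread(target=server.handle_request, daemon=True).start()

# Client side: the call is marshalled to XML, POSTed, and unmarshalled
proxy = ServerProxy(f"http://{host}:{port}")
result = proxy.add(2, 3)
```

The same debuggability the article praises applies here: capturing the HTTP body shows a human-readable `<methodCall>` document rather than a binary RPC encoding.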

  • [May 01, 2003] "Tip: Make Your CGI Scripts Available via XML-RPC. Providing a Programmatic Interface to Web Services." By David Mertz, Ph.D. (Interfacer, Gnosis Software, Inc). From IBM developerWorks, XML zone. ['For a large class of CGI scripts, it is both easy and useful to provide an alternate XML-RPC interface to the same calculation or lookup. If you do this, other developers can quickly utilize the information you provide within their own larger applications. This tip shows you how.'] "Many CGI scripts are, at their heart, just a form of remote procedure call. A user specifies some information, perhaps in an HTML form, and your Web server returns a formatted page that contains an answer to their inquiry. The data on this return page is surrounded by some HTML markup, but basically it is the data that is of interest. Examples of data-oriented CGI interfaces are search engines, stock price checks, weather reports, personal information lookup, catalog inventory, and so on. A Web browser is a fine interface for human eyes, but a returned HTML page is not an optimal format for integration within custom applications. What programmers often do to utilize the data that comes from CGI queries is screen-scraping of returned pages -- that is, they look for identifiable markup and contents, and pull data elements from the text. But screen-scraping is error-prone; page layout might change over time or might be dependent on the specific results. A more formal API is better for programmatic access to your CGI functionality. XML-RPC is specifically designed to enable an application access to queryable results over an HTTP channel. Its sibling, SOAP, can do a similar job, but the XML format of the SOAP is more complicated than is needed for most purposes. An ideal system is one where people can make queries in a Web browser, while custom applications can make the same queries using XML-RPC. The underlying server can do almost exactly the same thing in either case... 
There is a difference in the way a CGI script runs and the way this XML-RPC server runs. The XML-RPC server is its own process (and uses its own port). CGI scripts, on the other hand, are automatically generated by a general HTTP server. But both still travel over HTTP (or HTTPS) layers, so any issues with firewalls, statefulness, and the like remain identical. Moreover, some general-purpose HTTP servers support XML-RPC internally. But if, like me, you do not control the configuration of your Web host, it is easier to write a stand-alone XML-RPC server..." General references in "XML-RPC."
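
The wire format that replaces screen-scraping is small enough to inspect directly. Using Python's `xmlrpc.client` marshalling functions (the `stock_price` method name and its argument are invented for illustration, echoing the tip's stock-price-check example):

```python
import xmlrpc.client

# What the client puts on the wire: a typed XML method call, in place of
# the name=value pairs a CGI form would carry.
request = xmlrpc.client.dumps(("IBM",), methodname="stock_price")

# What the server does with it: recover the method name and typed parameters.
params, method = xmlrpc.client.loads(request)
```

A server shared between browser and XML-RPC clients, as the tip suggests, would route `method` and `params` to the same lookup function that renders the HTML page.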

Earlier XML Articles


Document URI: http://xml.coverpages.org/xmlPapers200305.html  —  Legal stuff
Robin Cover, Editor: robin@oasis-open.org