Attendees: (for LCSC only) Tim McGrath (chair), Ray Seddigh, Bill Meadows, Leo Obrst, Garrett Minikowa, Stig Korsgaard, Lisa Seaburg, Sue Probert, Jean Luc, Mike Adcock, Frank Thome, Peter Yim, Marion Royal, Michael Roytman, Bob Glushko, Alan Stitzer, Chris Doyle, Peter Benson, Sandeep Singal, Lisa Carnahan, John Barkley.
The day started with the morning plenary (see Plenary notes); the group then got together to set up the work plan for the week.
We had an Ontology Tutorial at the lunch break. This was very interesting; the key question to come out of it is: are there tools for us to use to bring these ideas into our work? The discussion on Ontology is very abstract. Our general feeling seemed to be that 1) it is important and 2) it could slow us down without tools. Peter and Leo are to write a position paper.
Ontology: Discussion on Semantics - Leo and Peter helped us go through what we were discussing on Monday during the tutorial on Ontology.
Discussion included comments from Mike Adcock and Sue Probert about how we should be working with this. We should continue the work and learn from it. We haven't yet defined the problem, so suggesting a solution would be premature.
Leo Obrst got up and talked to the group for a short time about Code and Identifier.
Tim brought up that what we are looking for is a discipline that we could employ, not a debate over the issue.
Sue noted that this same discussion has been going on for a while. We are interested in tools and techniques from the Ontological world that can help us work through this. Instead of debating the issue, we need to figure out how to employ the discipline. If someone has already come up with tools or ways to deal with the problem, then we should be using them.
Leo spoke about tools and analysis. Formal semantics and natural language are part of the analysis. One tool noted was Protege.
As an example, take the idea of "party": does that label work for what the word means? If the semantics we are trying to convey are carried in the definitions, do these tools allow us to clarify and better define our definitions?
In summary: use this to remove ambiguity from our definitions as best we can.
Discussion about the automation of the spreadsheets into XML schemas, and the hows and whys of keeping the facets information in the spreadsheets. The source, or Master, is the spreadsheet; until a better tool comes along, it is where all of the master information is kept. We will need to keep some of the XSD information within the spreadsheet. It is not the most ideal approach, but it's what we have today.
Some of the XSD constraining facets are: length, minLength/maxLength, pattern, enumeration, whiteSpace, minInclusive/maxInclusive, minExclusive/maxExclusive, totalDigits, fractionDigits.
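As a hypothetical illustration of how such facet information from the spreadsheet ends up in a schema (the type name and values below are invented for illustration, not taken from the UBL library), a simple type constrained by several of these facets might look like:

```xml
<!-- Hypothetical example: a three-letter code constrained by
     length, pattern, and enumeration facets -->
<xsd:simpleType name="CurrencyCodeContentType">
  <xsd:restriction base="xsd:string">
    <xsd:length value="3"/>
    <xsd:pattern value="[A-Z]{3}"/>
    <xsd:enumeration value="USD"/>
    <xsd:enumeration value="EUR"/>
    <xsd:enumeration value="DKK"/>
  </xsd:restriction>
</xsd:simpleType>
```

Each facet column in the spreadsheet would map to one such element inside the xsd:restriction.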
The LCSC also has issues of treating XML features as extensions of our models.
The recent CCTS meeting clarified some of the terms that had been problematic for UBL. The NDR team reaffirmed that, whatever the eventual outcome of the disposition of the UBL comments, solutions can be found.
NDR wants to ensure that context is captured sufficiently during its BIE definition work so that vanilla versions can be transformed into practical contextual deliverables. Tim asked whether we have to do more than already planned, or whether a different methodology should be used. One key question is undoubtedly "what are the values?" The Context Drivers subcommittee should provide these answers; it was agreed to convene this subcommittee with a subset of LC and NDR members to take this forward. Thursday 2-3 pm was agreed, with the aim of providing LC with a clear methodology for populating the context information.
Containership was discussed. NDR has prioritised this as a "B", and Arofan has been tasked to produce a discussion paper on it.
Code lists - the supplementary components necessary to define these have been well documented by the ebXML CCTS, but the question is how UBL implements them. Another question is how to ensure interoperability of compliant lists.
External Maintenance means the ability for non-UBL organizations to create XSD schema modules that define code lists in a way that allows UBL to reuse them without modification on anyone's part.
The winner was the multiple-namespace method. This is available to read in the NDR document. A discussion about how and why this was chosen took place. This choice allows us to push off the responsibility of maintaining the code lists; we are not in that business. That is the code list owners' job. Do we need a naming convention for this? ISO 3166 might cover it. There are facets in code lists, and we have to remember to leave room for them; we do not want to paint ourselves into that corner. The enumeration is a facet.
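A sketch of how the multiple-namespace approach might work (the namespace URI, file name, and type name below are illustrative assumptions, not the actual NDR rules): the code list lives in its own schema module under its own namespace, maintained by its owner rather than by UBL.

```xml
<!-- Illustrative code list module, e.g. ISO3166-CodeList.xsd,
     maintained externally by the code list's owner -->
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            targetNamespace="urn:example:codelist:ISO3166"
            xmlns:iso3166="urn:example:codelist:ISO3166">
  <xsd:simpleType name="CountryCodeContentType">
    <xsd:restriction base="xsd:token">
      <xsd:enumeration value="DK"/>
      <xsd:enumeration value="GB"/>
      <xsd:enumeration value="US"/>
    </xsd:restriction>
  </xsd:simpleType>
</xsd:schema>
```

A UBL schema would then xsd:import this namespace and refer to iso3166:CountryCodeContentType unchanged, which is what lets the list be maintained externally without modification on anyone's part.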
We should write down our questions and create a FAQ from these.
Disposition of Comments
Discussion about using our time took place this morning. We have been discussing Ontology with Peter, and we also have the UDEF discussion going on. Is it conducive for us to continue these discussions now? We have so much work ahead of us; at some point there is a line we need to draw so we do not get distracted from the work.
Discussion about terminology within the elements took place. This came out of comment #18 from Eve's group, about using specific words that have different meanings in different industries. We try for generic terms; we discussed adding a column for synonyms (business terms). Should we use a separate column, or should we use the description column we already have, which we are using for annotations? We don't have a better technology for doing this. If we get this information into our XML, we can later use it somehow with context via the script. This moves us into the context area: where does this fit in, and how do we do it?
Discussion points:
Their document did not make the timeline for getting approved as a Technical Report. They are hoping to get it approved at the October meeting in Miami.
Lisa Shreve started the meeting with her overview of communication and terminology. Mike Rawlins also gave his overview of the architecture of this document. There was a lively discussion, between the groups.
Eve then gave her presentation to the group. She had a 40-minute presentation that she gave in about 30 minutes, quickly running through the technical material so that the group would at least get a glimpse of the depth of the work done. Tim also gave 5 minutes to the Library Content group.
See the disposition list above for all final disposition on the comments as we worked through them.
XBRL deals with pricing. Ray felt we needed to take a look at it. There was discussion about the idea of going outside the Order model to get the pricing information. We thought we should look at them, but keep in mind that we are building a single instance of this information. We will be creating more types that will connect to the one we are building. It is just not the time to do all of them right now.
Finish going through the Comments document.
There was discussion on Comment 61, duration. ACORD has some very strong ideas about how this should be handled. Gunther, Mike and Alan will be doing further research on this.
Total of 72 comments.
11 deferred back to Gunther
We deferred Martin Bryan's comments until CCTS stabilizes the CCTs. All of his comments go back to the use of the CCTs.
A position paper on methodology needs to be put together. Bob and Tim are assigned to this first. As a group we outlined what we thought should be in the paper.
It was brought up that we need to make the terminology more stable. The term that brought this up is "Core Library" or "Common Library". The discussion was about keeping the public from being confused about what we are talking about. A newcomer needs to know: this is a library of what? It may be obvious to us, but we need to keep the newcomer in mind. It's a library of BIEs. We want to capture a name that gives a clear idea of what it is. "UBL Library" was brought up as a name.
This is from the Charter: it's the UBL library of BIEs expressed as XML Schema.
Alan Stitzer talked to the group about how ACORD handles their data types and reusable types. The business people give the requirements for each transaction. The group questioned Alan about the process they use to develop new data elements and processes. He responded that often it is just done, without a formal process. ACORD generates their deliverables from an Access database. Their UIDs are system generated.
Is there a model somewhere of this entire library? It is a set of tables that are linked in Access. When new business requirements come up, they use people who are familiar with the spec to help figure out what is needed new.
Do they have a way of engineering structures? It is not automated. The process comes from human brain power that oversees the working groups to make sure the reuse is there. It is not formal.
KEY ISSUE: How do we avoid over-development of data elements that bypasses the library? There is nothing formal; it is an issue within the ACORD working groups. The question is how you make this effort scalable across the development process, with outside working groups developing extensions to the core library, making sure they reuse where it is needed.
We as a group go through a set of informal processes, but need to figure out how to make this work across a larger set of groups. There is an architectural working group at ACORD that reviews and helps with this issue.
Thoughts from the discussion:
More discussion points:
Stage One: Developing the documents
Stage Two: Hand your users the generic document, have them take out the bits they don't need and add the new things they may need (this would be their extensions).
A methodology starting from a blank piece of paper would generate fewer extended elements.
Frank: Start with the Core Library; once you have it, you start with a context driver, identify messages, identify interfaces needed, and talk to business experts; ideally they then go to their library to find what they need. This sounds similar to Alan's process.
Some of this goes back to using containership to allow for choices; this way you can have more available from the library.
Discussion about tools and techniques: we need to push back on our Tools and Techniques group. Could we go to the vendors and ask them to be providers for UBL? It might be beneficial for us to do this. What we are competing against is not tremendously attractive, so there should be a lot of adoption on the basis that this is the easiest. The ability to create lots of wonderful complex extensions is there, but it's not the be-all and end-all of the use behind our library. How do we have horizontal integration with people who already have their own library within their own industry?
SAP sees using the core library and extending it into what they need. This is the vendor point of view.
ACORD also sees doing the same. Use the core library, they don't want to have to maintain the core.
ACTION ITEM: Tim and Bob write up the draft of the methodology document. We put together an outline to help get it started.
We met with the Context Drivers working group. Discussion of how we are going to go about making a start with this. The first deliverable being making a stab at the first list of context drivers.
Idea is using current schema and use that to develop something that will show how the mechanism works.
Future Work Plan.
There was a joint meeting with X12C about the Naming and Design Rules. Apparently they have an interest in adopting our rules. That is up to them. Our primary focus is UBL; whatever they decide will be up to them. We can follow up, but we don't have to move down to their level. This decision has not been made, nor does it need to be pushed.
The NDR group will have their work done by the end of the year. They will be integrating further with the Library Content group over that time. All of the work they have done needs to come to the LCSC. We are so tightly coupled that we need more joint work.
The LCSC needs to read the Dates Times paper from Gunther. This answers some of the questions we have been struggling with.
There had originally been a suggestion that when NDR is done, that group become a sort of QA for the LCSC. This will be discussed.
The group met this week. They discussed Use Cases, TAAT, Paella, and Schematron.
Proposals were brought up. They had several....get papers.
This is enormous.
PRIORITY ITEMS:
Met during the week to determine the scope of its effort in relation to the other relevant SCs, and also to gather volunteers.
Next Face to Face: probably in Boston? There will be further work on this.