This is a rough mock-up of the data model. It gives an idea of what can be done just with lists and substitution. The real thing is available by invitation.
Agt_Frame:

  *
    [General_Resources]
  Body
    {Prologue}
    - {Sec.S}
    {Signature}
  Prologue
    {Agt.Title}{Prologue.ThisAgreement} …
  Prologue.ThisAgreement
    This {Agt.Title} is made by and between {Pcpl.ID.N,E,A} (“{Pcpl.-}”) and {Cttr.ID.N,E,A} (“{Cttr.-}”)
  Signature
    {Pcpl-Cttr.Signature._Block}

Agt_NDA:

  *
    [Agt_Frame]
  Sec.S
    - {Def._Sec}
    - {Conf._Sec}
    - {IP._Sec}
    - {Misc._Sec}
  Def._Sec
    Definitions.
    - {Def.Confidential_Information}
    - {Def.Work_Product}
    - …
  Conf.Except.Makes._Cl
    is independently developed by the Receiving Party without the use of any Confidential Information

ID_Acme:

  ID.*
    [Class_ID_Entity_Corp]
  ID.Addr.*
    [Geo_USA_CA_RedwoodCity]
  Regist.*
    [Geo_USA_DE]
  Name.Full
    Acme Incorporated

Acme_Supren_NDA_d-01:

  *
    [Agt_NDA]
  Pcpl.*
    [ID_Acme]
  Cttr.*
    [ID_Supren]
  Agt.Date
    May 19, 2013
  Conf.Except.Makes._Cl
    {_empty}
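The mechanics of the mock-up can be sketched in a few lines of code. The following is a minimal, hypothetical sketch, not the real system: records are dicts of key-to-text, a `*` key names a prototype record (`[Other]`) whose keys fill any gaps, and `{Key}` in any text is replaced by that key's rendering. The record and key names below are simplified stand-ins for those in the mock-up.

```python
import re

# Illustrative records modeled loosely on the mock-up above.
# "*": "[Name]" imports record Name as a prototype; the child's keys win.
RECORDS = {
    "Agt_Frame": {
        "Body": "{Prologue}\n{Signature}",
        "Prologue": "This {Agt.Title} is made by {Pcpl.Name} and {Cttr.Name}.",
        "Signature": "Signed: {Pcpl.Name} / {Cttr.Name}",
    },
    "Agt_NDA": {
        "*": "[Agt_Frame]",
        "Agt.Title": "Nondisclosure Agreement",
    },
    "Acme_Supren_NDA": {
        "*": "[Agt_NDA]",
        "Pcpl.Name": "Acme Incorporated",
        "Cttr.Name": "Supren",
    },
}

def resolve(record_name):
    """Flatten a record by following '*' prototype links (child keys win)."""
    rec = RECORDS[record_name]
    flat = {}
    if "*" in rec:
        flat.update(resolve(rec["*"].strip("[]")))
    flat.update({k: v for k, v in rec.items() if k != "*"})
    return flat

def render(record_name, key="Body"):
    """Expand {Key} placeholders until none remain."""
    flat = resolve(record_name)
    def expand(text, depth=0):
        if depth > 20:
            return text  # crude guard against circular references
        return re.sub(r"\{([^{}]+)\}",
                      lambda m: expand(flat.get(m.group(1), ""), depth + 1),
                      text)
    return expand(flat[key])

print(render("Acme_Supren_NDA"))  # prints the assembled two-line NDA
```

The point of the sketch: the deal document at the bottom of the chain contains almost nothing but links and a date, yet renders to full text, because everything generic lives one or more prototypes up.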
I really like this idea. It would be great to demonstrate it in a way that is more widely and obviously beneficial. For example, take this:
http://techland.time.com/2012/03/06/youd-need-76-work-days-to-read-all-your-privacy-policies-each-year/
Those policies are broken: through sheer volume, it is completely unreasonable to expect people to actually understand and agree to them. If Terms of Service and privacy policies were standardized and codified, to the point where the vast majority of each one is just references to a standard one, noting exceptions, then:
— One would only need to read the standard policies once, and each site’s exceptions once.
— One could write accurate, short summaries of the standard policies that people could understand. Code generation could offer options for full legalese or a plain-language summary, so that a reasonable person could read a short and sweet version, saving perhaps 75 of those 76 working days.
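The “standard policy plus exceptions” idea above can be sketched with the same overlay mechanics as the data model. This is a hypothetical illustration; the policy terms and field names are made up:

```python
# A standard privacy policy as a base record; a site's policy is the
# standard with its exceptions overlaid, and the readable "summary"
# is just the set of deviations from the standard.
STANDARD_PRIVACY = {
    "data_sold": "never",
    "retention": "90 days",
    "tracking": "first-party only",
}

def policy(exceptions):
    """Full policy = standard terms with the site's exceptions overlaid."""
    merged = dict(STANDARD_PRIVACY)
    merged.update(exceptions)
    return merged

def summary(exceptions):
    """The short version a person actually reads: deviations only."""
    return {k: v for k, v in exceptions.items()
            if STANDARD_PRIVACY.get(k) != v}

site = {"retention": "2 years"}
print(policy(site))   # the full terms
print(summary(site))  # only what differs from the standard
```

Having read the standard once, a user reviewing this site needs to absorb exactly one line: retention is 2 years instead of 90 days.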
Yes, exactly. Though I’d put it this way: a person should not have to become an expert on law, or a close reader of legal text. They should be able to evaluate a legal “solution” in the same way they evaluate an app: by reviews, number of downloads, whether their friends use it … rather than by inspection of the source code.
There were (are?) attempts to standardize privacy policies through semi-computer-readable clauses (http://www.w3.org/P3P/). That model would have addressed the concern you mention. Of course, there are many stakeholders whose interests do not lie in privacy transparency beyond the practical punishment limit (meaning they’ll do the minimum required to avoid actually being punished, even if that means deliberately violating the law on a regular basis — cf. the NSA).
My recollection is that the plan was more standardized than customized, but that doesn’t make sense. To me, the model would be much like the Creative Commons (http://creativecommons.org) licensing model, where a two-letter shortcode signals the application of a much larger bit of text implementing a specific license grant or restriction – “CC-BY-NC”.
I’d have to dig back into the P3P project to see whether they got to that simplified a model, but it seems to me, as someone who counsels clients about the need to deliberately design their privacy policy in light of their actual business goals, that CC-style simplification of complexity is worthy of emulation in many of these consumer-facing “contracts.”
Rick,
I admire your writing. I have only a very short time now, and will answer more fully later.
Yes, there are many attempts to make legal text computable. The FOSS license SPDX project is another significant one. At the Future of Law program at Stanford (find via robotandhwang.com) there is a great session on computable contracting.
My observation is that these suffer from the general problem of the “semantic web”: they depend on standardization of taxonomies rather than iteration (the way software develops).
The branding approach of “CC-BY-NC” is great, but it depends on brand recognition. Same problem as the semantic web.
Software develops (generally, in FOSS, and on GitHub) by iteration. A job turns into modules, which become libraries and then an application. Legal text can do the same. The links provide the paths. Counting of links tells us whether we are on a well-beaten path (and something about who has beaten it). So we don’t need to remember the labels. But of course we can add them, and they will come to have that branding effect.
“Series A on the NVCA model with the Wilson Sonsini overlay, Santa Clara jurisdiction” will gin up draft one.
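The “counting of links” idea can be sketched simply: tally how often each bracketed prototype reference appears across a corpus of documents, as a rough popularity signal. The document texts below are made up for illustration:

```python
import re
from collections import Counter

# A toy corpus of documents in the bracketed-link style of the mock-up.
docs = [
    "* [Agt_NDA]\nPcpl.* [ID_Acme]",
    "* [Agt_NDA]\nPcpl.* [ID_Supren]",
    "* [Agt_Frame]",
]

# Count every [Name] reference across the corpus.
counts = Counter(ref for d in docs
                 for ref in re.findall(r"\[([^\]]+)\]", d))
print(counts.most_common())  # Agt_NDA is the best-beaten path here
```

With counts like these attached to each module of legal text, a drafter sees at a glance which frame everyone else is using, without needing to remember or trust a brand label.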
A really interesting bit would be a compiler for existing law, avoiding the bootstrap problem.