Files @ 05aabe3d7b02

Location: light9/bin/rdfdb

Drew Perttula
fix some minor issues with graph contexts
#!bin/python
"""
Other tools POST themselves here as subscribers to the graph. They
provide a URL that we can PUT graph updates to.

We immediately PUT the entire contents of the graph back to them as
one batch of adds.

Later, we PUT patches (delete/add lists) to them whenever there are
changes.

If we fail to reach a registered caller, we forget about it for future
calls. We could PUT empty diffs as a heartbeat to notice disappearing
callers faster.

A caller can submit a patch which we'll persist and broadcast to every
other client.

Global data undo should probably happen within this service. Some
operations should not support undo, such as updating the default
position of a window. How will we separate those? A blacklist of
subj+pred pairs that don't save undo? Or just save the updates like
everything else, but when you press undo, there's a way to tell which
updates *should* be part of your app's undo system?

Maybe some subgraphs are for transient data (e.g. current timecode,
mouse position in curvecalc) that only some listeners want to hear about.

Deletes are graph-specific, so callers may be surprised to delete a
statement from one subgraph and then find that the statement is still
true, because it's asserted in another subgraph.
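
This can be illustrated with plain Python sets standing in for
subgraphs (all URIs and names below are invented for the sketch, not
part of rdfdb):

```python
# Toy model of graph-specific deletes: the same triple can be asserted
# in several subgraphs, so removing it from one is not enough to make
# it false in the union graph.
triple = ('http://ex/chase1', 'http://ex/color', 'red')

subgraphs = {
    'http://ex/ctx1': {triple},
    'http://ex/ctx2': {triple},   # same statement in a second subgraph
}

# a delete patch scoped to ctx1 removes it only there
subgraphs['http://ex/ctx1'].discard(triple)

# the union graph still contains the statement
still_true = any(triple in g for g in subgraphs.values())
```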

Alternate plan: would it help to insist that every patch is within
only one subgraph? I think it's ok for them to span multiple ones.

Inserts can be made on any subgraphs, and each subgraph is saved in
its own file. The file might not be in a format that can express
graphs, so I'm just going to not store the subgraph URI in any file.

I don't support wildcard deletes, and there are race conditions where
a subject+predicate could unexpectedly end up with multiple
objects. Every client needs to be ready for this.
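
A client could guard against that case along these lines (this helper
is a hypothetical sketch, not part of rdfdb's API):

```python
# Hedged sketch: read one object for a (subject, predicate) pair while
# tolerating the accidental-multiple-objects race described above.
def currentValue(triples, subj, pred):
    # triples is any iterable of (s, p, o) tuples
    objs = sorted(o for (s, p, o) in triples if s == subj and p == pred)
    if not objs:
        return None
    # if a race left several objects, pick one deterministically
    # instead of crashing; the extras should get cleaned up later
    return objs[0]

data = [('s1', 'p1', 'b'), ('s1', 'p1', 'a')]   # racy duplicate objects
value = currentValue(data, 's1', 'p1')          # 'a'
```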

We watch the files and push any changes made directly to them back
out to the clients.

Persist our client list, to survive restarts. In another RDF file? A
random JSON one? memcache? Also hold the recent changes. We're not
logging everything forever, though, since the output files and a VCS
shall be used for that.

Bnodes: this rdfdb graph might be able to track bnodes correctly, and
they make for more compact n3 files. I'm not sure if it's going to be
hard to keep the client bnodes in sync though. File rereads would be
hard, if ever a bnode was used across graphs, so that probably should
not be allowed.

Our API:

GET /  ui
GET /graph    the whole graph, or a query from it (needed? just for ui browsing?)
PUT /patches  clients submit changes
GET /patches  (recent) patches from clients
POST /graphClients clientUpdate={uri} to subscribe
GET /graphClients  current clients

format:
json {"adds": [[quad], ...],
      "deletes": [[quad], ...],
      "senderUpdateUri": tooluri,
      "created": tttt  // maybe to help resolve some conflicts
     }
maybe use some http://json-ld.org/ in there.
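
A round-trip of that wire format might look like this (the URIs and
timestamp are invented for the example):

```python
import json

# hypothetical patch message matching the format sketched above
patch = {
    'adds': [
        # one quad: subject, predicate, object, context
        ['http://ex/sub1', 'http://ex/brightness', '0.5', 'http://ex/ctx1'],
    ],
    'deletes': [],
    'senderUpdateUri': 'http://localhost:8052/update',
    'created': 1341000000,  # maybe-useful timestamp for conflict resolution
}

body = json.dumps(patch)    # what goes over the wire in a PUT
decoded = json.loads(body)  # what the receiver sees
```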

proposed rule feature:
rdfdb should be able to watch a pair of (sourceFile, rulesFile) and
rerun the rules when either one changes. Should the sourceFile be able
to specify its own rules file?  That would be easier
configuration. How do edits work? Not allowed?  Patch the source only?
Also see the source graph loaded into a different ctx, and you can
edit that one and see the results in the output context?

Our web ui:

  sections

    registered clients

    recent patches, each one says what client it came from. You can reverse
    them here. We should be able to take patches that are close in time
    and keep updating the same data (e.g. a stream of changes as the user
    drags a slider) and collapse them into a single edit for clarity.

        Ways to display patches, using labels and creator/subj icons
        where possible:

          <creator> set <subj>'s <p> to <o>
          <creator> changed <subj>'s <pred> from <o1> to <o2>
          <creator> added <o> to <s> <p>

    raw messages for debugging this client

    ctx urls take you to ->
    files: which are dirty, whether we've seen external changes, and
    big files that are taking a long time to save

    graph contents. plain rdf browser like an outliner or
    something. clicking any resource from the other displays takes you
    to this, focused on that resource
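
The patch-collapsing idea above could be sketched like this (the patch
shape is simplified to deletes/adds lists of triples; everything here
is hypothetical):

```python
# Collapse a run of patches that keep rewriting the same value
# (e.g. a stream of edits from a slider drag): keep the first patch's
# deletes and the last patch's adds. Only valid when every patch in
# the run touches the same subject+predicate.
def collapse(patches):
    return {'deletes': patches[0]['deletes'],
            'adds': patches[-1]['adds']}

drag = [
    {'deletes': [('slider1', 'value', '0.1')],
     'adds':    [('slider1', 'value', '0.2')]},
    {'deletes': [('slider1', 'value', '0.2')],
     'adds':    [('slider1', 'value', '0.3')]},
    {'deletes': [('slider1', 'value', '0.3')],
     'adds':    [('slider1', 'value', '0.4')]},
]

merged = collapse(drag)
# merged replaces 0.1 with 0.4 in a single displayed edit
```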

"""
from twisted.internet import reactor
import twisted.internet.error
import sys, optparse, logging, json, os
import cyclone.web, cyclone.httpclient, cyclone.websocket
sys.path.append(".")
from light9 import networking, showconfig, prof
from rdflib import ConjunctiveGraph, URIRef, Graph
from light9.rdfdb.graphfile import GraphFile
from light9.rdfdb.patch import Patch, ALLSTMTS
from light9.rdfdb.rdflibpatch import patchQuads
from light9.rdfdb import syncedgraph

from twisted.internet.inotify import INotify
logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger()

try:
    sys.path.append("../homeauto/lib")
    from cycloneerr import PrettyErrorHandler
except ImportError:
    class PrettyErrorHandler(object):
        pass

class Client(object):
    """
    one of our syncedgraph clients
    """
    def __init__(self, updateUri, label, db):
        self.db = db
        self.label = label
        self.updateUri = updateUri
        self.sendAll()

    def __repr__(self):
        return "<%s client at %s>" % (self.label, self.updateUri)

    def sendAll(self):
        """send the client the whole graph contents"""
        log.info("sending all graphs to %s at %s" %
                 (self.label, self.updateUri))
        self.sendPatch(Patch(
            addQuads=self.db.graph.quads(ALLSTMTS),
            delQuads=[]))

    def sendPatch(self, p):
        return syncedgraph.sendPatch(self.updateUri, p)

class Db(object):
    """
    the master graph, all the connected clients, all the files we're watching
    """
    def __init__(self):
        # files from cwd become uris starting with this. *should* be
        # building uris from the show uri in $LIGHT9_SHOW/URI
        # instead. Who wants to keep their data in the same dir tree
        # as the source code?!
        self.topUri = URIRef("http://light9.bigasterisk.com/")

        self.clients = []
        self.graph = ConjunctiveGraph()

        self.notifier = INotify()
        self.notifier.startReading()
        self.graphFiles = {} # context uri : GraphFile

        self.findAndLoadFiles()

    def findAndLoadFiles(self):
        self.initialLoad = True
        try:
            dirs = [
                "show/dance2012/sessions",
                "show/dance2012/subs",
                "show/dance2012/subterms",
                ]

            for topdir in dirs:
                for dirpath, dirnames, filenames in os.walk(topdir):
                    for base in filenames:
                        self.watchFile(os.path.join(dirpath, base))
                # todo: also notice new files in this dir

            self.watchFile("show/dance2012/config.n3")
            self.watchFile("show/dance2012/patch.n3")
        finally:
            self.initialLoad = False

        self.summarizeToLog()

    def uriFromFile(self, filename):
        if filename.endswith('.n3'):
            # some legacy files don't end with n3. when we write them
            # back this might not go so well
            filename = filename[:-len('.n3')]
        return URIRef(self.topUri + filename)

    def fileForUri(self, ctx):
        assert isinstance(ctx, URIRef), ctx
        if not ctx.startswith(self.topUri):
            raise ValueError("don't know what filename to use for %s" % ctx)
        return ctx[len(self.topUri):] + ".n3"

    def watchFile(self, inFile):
        ctx = self.uriFromFile(inFile)
        gf = GraphFile(self.notifier, inFile, ctx, self.patch, self.getSubgraph)
        self.graphFiles[ctx] = gf
        gf.reread()

    def patch(self, p, dueToFileChange=False):
        """
        apply this patch to the master graph then notify everyone about it

        dueToFileChange if this is a patch describing an edit we read
        *from* the file (such that we shouldn't write it back to the file)

        if p has a senderUpdateUri attribute, we won't send this patch
        back to the sender with that updateUri
        """
        ctx = p.getContext()
        log.info("patching graph %s -%d +%d" % (
            ctx, len(p.delQuads), len(p.addQuads)))

        patchQuads(self.graph, p.delQuads, p.addQuads, perfect=True)
        senderUpdateUri = getattr(p, 'senderUpdateUri', None)
        #if not self.initialLoad:
        #    self.summarizeToLog()
        for c in self.clients:
            if c.updateUri == senderUpdateUri:
                # this client has self-applied the patch already
                continue
            d = c.sendPatch(p)
            d.addErrback(self.clientErrored, c)
        if not dueToFileChange:
            self.dirtyFiles([ctx])
        sendToLiveClients(asJson=p.jsonRepr)

    def dirtyFiles(self, ctxs):
        """mark dirty the files that we watch in these contexts.

        the ctx might not be a file that we already read; it might be
        for a new file we have to create, or it might be for a
        transient context that we're not going to save

        if it's a ctx with no file, error
        """
        for ctx in ctxs:
            g = self.getSubgraph(ctx)

            if ctx not in self.graphFiles:
                outFile = self.fileForUri(ctx)
                self.graphFiles[ctx] = GraphFile(self.notifier, outFile, ctx,
                                                 self.patch, self.getSubgraph)

            self.graphFiles[ctx].dirty(g)

    def clientErrored(self, err, c):
        err.trap(twisted.internet.error.ConnectError)
        log.info("connection error- dropping client %r" % c)
        self.clients.remove(c)
        self.sendClientsToAllLivePages()

    def summarizeToLog(self):
        log.info("contexts in graph (%s total stmts):" % len(self.graph))
        for c in self.graph.contexts():
            log.info("  %s: %s statements" %
                     (c.identifier, len(self.getSubgraph(c.identifier))))

    def getSubgraph(self, uri):
        """
        this is meant to return a live view of the given subgraph, but
        if i'm still working around an rdflib bug, it might return a
        copy

        and it's returning triples, but I think quads would be better
        """
        # this is returning an empty Graph :(
        #return self.graph.get_context(uri)

        g = Graph()
        for s in self.graph.triples(ALLSTMTS, uri):
            g.add(s)
        return g

    def addClient(self, updateUri, label):
        # replace any previous registration that used the same updateUri
        self.clients = [c for c in self.clients
                        if c.updateUri != updateUri]

        log.info("new client %s at %s" % (label, updateUri))
        self.clients.append(Client(updateUri, label, self))
        self.sendClientsToAllLivePages()

    def sendClientsToAllLivePages(self):
        sendToLiveClients({"clients":[
            dict(updateUri=c.updateUri, label=c.label)
            for c in self.clients]})

class GraphResource(PrettyErrorHandler, cyclone.web.RequestHandler):
    def get(self):
        self.write(self.settings.db.graph.serialize(format='n3'))

class Patches(PrettyErrorHandler, cyclone.web.RequestHandler):
    def __init__(self, *args, **kw):
        cyclone.web.RequestHandler.__init__(self, *args, **kw)
        p = syncedgraph.makePatchEndpointPutMethod(self.settings.db.patch)
        self.put = lambda: p(self)

    def get(self):
        pass

class GraphClients(PrettyErrorHandler, cyclone.web.RequestHandler):
    def get(self):
        pass

    def post(self):
        upd = self.get_argument("clientUpdate")
        try:
            self.settings.db.addClient(upd, self.get_argument("label"))
        except Exception:
            import traceback
            traceback.print_exc()
            raise

liveClients = set()
def sendToLiveClients(d=None, asJson=None):
    j = asJson or json.dumps(d)
    for c in liveClients:
        c.sendMessage(j)

class Live(cyclone.websocket.WebSocketHandler):

    def connectionMade(self, *args, **kwargs):
        log.info("websocket opened")
        liveClients.add(self)
        self.settings.db.sendClientsToAllLivePages()

    def connectionLost(self, reason):
        log.info("websocket closed")
        liveClients.remove(self)

    def messageReceived(self, message):
        log.info("got message %s" % message)
        self.sendMessage(message)

class NoExts(cyclone.web.StaticFileHandler):
    # .xhtml pages can be get() without .xhtml on them
    def get(self, path, *args, **kw):
        if path and '.' not in path:
            path = path + ".xhtml"
        cyclone.web.StaticFileHandler.get(self, path, *args, **kw)

if __name__ == "__main__":

    parser = optparse.OptionParser()
    parser.add_option('--show',
                      help='show URI, like http://light9.bigasterisk.com/show/dance2008',
                      default=showconfig.showUri())
    parser.add_option("-v", "--verbose", action="store_true",
                      help="logging.DEBUG")
    (options, args) = parser.parse_args()

    log.setLevel(logging.DEBUG if options.verbose else logging.INFO)

    if not options.show:
        raise ValueError("missing --show http://...")

    db = Db()

    from twisted.python import log as twlog
    twlog.startLogging(sys.stdout)

    port = 8051
    reactor.listenTCP(port, cyclone.web.Application(handlers=[
        (r'/live', Live),
        (r'/graph', GraphResource),
        (r'/patches', Patches),
        (r'/graphClients', GraphClients),

        (r'/(.*)', NoExts,
         {"path" : "light9/rdfdb/web",
          "default_filename" : "index.xhtml"}),

        ], debug=True, db=db))
    log.info("serving on %s" % port)
    prof.run(reactor.run, profile=False)