Fluid Infusion / FLUID-4982

Implement "globally asynchronous ginger world" aka "wave of promises" allowing arbitrarily asynchronous progress through the ginger algorithm



    • Type: Improvement
    • Resolution: Unresolved
    • Priority: Major
    • Fix Version/s: 5.0
    • Labels: None
    • Component/s: IoC System


      Following on from the "wave of explosions" for FLUID-4925, we need to lift the final restrictions in the IoC framework on the order of operations during component expansion, merging, and instantiation, allowing the component construction process to "suspend" at any arbitrary point where we discover that asynchrony (in general, I/O) is required.

      This is a very long-standing restriction in the Infusion framework: since the very beginning, it has been a global assumption that every component construction will conclude synchronously. This has primarily been an obstacle to finishing our renderer component architecture, and has led to numerous annoying and onerous restrictions, in particular on the workflow for template fetching. Recent discussion on fluid-work led us to itemise the two rather unsatisfactory models we currently have for this:


      The model used in CSpace relies on the provisional utility function "fluid.primeCacheFromResources", which can expand some component default material in a non-contextual way and immediately initiate I/O, on which components may later block by means of a "resource class". This is very unsatisfactory, since the resource URL cannot be properly contextualised by IoC, and the use of "resource classes" is very unnatural for the developer.

      The model used in UIOptions supplies a dedicated "template fetching component" as a supercomponent of the entire panel, centralising all of the template URLs in one place and again blocking subcomponents. This is similarly weak, since it creates a central point of dependency and forces developers into an unnatural component architecture.

      It should be possible for any number of cooperating components in a tree to reveal the URLs of the templates they require at any suitable point in the ginger process. Those parts of their instantiation which depend on the templates (for example, their DOM binder, if it lies within the rendered material) may then block, whilst the ginger process makes good use of its time by progressing through component material which is not so blocked.
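
      This behaviour can be sketched roughly as follows. All names here (makePromise, construct, and the component names) are invented for illustration and are not the actual Infusion API; the sketch simply shows a construction process suspending one component on a resource while a sibling proceeds:

```javascript
// Minimal synchronous promise, for illustration only
function makePromise() {
    var listeners = [], resolved = false, value;
    return {
        then: function (listener) {
            if (resolved) { listener(value); } else { listeners.push(listener); }
        },
        resolve: function (v) {
            resolved = true; value = v;
            listeners.forEach(function (listener) { listener(v); });
            listeners = [];
        }
    };
}

var log = [];

// Construct a component, suspending on a resource promise if one is supplied
function construct(name, resourcePromise) {
    if (resourcePromise) {
        resourcePromise.then(function () { log.push(name); });
    } else {
        log.push(name);
    }
}

var template = makePromise();
construct("renderedPanel", template);  // suspends: template not yet fetched
construct("modelComponent", null);     // no I/O required: proceeds at once
template.resolve("<div/>");            // the suspended construction completes
```

      Note that "modelComponent" finishes before "renderedPanel" even though it was begun later: the process spends its time on unblocked material rather than waiting.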

      This will require adoption of a framework-wide standard on "promises". Similar to the "special values" held in the tree for FLUID-4978, these "partially evaluated trees" will hold promises at the points where evaluation is currently blocked; any further references dispatched into these promises before they are resolved will likewise block, resulting in a "chain of promises" which is resolved in a wave when the I/O completes.
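
      A hypothetical sketch of such a chain, with makePromise and referenceInto invented purely for illustration: the tree holds a promise at the path awaiting I/O, a reference dispatched into that path yields a further dependent promise, and resolving the underlying value resolves the whole chain in one wave:

```javascript
// Minimal synchronous promise, for illustration only
function makePromise() {
    var listeners = [], resolved = false, value;
    return {
        then: function (listener) {
            if (resolved) { listener(value); } else { listeners.push(listener); }
        },
        resolve: function (v) {
            resolved = true; value = v;
            listeners.forEach(function (listener) { listener(v); });
            listeners = [];
        }
    };
}

// The tree is "partially evaluated": the template markup is not yet fetched
var tree = { template: makePromise() };

// Referencing a blocked value produces a further promise, chained onto it
function referenceInto(promise, transform) {
    var out = makePromise();
    promise.then(function (v) { out.resolve(transform(v)); });
    return out;
}

// e.g. a DOM binder whose scope lies within the rendered markup
var domBinder = referenceInto(tree.template, function (markup) {
    return "binder(" + markup + ")";
});

// When the simulated template fetch completes, the chain resolves in a wave
tree.template.resolve("<div>panel</div>");
```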

      A very simple implementation of promises will suffice, since we do not expect this API to be exposed to end users, and all of the algorithms expressed in terms of promises are already embodied within the framework. The "unscriptable" team responsible for cujo/wire etc. have produced a suitably minimal implementation at https://github.com/unscriptable/promises/blob/master/src/Tiny2.js - even these 40 lines may implement more functionality than we require. We should take the opportunity at this point to deal with FLUID-4883 and implement the new "latched event" type described there in terms of this promises abstraction. At the same time we should reimplement fluid.fetchResources so that it is both a consumer and a producer of promises.
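
      The scale of implementation required is roughly this (a sketch only: names are illustrative, not the eventual framework API, and listeners fire synchronously for simplicity). The latched-event construction shows how little is needed beyond the promise itself:

```javascript
// Minimal promise: queue "then" listeners and fire them when the value arrives
function makePromise() {
    var listeners = [], resolved = false, value;
    return {
        then: function (listener) {
            if (resolved) {
                listener(value);          // already resolved: fire at once
            } else {
                listeners.push(listener);
            }
        },
        resolve: function (v) {
            resolved = true;
            value = v;
            listeners.forEach(function (listener) { listener(v); });
            listeners = [];
        }
    };
}

// A "latched event" in the FLUID-4883 sense: firing latches the argument,
// so listeners added after the fire are still notified with it
function makeLatchedEvent() {
    var promise = makePromise();
    return {
        addListener: promise.then,
        fire: promise.resolve
    };
}
```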

      With this implementation in place, we will be in possession of a generalised evaluation engine suitable for "arbitrary parallelisation of irregular algorithms", as described in the PhD thesis of Milind Kulkarni ( https://engineering.purdue.edu/~milind/docs/dissertation.pdf ) and as implemented in "The Galois System". Although JavaScript is single-threaded, our system will be able to take advantage of arbitrary parallelism by dispatching work to other cores or machines using a "message-passing model" similar to Erlang's, since our model implies (although does not generally enforce) that "invokers" operate as pure, side-effect-free functions: all update of state is managed within the core framework during the "fits and waves" of the ginger process.




            Reporter: Antranig Basman
            Assignee: Antranig Basman