Design Issues

In order to create a reasonably relevant prototype, one of the main tasks is to duplicate at least some of what the CARMEN people have done with their CAIRN. Part of their current demonstration uses a workflow engine (Taverna) to co-ordinate a set of web services that sort and classify spikes from a set of spike train data (actually, electrical signals from electrodes). To do this, I need to host a workflow and also deal with the large volumes of data that need to be analysed and stored. After some consideration, I am proposing the architecture shown below:


It’s based around Workflow at the moment, because the CARMEN developers are using Taverna to execute scientific workflows. The use case I had in mind is as follows:

  • The User authenticates using Windows LiveID and receives a LiveID token
  • The User then initiates a workflow. This creates a workflow “Context” that is used to store the intermediate results as the workflow executes. I am not sure yet how this will be implemented – it may be that Windows Workflow Foundation provides some of this for me, or it may need to be specially built. Regardless, this Context will form the basis for the data exchange between services as they are invoked. The actual data could be stored in memory, in a temporary directory, or in a database hidden behind a Web Service.
  • The initial working data is taken from the data storage service and loaded into the workflow Context. In the neuroscience case, this will be a set of spike train data. Again, this is an undefined area as the actual data may not need to be transferred, rather a URI pointing to the initial data could be stored in the Context.
  • Services are invoked as required. These services operate on data within the workflow Context (or on pointers to data held somewhere else) and return the results (or, again, pointers to data held in another location) to the Context for subsequent services to access.
  • At all times, services are invoked using the LiveID token that the User obtained at the beginning of the process.
  • The final stage of the workflow will upload the calculation results to the storage Web Service and the unwanted data stored within the execution Context can be deleted.
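To make the steps above concrete, here is a minimal sketch of the idea in Python. Everything here is an illustrative assumption – `WorkflowContext`, `run_workflow`, and the storage interface are invented names, not CARMEN or Windows Workflow Foundation APIs – but it shows the intended shape: a shared Context that holds values (or URIs pointing at data), services that read from and write back to it, the user's token carried throughout, and a final upload-then-cleanup stage.

```python
# Hypothetical sketch only: all names are assumptions, not real
# CARMEN or Windows Workflow Foundation APIs.

class WorkflowContext:
    """Holds intermediate results (values or URIs) while a workflow runs."""

    def __init__(self, token):
        self.token = token   # LiveID token obtained when the User signed in
        self.data = {}       # name -> value, or a URI pointing at the data

    def put(self, name, value):
        self.data[name] = value

    def get(self, name):
        return self.data[name]

    def clear(self):
        # Final stage: discard unwanted intermediate data
        self.data.clear()


def run_workflow(token, storage, steps, initial_uri):
    """Drive the workflow: load initial data, invoke each service in turn
    against the shared Context, then persist the result and clean up."""
    ctx = WorkflowContext(token)
    # The initial working data need not be transferred: store a URI instead
    ctx.put("input", initial_uri)
    for step in steps:
        step(ctx)  # each service reads from and writes back to the Context
    result = ctx.get("result")
    # Upload the calculation results, authenticated with the User's token
    storage.upload(result, ctx.token)
    ctx.clear()
    return result
```

The design choice sketched here is that services never talk to each other directly; they only exchange data through the Context, which is what makes it possible to swap in-memory storage for a temporary directory or a database behind a Web Service later without changing the services themselves.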

Now, again, I’m not sure how logical all of this is and how much of this type of functionality is already provided by the Windows Workflow software. Back to reading.

