Discussion:
[OSC_dev] IRCAM meeting, finding a standard for OSC query
Trond Lossius
2010-08-17 21:23:47 UTC
Permalink
Hi Gaspard, Jamie and Gabriel,


I was taking a quick look at the current state of oscit. I've unfortunately been too bogged down with work over the last months to be able to participate as much as I'd like in the discussions here after the GDIF/SpatDIF meeting at IRCAM.

Without having been all through the oscit proposal, I see at least one problem that IMHO would be worth considering further, and that is the getter method. Quoting:

<quote>

get properties

To get the value of a property, simply call the url with no argument:

/some/object/property

</quote>

This would be very difficult to combine with how we use OSC in Jamoma, as we not only have parameters containing a state, but also stateless objects implemented using the jcom.message Max external. These objects are often addressed using an OSC message with no arguments. Seeing this, I started wondering what the OSC spec says on the matter, to make sure that our Jamoma implementation is not in violation of it.

I find the following at http://opensoundcontrol.org/spec-1_0

<quote>

An OSC message consists of an OSC Address Pattern followed by an OSC Type Tag String followed by zero or more OSC Arguments.

</quote>

The get properties proposal in oscit effectively rules out the possibility of using OSC messages with no arguments, as such a message will instead be interpreted as a query to get the property. Hence it enforces a restriction compared to the kinds of nodes OSC itself permits.
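
To make the conflict concrete, here is a small sketch (my own illustration, with a made-up address and port, built only on the Python standard library) of a perfectly valid OSC 1.0 message that carries zero arguments. Under the oscit proposal this exact datagram would be read as "get the property" rather than as a bang to a stateless node:

    import socket

    def osc_pad(s):
        # Null-terminate a string and pad it to a multiple of 4 bytes (OSC 1.0 rule).
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * (-len(b) % 4)

    def osc_message_no_args(address):
        # An OSC message with zero arguments: address pattern + the bare type tag string ",".
        return osc_pad(address) + osc_pad(",")

    packet = osc_message_no_args("/some/object/property")
    socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 9000))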

Furthermore, I am also unsure about the proposed solution for returning values. When all returned values use /.reply, they will always be broadcasted. Without having any specific examples on hand, I am concerned that at times it might be desirable to be more specific about where the reply is routed, with the possibility of reducing OSC network traffic the same way a switch does compared to a hub.

I am also not necessarily convinced that it is a good idea not being able to prepend OSC node names to the reply OSC address. If you e.g. do WaveField synthesis or multichannel video processing with the same app running on several computers, you might want to have one master mechanism controlling them all. If this queries for the state on the different computers, wouldn't it be natural to prepend the name of the computers to the address of the reply message, in order to be able to distinguish them from each other?


My apologies if these points have been brought up already, but they seem worth taking into consideration.

And generally, as I stated in Paris, if we really want to work towards a protocol on top of OSC for communication between OSC nodes that eventually could be embraced by a wider community, I believe that it is really important to get CNMAT involved in the process.


Best,
Trond
Gaspard Bucher
2010-08-18 09:26:46 UTC
Permalink
Hi Trond et al,

There are many topics in this email. Let's go through each of them:

I think the get/set and reply issues you mention (calls without arguments,
avoiding unnecessary replies) are related, and even though the dragon has not
hit me yet, I have seen it coming. I think the problem comes from a lack of a
clear distinction between "control" and "data".

I have named "control" any message that changes the way some device is
processing the data ( = buttons and sliders in a GUI). The "data" is what we
do not see usually in a graphical interface (sound, video frames, midi
notes, captor data, etc).

So the problem is that we do not really need "data" messages to produce
replies and if we do, we do not want the replies to be broadcasted. On the
other hand, "control" messages MUST be broadcasted so that all remote
controls stay in sync, be they GUI or some processing master.

This difference also helps to distinguish between "registration" (get all
control changes) and "subscription" (get a specific data feed).

This means that a resource can be accessed for different needs (maybe not
providing all of them)

1. get state
2. set state
3. subscribe
4. data sink

Oscit currently implements these with:

1. /some/url
2. /some/url [new value] ----> broadcasted reply when called from network
(outside app)
3. /some/out/url
4. /some/url [data] ----> broadcasted reply when called from network
(outside app)

If we want to avoid the confusion between 2 and 4, we could use the
following convention:

PROPOSITION A:
---------------------------
1. /some/url/get
2. /some/url/set
4. /some/url

Point [3] is a little special because it is used to manage subscriptions
(outgoing links in a patch) and can be discussed later.

PROPOSITION B:
---------------------------
1. /osc/get /some/url
2. /osc/set /some/url
4. /some/url
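
To make the two propositions concrete, here is a tiny sender-side sketch (my own illustration; send() is a placeholder and the urls are made up):

    def send(address, *args):
        print(address, list(args))          # a real client would emit an OSC packet here

    # Proposition A: the verb is a postfix of the resource url.
    send("/mixer/volume/get")               # 1. get state
    send("/mixer/volume/set", 0.7)          # 2. set state
    send("/mixer/volume", 0.7)              # 4. data sink

    # Proposition B: the verb is its own address, the url becomes the first argument.
    send("/osc/get", "/mixer/volume")       # 1. get state
    send("/osc/set", "/mixer/volume", 0.7)  # 2. set state
    send("/mixer/volume", 0.7)              # 4. data sink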

Post by Trond Lossius
I am also not necessarily convinced that it is a good idea not being able to
prepend OSC node names to the reply OSC address. If you e.g. do WaveField
synthesis or multichannel video processing with the same app running on
several computers, you might want to have one master mechanism controlling
them all. If this queries for the state on the different computers, wouldn't
it be natural to prepend the name of the computers to the address of the
reply message, in order to be able to distinguish them from each other?
This issue only happens on serial communication channels and I think it should
be solved at the transport layer. When we use UDP or TCP/IP, we know where
the reply comes from. Oscit handles this very well when used with a single
GUI managing multiple remote applications (the same as what you describe: 1
master, many slaves).
Post by Trond Lossius
And generally, as I stated in Paris, if we really want to work towards a
protocol on top of OSC for communication between OSC nodes that eventually
could be embraced by a wider community, I believe that it is really
important to get CNMAT involved in the process.
I agree, but I think I understood that they do not have the resources right
now to do such work. They can always jump in once we have a request for
comments.

Cheers,

Gaspard
Ross Bencina
2010-08-20 06:08:30 UTC
Permalink
Hi Guys
Post by Gaspard Bucher
On the other hand, "control" messages MUST be broadcasted so that all remote controls stay in sync, be they GUI or some processing master.
Keep in mind that UDP is inherently unreliable. Depending on any message (either a /set or /get reply) actually being delivered is unrealistic. The statement above seems to assume that there is guaranteed message delivery. Am I missing something?

The obvious solutions are:

A- value updates always need to be continuously streamed so that even if individual values get dropped, the receiver will get the most recent value, eventually.

B- some kind of ack/resend protocol (client could just resend /get after a timeout).

D- combination of the two (stream until all clients ack).

In any case, saying that messages "MUST be broadcasted" doesn't sit well with either of these strategies (some clients might want (A) some clients might want (B)).

This is all related to the "data" vs. "control" model below, but I don't think it's equivalent -- perhaps all "data" streams would use (A) but control streams could use either -- some control sources are inherently streaming (accelerometer data?), some are atomic and stable (toggle switch transitions).

Ross.





===================================
Perform, Compose, Mangle
AudioMulch 2.0 modular audio software for PC and Mac
http://www.audiomulch.com
Gaspard Bucher
2010-08-20 07:46:51 UTC
Permalink
Hi Ross and Co !
Post by Ross Bencina
Hi Guys
On the other hand, "control" messages MUST be broadcasted so that all
remote controls stay in sync, be they GUI or some processing master.
Keep in mind that UDP is inherently unreliable. Depending on any message
(either a /set or /get reply) actually being delivered is unrealistic. The
statement above seems to assume that there is guaranteed message delivery.
Am I missing something?
No, the message above just says that a control change must be broadcasted,
not received. If a slider is moved, this generates between 4 and 100 "/set"
messages and they should all be broadcasted.
Post by Ross Bencina
A- value updates always need to be continuously streamed so that even if
individual values get dropped, the receiver will get the most recent value,
eventually.
B- some kind of ack/resend protocol (client could just resend /get after a timeout).
D- combination of the two (stream until all clients ack).
In any case, saying that messages "MUST be broadcasted" doesn't sit well
with either of these strategies (some clients might want (A) some clients
might want (B)).
This is all related to the "data" vs. "control" model below, but I don't
think it's equivalent -- perhaps all "data" streams would use (A) but
control streams could use either -- some control sources are inherently
streaming (accelerometer data?), some are atomic and stable (toggle switch
transitions).
Ross.
I do not understand the difference between A and B: in a GUI, when a user
moves a slider, the application has no idea whether this is the "final" value
or just a transition so it must send all changes ==> stream of "/set" = [A].

And as you mention, a toggle switch = [B].
Ross Bencina
2010-08-21 06:41:43 UTC
Permalink
Hi Gaspard

First a couple of points:

1. Real-time synchronisation of distributed state is a problem that has been widely studied. Not least in the VR/simulations/gaming sphere and also in the CSCW community. A number of solutions have been proposed and we can run through some of them if you like. There are similar issues addressed in real-time transport of multimedia data (John Lazzaro's RTP MIDI packetisation and Nack-Oriented Reliable Multicast are two examples). TUIO (a widely used OSC-based protocol) draws on ideas from Lazzaro's RTP MIDI packetisation for example -- it has been used in distributed applications (such as the distributed reacTable at ICMC 2005) and worked well between Barcelona and Linz, in spite of packet loss.

2. You wrote:
In any case, if we need clients to get ack replies on whether their control change has reached the target, we should use TCP/IP and not try to reinvent the transport layer. But this can be decided later once we have real world use cases with dropped messages.
<<<

It's difficult to take you seriously if you think there are no problems with dropped messages and UDP. Loss of UDP packets is common on the internet and over wireless networks. There are enough people using OSC over wireless LANs to give a real-world use case of wireless UDP. Dropped packet issues are much worse if broadcast or multicast UDP is used.

I disagree that TCP is the solution. TCP is not a real-time protocol. A lost packet in a TCP stream will stall the pipeline until the retransmit has been processed -- on long-haul or wireless networks this can result in significant delays. There are other, better mechanisms you can use to guarantee data synchronisation in real-time protocols (see for example the ideas I quoted above).

Please let me know what you think. If you are basing this whole project on the assumption that UDP is a reliable transport (or that the protocol you're designing requires a reliable transport) then I think stakeholders need to understand that -- it's a pretty fundamental assumption and one that I would prefer not to make.

Now, on to your other points.
No, the message above just says that a control change must be broadcasted, not received. If a slider is moved, this generates between 4 to 100 "/set" messages and they should all be broadcasted.
<<<

Ok, fair enough. But in that case, if you don't care about delivery, then you shouldn't care whether every message is sent -- perhaps the sender might want to apply some data thinning? Why prohibit it?
I do not understand the difference between A and B: in a GUI, when a user moves a slider, the application has no idea wether this is the "final" value or just a transition so it must send all changes ==> stream of "/set" = [A].
<<<<

In (A) data is continuously retransmitted _even if there is no change_. That way the receiver will always stabilise on the most recent value even if packets are dropped. If you only stream the changes, and then stop sending, and the last packet(s) are lost, then the receiver will have a different value than the sender and they will be out of sync.
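
For what it's worth, here is a minimal sketch of strategy (A) as I read it (my own illustration with made-up names and timing): the sender keeps rebroadcasting the latest known value at a fixed rate, even when nothing changed, so a receiver behind a lossy link converges on the current state soon after the last drop:

    import time

    latest = {"/mixer/volume": 0.5}         # most recent value per url (address is made up)

    def send(url, value):
        print("send", url, value)           # placeholder; a real sender would emit an OSC packet

    def rebroadcast(interval=0.1, rounds=20):
        # Resend every known value each interval, changed or not; dropped
        # packets are repaired by the next round.
        for _ in range(rounds):
            for url, value in latest.items():
                send(url, value)
            time.sleep(interval)

    rebroadcast()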
And as you mention, a toggle switch = [B].
<<<<

Actually a toggle switch could be done either way.
Gaspard Bucher
2010-08-21 10:54:24 UTC
Permalink
Hi Ross,

(replies below)
Post by Ross Bencina
Hi Gaspard
1. Real-time synchronisation of distributed state is a problem that has
been widely studied. Not least in the VR/simulations/gaming sphere and also
in the CSCW community. A number of solutions have been proposed and we can
run through some of them if you like. There are similar issues addressed in
real-time transport of multimedia data (John Lazzaro's RTP MIDI
packetisation and Nack-Oriented Reliable Multicast are two examples). TUIO
(a widely used OSC-based protocol) draws on ideas from Lazzaro's RTP MIDI
packetisation for example -- it has been used in distributed applications
(such as the distributed reacTable at ICMC 2005) and worked well between
Barcelona and Linz, in spite of packet loss.
Thanks for the pointers. I read the part concerning packet recovery in RTP
midi and added the links on the xdif wiki on the "set/get" page.
Ross Bencina
2010-08-22 09:33:24 UTC
Permalink
Hi Gaspard

You wrote:
Thanks for the pointers. I read the part concerning packet recovery in RTP
midi and added the links on the xdif wiki on the "set/get" page.
Gaspard Bucher
2010-08-22 11:57:08 UTC
Permalink
Hi Ross,

I will attend a master class on Kyma X in Geneva on the 29th of September
given by Carla Scaletti and Kurt Hebel (@ Cie Gilles Jobin). I will try to
contact them beforehand to see if we can find some time to talk about the
protocol issues.

On the EHSNR versus timestamp, the goal of the serial number is to easily
detect a missed packet (which requires a lookup in the journal).

I like your idea to keep only the most recent values in the journal. We
could use an attribute on the url to set whether controls can be squashed or not.

I started a page on the wiki (
http://xdif.wiki.ifi.uio.no/Reliable_communication) to describe the
different options to manage packet loss recovery.

I think we will need some use case scenarios to help us approximate packet
size, network bandwidth and such before we decide on an implementation.

On the slider fighting side, I really like the idea of takeover/release, but
it might be hard to implement for silly (but possibly useful) cases where
we have multiple non-human sources (sensors, oscillators, etc) sending
changes to the same target. Moreover, a lost "release" message (failing
hardware, device powered off before release) can be disastrous.

I'll keep you posted on the contacts with the people from Kyma.

Gaspard

Adrian Freed
2010-08-22 16:06:59 UTC
Permalink
Post by Gaspard Bucher
On the slider fighting side, I really like the idea of takeover/release, but it might be hard to implement for silly (but possibly useful) cases where we have multiple non-human sources (sensors, oscillators, etc) sending changes to the same target. Moreover, a lost "release" message (failing hardware, device powered off before release) can be disastrous.
The solution to these sorts of problems is a "lease". The idea is that there is an implied "release" at some short time in the future that can be deferred by subsequent messages.
Then if the sender fails, the desired state is returned to. This concept, if carefully thought through, avoids a lot of problems people encounter attempting to synchronize state between two or more distributed systems. In fact, in many situations you can avoid polling to try to synchronize states (an impossibility in practice anyway) altogether.
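
As a rough sketch of how a lease could look (hypothetical code, my own names and timing; not from any existing implementation): every incoming control message renews a short lease, and when the lease expires without renewal the receiver falls back to a default state, so a crashed or powered-off sender can never leave a control stuck:

    import time

    LEASE = 2.0                              # seconds; assumed lease length

    class LeasedControl:
        def __init__(self, default):
            self.default = default
            self.value = default
            self.expires = 0.0

        def receive(self, value):            # every message implicitly renews the lease
            self.value = value
            self.expires = time.monotonic() + LEASE

        def current(self):                   # lease ran out: return to the desired state
            if time.monotonic() > self.expires:
                self.value = self.default
            return self.value

    volume = LeasedControl(default=0.0)
    volume.receive(0.8)
    print(volume.current())                  # 0.8 while the lease is live
    time.sleep(LEASE + 0.1)
    print(volume.current())                  # sender vanished: back to 0.0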
Andy W. Schmeder
2010-08-25 05:41:18 UTC
Permalink
Hi.


Assuming that we are going to adhere to the not-invented-here principle, I believe that most users of OSC would prefer a continuous retransmission system for its conceptual and practical simplicity, rather than a recovery journal.

That said there are numerous protocols from the IETF that handle such requirements without resorting to TCP. For example SCTP, RUDP, etc. In the 2004 meeting Lazzaro showed how to use the RTP family to tunnel OSC (slides should still be online at opensoundcontrol.org). Note that he did not recommend tunneling OSC over RTP-MIDI.

In the AVBC work we put a control for minimum and maximum packet rates on stream subscriptions. This can be seen as a frequency-bandwidth limitation on control signals. I believe this mitigates many potential problems including feedback loops and also provides crude bandwidth management and advises the clients what peak sample rate the endpoint might actually support.

Also, for those concerned about bandwidth, it is a simple matter to implement an adaptive sampling filter that only forwards a message if the minimum retransmit interval elapses, or if >n bits change in the data portion--for integer data usually it will be n=0 and for floating point one can use a moving estimate of the noise to determine when the signal portion changes. My informal experiments with this found that the approach easily achieves compression rates of 10:1 or better on typical "gesture" data streams where the source has a relatively high sample rate (e.g. 1000hz). Correct implementation of this requires timestamps as does any use of asynchronous sampling rates.
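
For readers who want to see the shape of such a filter, here is a simplified sketch (my own illustration: a plain value threshold stands in for the ">n bits" / noise-estimate test, and the interval is made up):

    import time

    class AdaptiveFilter:
        def __init__(self, min_interval=0.05, threshold=0.01):
            self.min_interval = min_interval  # always forward at least this often
            self.threshold = threshold        # forward sooner if the change is significant
            self.last_value = None
            self.last_sent = 0.0

        def forward(self, value, now=None):
            now = time.monotonic() if now is None else now
            due = (now - self.last_sent) >= self.min_interval
            changed = self.last_value is None or abs(value - self.last_value) > self.threshold
            if due or changed:
                self.last_value = value
                self.last_sent = now
                return True                   # caller should transmit this sample
            return False                      # drop: nothing new and not due yet

    f = AdaptiveFilter()
    samples = [0.500, 0.500, 0.501, 0.530, 0.530, 0.530]
    print([f.forward(v, now=i * 0.001) for i, v in enumerate(samples)])
    # -> [True, False, False, True, False, False]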

Finally, the AVB working group has a defined target for how fast their network audio control system should recover its connection-graph after a node has been reset. I think it's around 10 seconds? This gives a rough idea of how often the stream subscriptions need to be retransmitted.
---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu
Gaspard Bucher
2010-08-25 06:48:28 UTC
Permalink
Andy W. Schmeder
2010-08-25 19:13:12 UTC
Permalink
Post by Gaspard Bucher
From my understanding of the continuous retransmission system, this works fine for state-based resources (controls) or continuous data flows where a missed packet does not have a huge impact (the data flow acts as retransmission when the frequency is high enough).
But how do we deal with a lost packet such as "NoteOff"? Since the midi message frequency can be higher than the retransmission frequency, does this mean that the "NoteOff" would never be retransmitted? Any idea?
"note on" and "note off" are state-transitions, not states, so they can't be used with the REST architecture (http://en.wikipedia.org/wiki/Representational_State_Transfer).


Here are the options, then, as I see them:


1. Make a filter that decodes the state transitions into a state representation (see the sketch after this list). Then use the filter prior to transmission, and since there is no possibility of a lost packet at this point, the implementation is relatively simple. It turns out that this filter has to be written anyway, so no programmer-time is saved by not choosing this option.

2. Make a filter comprising the functionality of the filter in #1, but one that is also robust to lost messages so it can operate on the receiving end. To get this robust behavior correct will probably compromise some functionality, possibly to the point of violating the system requirements. Also, adequate testing is rather hard because the number of possible sequences of lost or mis-ordered messages can be very large.

3. Use a reliable (assured) transport at a lower layer (such as TCP, SCTP, RUDP...), and run the filter of #1 on the receiving side.

4. As in #3, but invent a reliable transport mechanism in the current layer (i.e., in OSC).


Option #1 might use more bandwidth, thus the compression strategies if that is a concern.

Option #3 may be undesirable for some reason (e.g. limited capability network stack or a preference for minimal dependencies).

Assuming that #3 is rejected and #1 is infeasible for some reason, options #2 and #4 are essentially solving the same problem, but #4 is re-usable.
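
A minimal sketch of the filter in option #1 (hypothetical names; the state here is simply the set of currently held notes, which can then be streamed and repaired like any other state):

    held = set()

    def decode(event, pitch):
        # Fold a note-on/note-off transition into state; return the state to (re)transmit.
        if event == "note_on":
            held.add(pitch)
        elif event == "note_off":
            held.discard(pitch)
        return sorted(held)                  # e.g. sent as /keyboard/held_notes <list>

    print(decode("note_on", 60))             # [60]
    print(decode("note_on", 64))             # [60, 64]
    print(decode("note_off", 60))            # [64]; losing one retransmission of this
                                             # state is repaired by the next one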



---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu
Ross Bencina
2010-08-25 08:19:58 UTC
Permalink
Post by Andy W. Schmeder
Assuming that we are going to adhere to the not-invented-here principle, I
believe that most users of OSC would prefer a continuous retransmission
system for its conceptual and practical simplicity, rather than a recovery
journal.
Hi Andy

Would you mind filling us in on which particular aspect of this discussion
is adhering (or not) to the NIH principle? I'm a bit lost.

Thanks

Ross.
Andy W. Schmeder
2010-08-25 19:46:51 UTC
Permalink
Post by Ross Bencina
Would you mind filling us in on which particular aspect of this discussion
is adhering (or not) to the NIH principle? I'm a bit lost.
My observation is that suggestions in this group to use other protocols to accomplish some needed functionality don't live very long. My intuition is that the reason for avoiding such things is that there is a very strong preference for 1) minimal dependencies, and 2) lightweight implementations. Those are fine motivations, but if one adds "reliable delivery" or "security" as a requirement then it becomes difficult to keep the other two desires in balance.

Thus our strategy at CNMAT is to design systems that don't depend on such things, although this habit doesn't come without some relearning because it's different from the common procedural style of software engineering.

Since HTTP is often used as the canonical REST-ful protocol, note that it also does not support reliability or security--it leverages other protocols for those requirements.

This is a bit of an aside, but note that HTTP isn't necessarily an *ideal* example; it has its own set of problems. For example, its POST method is notoriously problematic since it implies a non-repeatable state transition. The complexity of the code in a typical web application server needed to fix this design flaw is staggering--you may even have seen desperate pleas by web programmers on web forms such as "please don't click this button twice!"; it's that hard to fix.


---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu
Trond Lossius
2010-08-20 13:03:40 UTC
Permalink
Hi Gaspard,

First, on control and data: I agree that this distinction is required; generally it overlaps with the distinction between control rate and signal rate in many audio processing programs, and between video and control data in video algorithms. Not that it really matters to this discussion, but most of the time I consider MIDI to be control data (e.g. controlling a synth process). And the discussion we are having relates to OSC and control data. If we wanted to pass the signals themselves around, we would most likely have to look into options for streaming media instead.

Concerning the question of broadcasting versus using subscription-based communication, I have so far mainly used OSC inside of one application: Max/Jamoma. Up until now we have been using broadcasting for announcing states. E.g. if a cueScript module queries the whole system for current values, all parameters respond with the same OSC message, giving their own address as the first argument. This seems to be similar to the oscit proposition that I was reading up on when responding to this thread. Likewise, if we have a mapping module that continuously maps one parameter to another, all parameters will be continuously broadcasting their changes, and the mapping module will be filtering the broadcasted stream looking for the changes it is actually interested in. A final example is a module continuously recording any changes to parameters of the system. This could be used to record e.g. an improvisation that can later be played back and further edited.

The downside we have experienced from the broadcasting is that there's a lot of overhead involved, which in a large patch with many parameters can severely slow the system down. For this reason Theo de la Hogue is currently working on the NodeLib. Here are two documents on it from the Jamoma redmine wiki:

http://redmine.jamoma.org/projects/modular/wiki/NodeLib
http://redmine.jamoma.org/projects/modular/wiki/Everything_about_the_NodeLib

The basic idea is that the NodeLib will function as a central hub, making the required connections while avoiding unnecessary overhead where possible. As such it can be conceived as a kind of OSC switch, making selective connections between nodes instead of passing everything everywhere. From time to time broadcasts might still be necessary (e.g. for the recording module), but it should speed up and trim OSC communications in a way that we expect will give performance benefits.

Your use of OSC communication might differ substantially from this, and I guess the question of e.g. a server-based versus peer-to-peer communication topology will radically change how OSC queries will have to be done. Maybe it would be an idea to develop a number of scenarios or use cases that could help illuminate the different needs.

Finally, concerning the propositions, the distinctions you now suggest would help. Based on your suggestion, and what we currently do in Jamoma, I would like to suggest modifying it to e.g:

1. get state => /some/url/get

The /get addition for (1) would be a huge improvement to us as compared to /some/url. This also ensures that if a get state request is sent to an oscit-non-compliant system, the address won't match the internal namespace, and hence will hopefully be ignored or throw an error message.

2. set state => /some/url

This is the most common task, and also the one that has been around for a long time. Any non-oscit-compliant application is likely to support this, so we ensure backwards compatibility.

3. subscribe => /some/url/subscribe

This one makes a lot of sense to me.

4. data sink => /some/url/

For the data sink I can imagine either using the same command as for set state, or using a fourth candidate, e.g. /some/url/updated

In Jamoma we use the same as for get state. Developers will have to take care to avoid a feedback loop if it is broadcasted and picked up again by the same application, but the resulting namespace, which can be accessed and used for mappings e.g. using the OSC-map application, will be a tidy one.

BTW: Have you looked into CopperLan? I know Pascal Baltazar was really impressed with their design decisions, so it might be worth studying.

http://www.copperlan.org

Best,
Trond
Gaspard Bucher
2010-08-20 14:12:38 UTC
Permalink
Post by Trond Lossius
Hi Gaspard,
First on control and data: I agree that this distinction is required,
generally this overlaps with the distinction between control rate and signal
rate in many audio processing programs, and between video and control data
in video algorithms. Not that it really matters to this discussion, but most
of the time I consider MIDI to be control data (e.g. controlling a synth
process). And the discussion we are into is related to OSC and control data.
If we wanted to pass the signals themselves around, we would most likely
rather have to look into options for streaming media.
I agree. I think we need some conventions and attributes to announce media
streams with RTP or other means but this is another issue (
http://xdif.wiki.ifi.uio.no/Subscription).
Post by Trond Lossius
Concerning the question of broadcasting versus using subscription-based
communication, I have so far mainly used OSC inside of one application;
Max/Jamoma. Up until now we have been using broadcasting for announcing
states. E.g. if a cueScript module queries all of the system for their
current value, all parameters responds with the same OSC message, giving
their own address as the first argument. This seems to be similar to the
oscit proposition that I was reading up on when responding to this thread.
Likewise, if we have a mapping module that continuosly map one parameter to
another, all parameters will be continuously broadcasting their changes, and
the mapping module will be filtering the broadcasted stream looking for the
changes it is actually interested in. A final example is a module
continuously recording any changes to parameters of the system. This could
be used to record e.g. an improvisation, that later on can be played back
and further edited.
The downside we have experienced from the broadcasting, is that there's a
lot of overhead involved, that in a large patch with many parameters
severely can slow the system down. For this reason Theo de la Hogue is
currently working on the NodeLib. Here are two documents on it from the
http://redmine.jamoma.org/projects/modular/wiki/NodeLib
http://redmine.jamoma.org/projects/modular/wiki/Everything_about_the_NodeLib
The basic idea is that the NodeLib will function as a central making the
connections required but avoiding unnecessary overhead where possible. As
such it can be conceived as a kind of OSC switch, making selective
connections between nodes instead of passing everything everywhere. From
time to time broadcasts might still be necessary (e.g. for the recording
module), but it should speed up and trim OSC communications in a way that we
expect will give performance benefits.
Your use of OSC communication might differ substantially from this, and I
guess the question of e.g. a server-based versus peer-to-peer communication
topology will radically change how OSC queries will have to be done. Maybe
it would be an idea to develop a number of scenarios or sue cases that could
help illuminate the different needs.
My situation is the reverse: currently no control changes that happen internally
are broadcasted (connections are just function calls). This is fast but not
very nice for external devices. I think there is no best architecture, and
this is why I think it makes sense to use the same data/control distinction for
internal communications. Sometimes we need fast (but non-followable) signals
and sometimes we want to pay the broadcasting overhead because we need
external devices to follow the changes.
Post by Trond Lossius
Finally, concerning the propositions, the distinctions you now suggest
would help. Based on your suggestion, and what we currently do in Jamoma, I
1. get state => /some/url/get
The /get addition for (1) would be a huge improvement to us as compared to
/some/url. This also ensures that if a get state request is sent to a
oscit-non-complient system, the address won't fit the internal namespace,
and hence hopefully be ignored or throw an error message
2. set state => /some/url
This is the most common task, and also the one that has been around for
long. Any non-oscit-complient application is likely to support this, so we
ensure backwards compatibility.
This means all changes on "/some/url" are broadcasted (control), right ?
Post by Trond Lossius
3. subscribe => /some/url/subscribe
This one makes a lot of sense to me.
This is only for data sources, right ? Registration for control changes is
done for the whole device with "/osc/register".
Post by Trond Lossius
4. data sink => /some/url/
For the data sink I can imagine either using the same command as for set
state, or use a fourth candidate, e.g. /some/url/updated
As I said above, I think we need an explicit distinction between signal targets
that are "data sinks" and "control changes", because this determines whether
notification takes place or not.

I liked the "/set" postfix for control changes (with notification) and the
raw url for data sink (without notification). I think this way compatibility
is even better: old applications can receive messages but will typically not
notify, hence the default to "data sink".

From an implementation perspective the "/set" postfix is interesting because
the "controller", which will also be in charge of notifications, can easily
detect the "control" messages, strip the "/set" postfix, call "/some/url",
call "/some/url/get" and notify. This means that objects just need two
methods, "/some/url" (data sink) and "/some/url/get" (get current state), in
order to act both as control and data. Not implementing "/get" means that
the resource is a data sink only (an impulse, for example).
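
A rough sketch of that controller flow (my own illustration; the object table and notify callback are made up): on "/some/url/set" the controller strips the postfix, calls the raw url, reads "/get" back and notifies, while objects only ever implement the raw url and, optionally, "/get":

    objects = {}                             # url -> {"call": fn, "get": fn (optional)}

    def handle(address, args, notify):
        if address.endswith("/set"):
            url = address[:-len("/set")]
            obj = objects[url]
            obj["call"](args)                # apply the change
            if "get" in obj:                 # control: read the state back and notify
                notify(url, obj["get"]())
        else:
            objects[address]["call"](args)   # plain data sink: no notification

    # A resource implementing both methods behaves as a control:
    state = {"volume": 0.0}
    objects["/mixer/volume"] = {
        "call": lambda args: state.update(volume=args[0]),
        "get": lambda: state["volume"],
    }
    handle("/mixer/volume/set", [0.7], notify=lambda url, v: print("notify", url, v))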
Post by Trond Lossius
In Jamoma we use the same as for get state. Developers will have to ensure
to avoid a feedback loop if it is broadcasted and picked up again by the
same application, but the resulting namespace that can be accessed and used
for mappings e.g. using the OSC-map application, will be a tidy one.
BTW: Have you looked into CopperLan? I know Pascal Baltazar was really
impressed with their design decisions, so it might be worth studying.
I looked at CopperLan but could not find the protocol description on their
website... Moreover, since their protocol is probably well protected by all
kinds of copyrights and patents and ugly monsters, I am not sure I want to
put my head in the dragon's mouth...

Cheers,

Gaspard
Jamie Bullock
2010-09-02 12:02:32 UTC
Permalink
Hi All,

Since I was involved in some of the original discussions on oscit, it may be useful for me to describe the subsequent decisions we took in Integra. I guess some people on this list may be interested...

Basically, our original intention was to use OSC as our sole protocol for communication between our server/host and clients such as the Integra Live GUI. We wanted to allow for introspection of module attributes and attribute state, so that clients could build a model of the server state without a priori knowledge. To achieve this we started thinking about bi-directional OSC with query/response. Since our protocol was similar to the work done by Gaspard in rubyk, we contributed some ideas into oscit.

However, shortly after the oscit discussions, Integra abandoned the notion of any kind of query over OSC. It seemed that we were simply duplicating functionality available in other protocols, but in a less robust way and without any real advantage. We therefore decided to adopt a multi-protocol approach, using OSC for communications where low latency is important but guaranteed packet delivery and ordering is less vital, and XMLRPC for bi-directional communications.

The specification for our XMLRPC protocol can be found at: http://www.integralive.org/incoming/api.html. The protocol allows for full CRUD on the server, as well as notification and query methods. Our OSC interface allows for attributes to be set only (equivalent to command.set() in the XMLRPC API). This solution allows for a high degree of introspection (via XMLRPC) with the option of low-latency control via OSC if required. It may not work well for other projects with different requirements, but it works for us, so I thought I'd mention it! If anyone wants to use the protocol, or the library that implements it: libIntegra, you are welcome. libIntegra is in the project svn under library/trunk. http://sourceforge.net/projects/integralive/

All best,

Jamie


--
http://www.jamiebullock.com
Gaspard Bucher
2010-09-04 20:16:56 UTC
Permalink
Hi Jamie !

It's nice to hear from you. I fully understand why OSC might not be the best
tool for the query system and have been thinking of using HTTP+XML at least
for operations on the processing tree itself (add a node, change processing,
update a script). For these operations OSC over UDP is clearly not the
easiest choice (MTU is small, no guaranteed delivery).

I still have two questions:

1. Why didn't you use OSC over TCP for the low latency part instead of
XMLRPC ?

2. Since you are probably sending musical events through OSC, the XMLRPC
does not free you from managing lost packets (lost NoteOff for example). How
do you handle this ?

To the rest of the list, I was quite stressed by the MTU of UDP. A typical
lua script can easily contain 4500 bytes, far more than the minimum MTU of
576 for UDP. This means that a save operation (which sends the whole script)
would have to split the packet, and then the reordering and rebuilding of the
packets gets complicated.

Maybe we have some kind of nice pattern:

fast com. | reliable com.
A. latency : low | high
B. packet size : small | big
C. reliability : low | high

This would mean that we could simply pick the transport depending on the
need (= all clients and servers must have a UDP and a TCP socket). For
example:

send message: UDP
notification (message received): TCP
query: TCP

This means that if we do not receive a notification for a "NoteOff" event or
"volume change", we send it again.

What do you think ?

Gaspard
Jamie Bullock
2010-09-06 16:25:24 UTC
Permalink
Hi!
Post by Gaspard Bucher
Hi Jamie !
It's nice to hear from you. I fully understand why OSC might not be the best tool for the query system and have been thinking of using HTTP+XML at least for operations on the processing tree itself (add a node, change processing, update a script). For these operations OSC over UDP is clearly not the easiest choice (MTU is small, no guaranteed delivery).
1. Why didn't you use OSC over TCP for the low latency part instead of XMLRPC ?
I think either I don't understand your question, or I wasn't clear in my original mail. We're using XMLRPC for CRUD + some other stuff (including 'set()' messages), and OSC over UDP just for set() i.e. the OSC API has a small subset of the capability of the XMLRPC API. The rationale being that in situations where we'd want to use OSC (high data rate control from sensors and other live inputs), we don't care so much about the limitations of UDP, but we do care about latency.
Post by Gaspard Bucher
2. Since you are probably sending musical events through OSC, the XMLRPC does not free you from managing lost packets (lost NoteOff for example). How do you handle this ?
In the specific case of MIDI, we're using... MIDI! Again it was a case of thinking through how to do MIDI-over-OSC, and concluding that 1. we don't need to, and 2. it has more drawbacks than advantages.

However, in the more general case of soliciting message responses, yes, it is possible to use our XMLRPC API. A good example use case is transport start/stop/jump, where you want to be pretty damned sure the message got through, but probably don't care so much about latency.
Post by Gaspard Bucher
To the rest of the list, I was quite stressed by the MTU of UDP. A typical lua script can easily contain 4500 bytes, far more then the minimum MTU of 576 for UDP. This means that a save operation (sends the whole script) should split the packet and then the reordering and rebuilding of the packet gets complicated.
(interesting... we're using Lua too... for our scripting language)
Post by Gaspard Bucher
fast com. | reliable com.
A. latency : low | high
B. packet size : small | big
C. reliability : low | high
send message: UDP
notification (message received): TCP
query: TCP
This means that if we do not receive a notification for a "NoteOff" event or "volume change", we send it again.
What do you think ?
This sounds eminently sensible, but reading this list and the threads that keep popping up about OSC query, I do wonder if we're just trying to generalise a problem that really should be left to specific solutions (beware the Turing tar pit!) It seems to me we have a number of developers here each with subtly different needs that can't ever be met through a single query protocol. Although, I'd love to be proved wrong!

Just my 2p...

All best,

Jamie
