Discussion:
[OSC_dev] "What's New" in OSC 2.0
Andy W. Schmeder
2011-11-04 20:26:04 UTC
Permalink
The following is an excerpt from the draft document giving my list of design goals and proposed changes.

If your favorite issue is *not* on the list, feel free to speak up. Or if your favorite feature is scheduled to be cut, or something else doesn't make sense.

...

The main design goals for OSC 2.0 are:

• Improve the packet format.

• Improve processing and coding efficiency.

• Resolve ambiguous syntax and semantics.

• Establish protocol for discovery, enumeration and control.

• Establish well-defined file format.

• Establish well-defined transformations to other coding schemes (JSON, XML, YAML).


The following are removed, resolved or simplified compared to OSC 1.0:

• Removes the “built-in” timestamp field from bundles.

• Resolves semantics of timestamps for use in temporal processing.

• Resolves syntax and semantics of address pattern expressions in messages.

• Removes requirement for backtracking in processing of the address pattern syntax.

• Resolves an oversight in the definition of the address pattern numeric range expression so that multi-digit index selectors can be used.

• Removes the need for a framing protocol when used with a serial transport.

• Simplifies the packet format so there is only one class of valid packet which is the bundle.

• Resolves ambiguous semantics of nested bundles.

• Removes all type-tags that implicitly carry data (I, T, N, F).

• Removes all improper types from the list of optional types ("improper" is any type with implicit semantics, such as "RGB color").


The following are additions or extensions:

• Adds a version number to the packet header to support future format revisions.

• Adds an options bit-flag field to the packet header to support different low-level encodings (see the illustrative sketch after this list).

• Enables encoding of data sections with big- or little-endian byte order.

• Enables encoding of data sections with 4-, 8-, 16-, or 32-byte alignment.

• Adds an optional packet checksum.

• Adds float and integer 64-bit numeric types as mandatory types to the standard.

• Adds a comprehensive collection of optional types.

• Enables encoding of structured content by using nested bundles.

• Adds lookup-table string compression for bandwidth-constrained applications, whereby address and/or typetag strings may be substituted by a numeric identifier.

• Adds a set of predefined addresses and their protocol semantics to enable discovery, enumeration and control.

• Provides a file format using IANA MIME.

• Provides transformations to other content formats including JSON, XML, YAML.

• Provides recommended practice for ad-hoc discovery on Layer 2 (how???) and Layer 3 (Zeroconf??)
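
As a purely illustrative sketch of the header items above, in C (none of the field widths, flag bits or names here are fixed by the draft; they are all assumptions):

/* Hypothetical OSC 2.0 packet header -- field widths and flag
 * assignments are illustrative assumptions, not draft text. */
#include <stdint.h>

struct osc2_header {
    uint8_t  version;   /* format revision, e.g. 2 */
    uint8_t  options;   /* bit-flags, see below */
    uint16_t checksum;  /* optional packet checksum; 0 when unused */
};

enum {
    OSC2_OPT_LITTLE_ENDIAN = 1 << 0, /* data section byte order */
    OSC2_OPT_ALIGN_MASK    = 3 << 1, /* bits 1-2: 4-, 8-, 16- or 32-byte alignment */
    OSC2_OPT_CHECKSUM      = 1 << 3, /* checksum field is in use */
    OSC2_OPT_STRING_TABLE  = 1 << 4  /* lookup-table string compression */
};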



---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder
mobile: +1-510-717-6653

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu
Jeff Koftinoff
2011-11-04 21:27:58 UTC
Permalink
Hi Andy

I have been working on a non-JSON, ASCII, line-based representation format of an OSC message.

For instance:

/input/1/eq/1 f 1.0 f 0.5 f 1000.0


would be equivalent to the OSC message
/input/1/eq/1 ,fff 1.0 0.5 1000.0

Allowing this plain format would make it simple for some non-programmable microcontrollers/systems to participate in an OSC system.

A device receiving this line would typically generate the equivalent OSC message and then pass the OSC message to the internal OSC receive function.
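
As a rough sketch of that conversion in C (assuming only 'f' float arguments, ASCII input and no error recovery; a real implementation would differ):

/* Convert one text line, e.g. "/input/1/eq/1 f 1.0 f 0.5 f 1000.0",
 * into a standard binary OSC message. Returns the message size, or 0. */
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <arpa/inet.h> /* htonl */

/* OSC strings are NUL-terminated and padded to a multiple of 4 bytes */
static size_t pad4(size_t n) { return (n + 3) & ~(size_t)3; }

size_t line_to_osc(const char *line, uint8_t *out, size_t outlen)
{
    char buf[256], types[34] = ",", *save, *tok;
    float args[32];
    int nargs = 0;

    strncpy(buf, line, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';

    tok = strtok_r(buf, " \t\r\n", &save);      /* the address */
    if (!tok || tok[0] != '/') return 0;
    const char *addr = tok;

    while ((tok = strtok_r(NULL, " \t\r\n", &save)) && nargs < 32) {
        if (strcmp(tok, "f") != 0) return 0;    /* only 'f' in this sketch */
        tok = strtok_r(NULL, " \t\r\n", &save); /* the value */
        if (!tok) return 0;
        types[1 + nargs] = 'f';
        args[nargs++] = strtof(tok, NULL);
    }

    size_t alen = pad4(strlen(addr) + 1);
    size_t tlen = pad4(strlen(types) + 1);
    size_t total = alen + tlen + 4u * nargs;
    if (total > outlen) return 0;

    memset(out, 0, total);
    memcpy(out, addr, strlen(addr));
    memcpy(out + alen, types, strlen(types));
    for (int i = 0; i < nargs; i++) {
        uint32_t bits;
        memcpy(&bits, &args[i], 4);             /* float -> raw bits */
        bits = htonl(bits);                     /* OSC data is big-endian */
        memcpy(out + alen + tlen + 4u * i, &bits, 4);
    }
    return total;
}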

When I have a working system I can send more complete details!

Regards,
Jeff
Andy W. Schmeder
2011-11-04 21:44:50 UTC
Permalink
Post by Jeff Koftinoff
I have been working on a non-JSON, ASCII, line-based representation format of an OSC message.
/input/1/eq/1 f 1.0 f 0.5 f 1000.0
I like it, also useful for command-line tools and typing out messages by hand...
Post by Jeff Koftinoff
When I have a working system I can send more complete details!
Yes, please do!



---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu
s***@xs4all.nl
2011-11-04 23:53:11 UTC
Permalink
Hi Andy,
you may recall we had some discussion on this some time ago regarding such
areas as command, enumeration and discovery.

The results of that discussion were archived here:
http://openmediacontrol.wetpaint.com/

You might be interested in looking at that; there is a summary and a
couple of sample implementations.

Regards,
Salsaman.

http://lives.sourceforge.net
Luke McQuade
2011-11-05 00:19:58 UTC
Permalink
Hello,
I'm new here and a little confused... why are the MIDI Manufacturers
Assoc. working on a new HD MIDI standard (see
http://www.midi.org/aboutus/news/hd.php), whilst OSC is still being
actively developed? Is it a business thing?

Cheers,
Luke
Andy W. Schmeder
2011-11-05 02:07:30 UTC
Permalink
Good question, I believe that the main reason people use OSC is because "addresses" (e.g. /mixer/fader, /knob, etc.) are easier to understand and deal with (at a programming or system configuration level) than having to map all the controls of an interface into the MIDI "channel number / controller number" system.
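
To make that concrete, here is the same fader move in both encodings (the OSC address is invented for illustration; the MIDI bytes are a standard Control Change):

/* MIDI: three opaque bytes that need an external mapping to interpret */
unsigned char cc[3] = { 0xB0, 0x07, 0x64 }; /* ch. 1, controller 7 (volume), value 100 */

/* OSC: the message names itself */
/*   /mixer/channel/1/fader ,f 0.79   */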

The semantics of the MIDI protocol are more or less structured around western classical musical instruments; probably the organ would be the best match--having a set of discrete pitches, a number of registers and a few extra buttons and sliders. As interfaces get increasingly esoteric the semantics of MIDI become increasingly contrary to the underlying structure. Additionally, with gesture recognition interfaces (Kinect, multi-touch tables), the set of "controls" is unbounded--which means we need ways to refer to controls that may be created and destroyed on the fly, and so on. HD MIDI will clear out the crufty old parts of MIDI that go back to 1980s-era technology, which is a very good thing, but it's not going to change the underlying model significantly. Obviously that's also a good thing from a particular point of view, e.g. the business point of view, i.e., not breaking products that are on the market (somewhat cynically I'll go a step further and say that it probably *will* break those products in new and interesting ways, but that's also a good thing from a particular point of view, because then people will have to buy new stuff).

To be perfectly honest we (at CNMAT) have been, somewhere in the recesses of our minds, hoping that something else (not MIDI, not OSC) would step up and fill this need; something that also meets industry needs and therefore can be moved forward by an organization (albeit rather slowly and conservatively) with the funding necessary to do the detailed work of standards development, e.g. the W3C or IETF. A few years ago I was fairly convinced that binary XML would be it, but it seems that isn't getting up to full speed either and has a lot of complicated baggage attached to it.
---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu
Adrian Freed
2011-11-05 03:37:04 UTC
Permalink
Post by Luke McQuade
Hello,
I'm new here and a little confused... why are the MIDI Manufacturers
Assoc. working on a new HD MIDI standard (see
http://www.midi.org/aboutus/news/hd.php), whilst OSC is still being
actively developed? Is it a business thing?
You should ask the MMA why they are still working on HD MIDI after all these years....
MIDI and OSC don't do the same things and they weren't made for or by the same folk.

OSC just specifies the syntax to put messages in - what they mean is up to you, the user.

MIDI specifies everything (meaning, syntax, transport, etc.) and it was designed by a closed industry group (i.e., you have to pay to be a member),
so it is useful if that industry has successfully identified or shaped your needs...

OSC will always be worked on because it is propelled by its use value, not its normative value.

I am proud that we picked the core types in OSC to have long legs. You can buy a $5 800MHz ARM microcontroller with a solid FPU
these days. In that case, why mess around with fixed point, as I believe HD MIDI plans to?
Angelo Fraietta
2011-11-07 01:05:40 UTC
Permalink
Post by Adrian Freed
MIDI specifies everything (meaning, syntax, transport etc.) and it was
designed by a closed industry group
That is not correct. It depends on whether you are using General MIDI
(and then whether A or B). In those cases, they are specified, and
rightfully so.
Basically you want to plug your device in and have it work in every
system that implements that standard (it is supposed to, anyway).

Outside of that it is not as you declare - you can do whatever you
want with MIDI if you are smart enough or have the time to make it work.
Adrian Freed
2011-11-07 03:11:18 UTC
Permalink
Post by Angelo Fraietta
Outside of that it is not as you declare - you can do whatever you
want with MIDI if you are smart enough or have the time to make it work.
Yes. I am not smart enough and too busy.
Angelo Fraietta
2011-11-07 03:59:04 UTC
Permalink
Post by Adrian Freed
Post by Angelo Fraietta
Outside of that it is not as you declare - you can do whatever you
want with MIDI if you are smart enough or have the time to make it work.
Yes. I am not smart enough and too busy.
I actually think the level of "smartness" to use OSC compared to MIDI is
much higher. Too busy is another issue.
Mattijs Kneppers
2011-11-07 12:48:06 UTC
Permalink
Post by Angelo Fraietta
I actually think the level of "smartness" to use OSC compared to MIDI is
much higher.
Although by no means scientific, this is an interesting discussion
that I continue to have with colleagues. Is OSC 'simpler' to use than
MIDI?

This mostly boils down to a discussion about the standardized address
space. Does this make a protocol easier to use for the average user?
This clearly depends on who the average user is. Do you like the Lego
where every piece can only be used in one way (http://bit.ly/sE71R5),
or the Lego where you use your imagination to decide which piece does
what (http://bit.ly/t4bSzA)?

I personally believe that the average users of both MIDI and OSC are
creative people who don't like thinking inside the box.

Not having a predetermined address mapping does require the capacity
to invent a naming convention that best fits a certain project, but I
think that the average user is fully capable of that. Note that this
is a different 'smartness' than the knowledge needed to understand the
underlying technical protocols. But the latter is exactly what we, the
developers, should be shielding them from.
--
arttech.nl | oscseq.com | smadsteck.nl
Andy W. Schmeder
2011-11-07 18:58:50 UTC
Permalink
Post by Mattijs Kneppers
Post by Angelo Fraietta
I actually think the level of "smartness" to use OSC compared to MIDI is
much higher.
Although by no means scientific, this is an interesting discussion
that I continue to have with colleagues. Is OSC 'simpler' to use than
MIDI?
This mostly boils down to a discussion about the standardized address
space. Does this make a protocol easier to use for the average user?
This isn't the question that any user actually asks.

The reason that any tool is used is that it provides value in some context. The value is offset by the cost of the tool. I can cut paper with scissors ($1), or with a laser cutter ($10000). Nobody asks which one is "more simple"; they ask "what can I do with this tool?", and then weigh the costs against that answer.

The value of General MIDI is that you can send messages to MIDI synthesizers, or things pretending to be MIDI synthesizers. The cost is understanding General MIDI, which is not simple by the way (it's an 86-page specification plus numerous supporting documents from the MMA, all distributed for a fee, which is another cost). Of course you can use MIDI to send non-GM messages to anything, but that immediately loses the primary value proposition of MIDI, which is compatibility with other things MIDI.

The value of using OSC is that you can send messages to other things that use OSC. The cost is understanding OSC, as well as possibly having to teach the other thing to understand the content of the messages that you want to send.

OSC was developed at a time when there was a movement in computer music towards ways of generating and controlling sound that are not easily described using the "note, pitch, velocity" abstraction (for example). These things were invented from scratch, they were not MIDI synthesizers, and it was not easy to make them pretend to be MIDI synthesizers; therefore MIDI provided no value in that context. That practice is still there and evolving in even more interesting and unusual directions, none of which can be anticipated.

At the same time there is an explosion of interest in beat/tracker/looper synthesizers (like Live, etc.), which more or less goes back to what MIDI was designed for, so MIDI is still relevant to a particular market segment, but that is just a different segment. OSC isn't for or against that segment, it's just ... different.


---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu
Jeff Koftinoff
2011-11-07 03:19:48 UTC
Permalink
I think the point is that if you want to create your own MIDI Sysex message, you must be a paid member of the MMA and purchase your own sysex manufacturer ID.

If the existing message set of channel messages etc. is fine for your system then you are OK.

Regards,
Jeff
Angelo Fraietta
2011-11-07 03:57:23 UTC
Permalink
Post by Jeff Koftinoff
I think the point is that if you want to create your own MIDI Sysex message, you must be a paid member of the MMA and purchase your own sysex manufacturer ID.
If the existing message set of channel messages etc is fine for your system then you are ok.
General MIDI is only a subset of MIDI.
Outside of General MIDI you are free, and correctly so, to use the
messages however you want without it even being a hack - and I am not
talking about sysex.
As for sysex, you are free to use an academic Sysex ID if you want to
use sysex without even having to register it.
Jeff Koftinoff
2011-11-07 04:11:47 UTC
Permalink
General MIDI defines a selection of standard patches and some control behaviour.

MIDI itself defines all of the standard message types.

You can see them in my open source C++ MIDI library: https://github.com/jdkoftinoff/jdksmidi
specifically,

byte codes here: https://github.com/jdkoftinoff/jdksmidi/blob/master/include/jdksmidi/midi.h
messages here: https://github.com/jdkoftinoff/jdksmidi/blob/master/include/jdksmidi/msg.h
and sysex here: https://github.com/jdkoftinoff/jdksmidi/blob/master/include/jdksmidi/sysex.h
There is also MIDI Time Code and MIDI Show Control.

Yes, you can re-purpose MIDI Note On, MIDI Control Change, etc., if you can be limited to 7-bit values (or 14-bit if you use the hi/low controller pairs).
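
For reference, the 14-bit form sends a controller pair, MSB on controller n and LSB on controller n+32. A minimal sketch (the send() callback is an assumption, standing in for whatever output routine you have):

/* Send a 14-bit control change as an MSB/LSB controller pair.
 * MIDI convention: controllers 0-31 carry the MSB, 32-63 the LSB. */
void send_cc14(unsigned char channel,    /* 0-15 */
               unsigned char controller, /* 0-31 */
               unsigned value,           /* 0-16383 */
               void (*send)(const unsigned char *bytes, int len))
{
    unsigned char msb[3] = { (unsigned char)(0xB0 | channel), controller,
                             (unsigned char)((value >> 7) & 0x7F) };
    unsigned char lsb[3] = { (unsigned char)(0xB0 | channel),
                             (unsigned char)(controller + 32),
                             (unsigned char)(value & 0x7F) };
    send(msb, 3);
    send(lsb, 3);
}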

The complete list of MIDI sysex manufacturer IDs is here:
http://www.midi.org/techspecs/manid.php
and at first glance I can't see any academic IDs defined.

Do you mean that you can use the MIDI port as a serial port and send arbitrary non-standard messages over it? Of course you can, but then you can't ever ship a reasonable product that does that, because once you do, it is technically no longer MIDI and would possibly interfere with existing software and hardware.

One of the things that the MMA does is make sure that no device can possibly interfere with another device using the standard MIDI protocols. This is one of the reasons why they have to make a big switch to HD MIDI instead of making incremental improvements that might damage backwards compatibility.

Regards,
Jeff
Angelo Fraietta
2011-11-07 04:44:26 UTC
Permalink
Post by Jeff Koftinoff
General MIDI defines a selection of standard patches and some control behaviour.
Do you mean that you can use the MIDI port as a serial port and send arbitrary non-standard messages over it? Of course you can, but then you can't ever ship a reasonable product that does that, because once you do, it is technically no longer MIDI and would possibly interfere with existing software and hardware.
No, you can transmit MIDI over another port - not a MIDI port.
I addressed this in an OSC paper at NIME in 2008.
Andy W. Schmeder
2011-11-07 07:28:22 UTC
Permalink
Post by Angelo Fraietta
Post by Jeff Koftinoff
General MIDI defines a selection of standard patches and some control behaviour.
Do you mean that you can use the MIDI port as a serial port and send arbitrary non-standard messages over it? Of course you can, but then you can't ever ship a reasonable product that does that, because once you do, it is technically no longer MIDI and would possibly interfere with existing software and hardware.
No, you can transmit MIDI over another port - not a MIDI port.
I addressed this in an OSC paper at NIME in 2008.
We are familiar with your paper, and the alternative transports of MIDI are well known. RTP-MIDI was invented at UC Berkeley.

I appreciate a critical analysis, and your paper has some good suggestions, many of which have been expressed by other OSC users, and are also reflected in my own notes for future work. However your paper is also unnecessarily confrontational and misleading, frequently presenting your accusations, personal opinions and interpretations as factual claims.

Your paper does not provide an adequate comparative framework to weigh OSC versus MIDI. Statements such as "I actually think the level of "smartness" to use OSC compared to MIDI is much higher" are meaningless; as you point out in the paper, there isn't sufficient common ground to make a comparison at all, yet you go on to claim that "MIDI wins hands down", which is provocative but not a critical statement.

If you have further topics to discuss related to OSC including but not limited to problems or desired features hindering your use of OSC, please feel free; but this continued confrontational tone and rehashed discussion of OSC versus MIDI on the basis of some undefinable metric is inappropriate and not helpful.

---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu
Angelo Fraietta
2011-11-07 21:53:50 UTC
Permalink
Post by Andy W. Schmeder
We are familiar with your paper, and the alternative transports of MIDI are well known. RTP-MIDI was invented at UC Berkeley.
I appreciate a critical analysis, and your paper has some good suggestions, many of which have been expressed by other OSC users, and are also reflected in my own notes for future work. However your paper is also unnecessarily confrontational and misleading, frequently presenting your accusations, personal opinions and interpretations as factual claims.
I did not receive a single response, either from the NIME referees or
anyone else since, stating that my claims were wrong. I will admit
that the tone of the paper could be seen as confrontational; however, as
scientists, we need to be critical of claims made. For me it was like
the story of the "Emperor's New Clothes": when I analysed the claims
that were made, many were actually false.

My claims are based on facts, with the opinions formed based on those
facts. The purpose of the paper was to dispel many of the myths
surrounding OSC.
Post by Andy W. Schmeder
Your paper does not provide an adequate comparative framework to weigh OSC versus MIDI.
The paper was not a complete comparison between MIDI and OSC; it
addressed the comparisons (all with references) that the OSC community
itself made between OSC and MIDI - it was a critical paper addressing
the beliefs many have about OSC. I have found that the OSC community
are bound with religious fervour.
Post by Andy W. Schmeder
Statements such as "I actually think the level of "smartness" to use OSC compared to MIDI is much higher"
I just worked on a project where the output was OSC - that was the best
protocol.

Here is a question I received just this month

Hi Angelo

Hope you are well. The students at UTS Extreme Programming have done
some great work with analysing Heart Rate Variability Spectrum -
but they can't figure out how to transmit the right OSC format for
Max - would you have any tips, or do you know of any papers they
could refer to? They need to send an array as part of the OSC message.

These people are university students and the person asking me has a
Doctorate in Creative Arts. I just don't get these sorts of questions
about MIDI. I am not saying that the person asking is not smart; I just
think MIDI is, in general, less complicated than OSC. But MIDI
does have its shortcomings, which I also acknowledged.
Post by Andy W. Schmeder
are meaningless; as you point out in the paper, there isn't sufficient common ground to make a comparison at all, yet you go on to claim that "MIDI wins hands down", which is provocative but not a critical statement.
How about you place the statement into context - was it a comparison
between MIDI in total and OSC in total, which is what you are
suggesting?

No! - let us look at the quote in context - remember, it is the OSC
community that makes an earlier claim about OSC speed vs MIDI speed.

"3.1 OSC is Fast
There is a belief in the NIME community that OSC is a fast
communications protocol .... It is, however, misleading to compare the
speed of OSC to
MIDI based on the data transmission rate because OSC does not have a
data transmission rate. ...If one was to measure the number of machine
instructions required to parse a typical MIDI message with that of a
typical OSC message, MIDI would win hands down."

I think that is factual and can be easily proved mathematically.

Where is the opinion? I only made a claim that can be backed up
mathematically.
Post by Andy W. Schmeder
If you have further topics to discuss related to OSC including but not limited to problems or desired features hindering your use of OSC, please feel free; but this continued confrontational tone and rehashed discussion of OSC versus MIDI on the basis of some undefinable metric is inappropriate and not helpful.
The question raised was why MIDI is still being pursued, and I am
hearing the same MIDI / OSC mantras. I made no claim in this thread
against OSC; however, it appears that whenever anyone contradicts a
belief about MIDI, it is taken as an attack against OSC. I saw Ross
brought up the four-byte padding issue, which I raised in my paper as
an issue.

If people are not informed, how are they going to contribute to making a
better standard? I think the community should spend less time bagging
MIDI and get on with the job.
Andy W. Schmeder
2011-11-08 02:50:31 UTC
Permalink
Post by Angelo Fraietta
These people are university students and the person asking me has a Doctorate in Creative Arts. I just don't get these sorts of questions about MIDI. I am not saying that the person asking is not smart; I just think MIDI is, in general, less complicated than OSC.
OK, so how do you send a heart rate variability spectrum in MIDI in a manner that is "less complicated in general" than sending it in OSC?
Post by Angelo Fraietta
No! - let us look at the quote in context - remember, it is the OSC community that makes an earlier claim about OSC speed vs MIDI speed.
I think that is factual and can be easily proved mathematically.
Yes, it's true that OSC is not an optimal encoding in the Shannon-entropy sense or whatever; neither is MIDI. If you can invent such a coding then go collect your Turing prize.

We are well aware of the impact of certain design decisions on the information density of the packet format. The four-byte alignment rule was chosen so that the packet format could be parsed at the word level. And ASCII strings containing text are known to be low-density; English text carries only 2-3 bits of information per byte, due to high correlation in the character sequences of words.
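
For example, a message "/eq" with a single float argument parses as exactly three aligned 32-bit words:

/* "/eq" with one float argument: 12 bytes, three words */
/* word 0: '/' 'e' 'q' '\0'           -- address, NUL-padded     */
/* word 1: ',' 'f' '\0' '\0'          -- type tag string, padded */
/* word 2: 32-bit big-endian IEEE 754 -- the float argument      */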

But low information density didn't stop HTML from worldwide adoption. I don't see anyone advocating to replace HTML with MIDI, or perhaps PDF, or JPEG, even though the economic cost of that low-information-density coding is surely in the billions per year and growing. In terms of the overall scalability of a protocol there are many more issues than just coding-space efficiency: maintenance, documentation, debugging, compatibility, connectivity, etc.

Really it's just not possible to compare OSC to MIDI; they are not the same thing, they don't serve the same purpose. It's like comparing XML to HTML, it makes no sense. One of those is a presentation layer protocol, the other is an application layer protocol. Shall we accuse the XML developers of being "anti-HTML"? Shall we complain that the "<table>" tag isn't included in XML--because my god, how are we going to draw tables without it? If people keep asking me how to format a table in XML, but they don't ask how to do it in HTML, does that mean that HTML is "less complicated in general" than XML? If someone comes up with a new piece of data that they want to store and transmit in a meaningful way between programs, say it's that heart rate variability spectrum, shall we recommend that they represent it in an HTML table, or use XML?
Post by Angelo Fraietta
The question raised was why MIDI is still being pursued, and I am hearing the same MIDI / OSC mantras. I made no claim in this thread against OSC; however, it appears that whenever anyone contradicts a belief about MIDI, it is taken as an attack against OSC.
"MIDI and OSC don't do the same things and they weren't made for or by the same folk." (Adrian)

That's the only mantra that I believe in. People believe and say a wide variety of things about OSC as well as MIDI, as well as their hybrid cars; many of those statements are opinions and hearsay and fantasies. Of course there is confirmation bias and people are defensive about their choices; that's human psychology, which you are free to attempt to correct, but it's not the fault of the OSC developers in general.
Post by Angelo Fraietta
If people are not informed, how are they going to contribute to making a better standard? I think the community should spend less time bagging MIDI and get on with the job.
I agree--so why are we still talking about MIDI?


---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu
Angelo Fraietta
2011-11-08 03:29:44 UTC
Permalink
Post by Andy W. Schmeder
Post by Angelo Fraietta
These people are university students and the person asking me has a Doctorate in Creative Arts. I just don't get these sorts of questions about MIDI. I am not saying that the person asking is not smart; I just think MIDI is, in general, less complicated than OSC.
OK, so how do you send a heart rate variability spectrum in MIDI in a manner that is "less complicated in general" than sending it in OSC?
No - the point was I don't get questions like this about MIDI. The
question was simply how to send the data to Max.
Post by Andy W. Schmeder
Post by Angelo Fraietta
No! - let us look at the quote in context - remember, it is the OSC community that makes an earlier claim about OSC speed vs MIDI speed.
I think that is factual and can be easily proved mathematically.
Yes, it's true that OSC is not an optimal encoding in the Shannon-entropy sense or whatever; neither is MIDI. If you can invent such a coding then go collect your Turing prize.
The point was that you quoted me out of context - very poor academic form.
Post by Andy W. Schmeder
Really it's just not possible to compare OSC to MIDI; they are not the same thing, they don't serve the same purpose. It's like comparing XML to HTML, it makes no sense. One of those is a presentation layer protocol, the other is an application layer protocol. Shall we accuse the XML developers of being "anti-HTML"? Shall we complain that the "<table>" tag isn't included in XML--because my god, how are we going to draw tables without it? If people keep asking me how to format a table in XML, but they don't ask how to do it in HTML, does that mean that HTML is "less complicated in general" than XML? If someone comes up with a new piece of data that they want to store and transmit in a meaningful way between programs, say it's that heart rate variability spectrum, shall we recommend that they represent it in an HTML table, or use XML?
The paper only addressed the issues that the OSC community had claimed
in the comparison - I was not the one who started the comparisons
between MIDI and OSC - it was the OSC community. I simply looked at what
was being claimed and showed that the comparisons made by the OSC
community were not valid. I would like to see where in my paper I am
incorrect.
Post by Andy W. Schmeder
Post by Angelo Fraietta
The question raised was why MIDI is still being pursued, and I am hearing the same MIDI / OSC mantras. I made no claim in this thread against OSC; however, it appears that whenever anyone contradicts a belief about MIDI, it is taken as an attack against OSC.
"MIDI and OSC don't do the same things and they weren't made for or by the same folk." (Adrian)
That's the only mantra that I believe in. People believe and say a wide variety of things about OSC as well as MIDI, as well as their hybrid cars; many of those statements are opinions and hearsay and fantasies. Of course there is confirmation bias and people are defensive about their choices; that's human psychology, which you are free to attempt to correct, but it's not the fault of the OSC developers in general.
I agree. That was the whole point of the peer-reviewed paper - to
separate the fact from the fantasy.
Post by Andy W. Schmeder
Post by Angelo Fraietta
If people are not informed, how are they going to contribute to making a better standard? I think the community should spend less time bagging MIDI and get on with the job.
I agree--so why are we still talking about MIDI?
The question was posed about MIDI. Maybe when someone mentions MIDI in
an OSC forum, the moderators can say "OFF TOPIC!" That is what they do
on the computer language forums when someone discusses multi-threading
on a C++ forum.

I am not bagging one against the other - they each have their place.
However, if someone asks a question about MIDI, I am only too happy to
answer it as best I can.
Andy W. Schmeder
2011-11-08 09:04:49 UTC
Permalink
Post by Angelo Fraietta
I would like to see where in my paper I am incorrect.
"OSC cannot deliver on its promises as a real-time communication protocol for constrained embedded systems."

First of all, OSC never promised anything of the sort; in fact, to the contrary, as far as I can recall the developers have generally stated that it is intended for use with systems having at least a 32-bit CPU with an FPU and a 10Mbit Ethernet port or faster.

Second, Moore's law has moved along nicely for embedded systems. Today's $5 microprocessor has a 32-bit RISC core and runs at about 100MHz. It's enough crunch to easily fill a 10Mbit link with OSC messages. At CNMAT we have implemented OSC on these types of microprocessors for a few different architectures in the past several years. It works just fine. People even push around XML on these sorts of systems; they are plenty capable. Of course there are some commercial applications where every cent counts and one needs to use the $1 8-bit microprocessor, so maybe it's not ready for those yet, but that's not going to keep me up at night.

More generally, your paper doesn't examine the context of how people were experiencing OSC at the time when they made the statements that you quote.

The first implementations of OSC were running at CNMAT on SGI IRIX machines; at the time, USB-MIDI etc. didn't exist yet (and by the way, the early days of USB-MIDI were downright horrible: it was actually *slower*, with worse latency and jitter than 31k serial MIDI, and it is still arguably worse in temporal characteristics). It wasn't until RTP-MIDI in 2006 that there was a viable high-speed network transport for MIDI with good temporal behavior.

So, back then all the MIDI gear was still connected via MIDI cables and 31k serial links, and with OSC we could suddenly have new and far more interesting software synths, new controllers that didn't look like keyboards, and connect it all together with Ethernet hubs instead of running MIDI cable around. OSC on UDP *was* faster, way faster, and the human-readable format was enabling rapid and reliable build-out of complex systems that went way beyond anything that the music industry had ever created. Having spent the previous decade struggling to do innovative things with MIDI, the computer music people were pretty stoked about OSC and said a lot of glowing things, some of which are not exactly scientific and may have over-inflated the true nature of what OSC is.

Yes, some of those things end up being somewhat misleading, especially to newcomers to the field, but one can see how a paper that basically attacks people for being happy and satisfied with a technology, by twisting their praise into "myths" and "failures", isn't going to generate a lot of good will.

When people said that OSC was fast and efficient, they were not measuring it in bits per second or instruction cycles per byte; they were talking about their human-scale experience of using OSC, which is really what counts in the end, and those experiences were real, not a fantasy.

That said, your paper has plenty of good points, and as I said previously in this thread, you are not alone in those opinions and they have been noted. I can't promise any particular action towards addressing them, but it is possible that the improvements you want will come about eventually.
Post by Angelo Fraietta
Post by Andy W. Schmeder
"MIDI and OSC don't do the same things and they weren't made for or by the same folk." (Adrian)
I agree. That was the whole point of the peer-reviewed paper - to
separate the fact from the fantasy.
OK, great.
Post by Angelo Fraietta
Post by Andy W. Schmeder
Post by Angelo Fraietta
If people are not informed, how are they going to contribute to making a better standard? I think the community should spend less time bagging MIDI and get on with the job.
I agree--so why are we still talking about MIDI?
The question was posed about MIDI. Maybe when someone mentions MIDI in
an OSC forum, the moderators can say "OFF TOPIC!" That is what they do
on the computer language forums when someone discusses multi-threading
on a C++ forum.
I'm not opposed to those questions if it's not an unreasonable digression.


---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu
Angelo Fraietta
2011-11-08 20:07:43 UTC
Permalink
Post by Andy W. Schmeder
Post by Angelo Fraietta
I would like to see where in my paper I am incorrect.
"OSC cannot deliver on its promises as a real-time communication protocol for constrained embedded systems."
First of all, OSC never promised anything of the sort; in fact, to the contrary, as far as I can recall the developers have generally stated that it is intended for use with systems having at least a 32-bit CPU with an FPU and a 10Mbit Ethernet port or faster.
This was, as I understand it, the general consensus at the time. The
reviewers did not seem to disagree.
Post by Andy W. Schmeder
More generally, your paper doesn't examine the context of how people were experiencing OSC at the time when they made the statements that you quote.
At the outset of the paper I describe what some of the innovative and
valuable features of OSC were.

I don't know how many papers you have had accepted at NIME, but normally
six pages is a maximum - the thing you mentioned would have just
cluttered up the paper and it would have detracted from what the paper
was about. The purpose of my paper was to dispel some of the myths
surrounding OSC. I even made suggestions as to how it could be made more
efficient.

I still develop with OSC where it is the best option. Where I need
fast and efficient communication with a low-bandwidth controller, I
don't. It is about using the right tool for the job. When I need to
communicate between one process and another in Max, I do. I suppose I
may have had a harsh tone in the paper; however, I am not trying to make
anybody feel bad about being happy with OSC - on the contrary, if OSC
does the best job for you, great. I even use it for some projects. My
instruments were the first OSC-to-control-voltage converters. However,
as stated in the paper, there were no critical papers on OSC
at the time. I received some hate mail for my paper, but I also received
some grateful responses for bringing to light some of the myths. I am
sorry if I have made anybody feel bad about OSC - that was not my intention.

So in short, OSC is great. It has some great features. Continue using
it. However, that does not mean that I have to believe that it is the
be-all and end-all. On real-time embedded systems, specific device
drivers are king - that approach is not for everyone, but it works more
efficiently. OSC is great for those who don't have the time, skill, or
inclination to do that - and that is OK also. That does not make them
any less an artist.
Andy W. Schmeder
2011-11-08 21:06:39 UTC
Permalink
Post by Angelo Fraietta
This was, as I understand it, the general consensus at the time. The
reviewers did not seem to disagree.
Your words are your own responsibility, not the reviewers'.

If you wanted feedback you could have sent a draft to the people you quoted, and asked them if your quotations were accurate representations of their intended meaning.
Post by Angelo Fraietta
I don't know how many papers you have had accepted at NIME, but normally
six pages is a maximum - the thing you mentioned would have just
cluttered up the paper and it would have detracted from what the paper
was about.
The paper would have been shorter if you just focused on your experience, and no one would have cause to complain about it.

YOU had an experience with OSC for a particular constrained microcontroller project, and you experienced that OSC was not "fast and efficient".

OTHER people had different experiences in different contexts and they said it WAS "fast and efficient".

Instead of asking why, or just pointing out that your experience doesn't match up, you go about accusing everyone of living in a fantasy, of perpetuating a myth, of being liars. You really don't see a problem with that? Do you think that it was necessary to get attention?

You can't justify this by pointing to other merits of the paper, and frankly I'm tired of this attempt to deflect the argument. It's just a mistake; "sorry" is enough to fix it.

---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu
Adrian Freed
2011-11-08 21:57:40 UTC
Permalink
While this is an interesting debate, I would like to turn people's attention to a more important
and challenging problem that both the MMA and OSC users have to address: does anybody actually care
about the "next release" of these things?

It is hard for me to have a clear perspective on this for OSC, beyond our best indications as to where
the community might want to take OSC 2.0, reflected in the NIME paper.

For MIDI HD we have an interesting indication here:

http://www.midi.org/aboutus/news/hd.php


Note that the MMA appears to be doing the same thing we in the OSC group are doing w.r.t. AVB:
http://www.midi.org/aboutus/workgroups.php

"Transport Layer (TLWG)

The TLWG makes recommendations and produces specifications for alternate transports for MIDI protocol (other than the transport defined in the current MIDI 1.0 Detailed Specification). TLWG objectives are to insure continued high-quality performance, maximum interoperability, and consumer satisfaction with MIDI. The group is collaborating with the HDWG on IEEE-AVB at this time."
Jeff Koftinoff
2011-11-08 23:05:09 UTC
Permalink
I care!!!

I want to do OSC v1 and v2 over a multicast, time-sensitive AVB control stream!

;-)
Angelo Fraietta
2011-11-09 02:56:42 UTC
Permalink
Post by Andy W. Schmeder
Post by Angelo Fraietta
This was, as I understand it, the general consensus at the time. The
reviewers did not seem to disagree.
Your words are your own responsibility, not the reviewers'.
All because I said in the abstract that OSC promised something? In my
experience in both NIME and other computer music communities, that was
definitely the understanding I had. Regardless, that in itself does not
affect any of the scientific claims in the paper.
Post by Andy W. Schmeder
If you wanted feedback you could have sent a draft to the people you quoted, and asked them if your quotations were accurate representations of their intended meaning.
The quotes were from published papers, page numbers and everything -
you're free to examine them. I don't believe anything was taken out of
context.
Post by Andy W. Schmeder
Post by Angelo Fraietta
I don't know how many papers you have had accepted at NIME, but normally
six pages is a maximum - the thing you mentioned would have just
cluttered up the paper and it would have detracted from what the paper
was about.
The paper would have been shorter if you just focused on your experience, and no one would have cause to complain about it.
Is that what your institution wants? Not interested in systematic and
objective research - just experience. I am sorry - that is just not
scientific and I don't think it makes for quality art.
Post by Andy W. Schmeder
YOU had an experience with OSC for a particular constrained microcontroller project, and you experienced that OSC was not "fast and efficient".
Where did I say this in the paper?
Post by Andy W. Schmeder
OTHER people had different experiences in different contexts and they said it WAS "fast and efficient".
I explained in the paper what constitutes efficient.

"Efficiency is a relative term—what is deemed efficient today may be
deemed inefficient tomorrow when newer technologies or algorithms are
developed. In order to evaluate whether OSC is efficient, one does not
necessarily need to compare it in its entirety to a preexisting system,
but rather, to demonstrate how the resources are being wasted."
Post by Andy W. Schmeder
Instead of asking why, or just pointing out that your experience doesn't match up, you go about accusing everyone of living in a fantasy, of perpetuating a myth, of being liars. You really don't see a problem with that? Do you think that it was necessary to get attention?
It was not about experience - it is about objective science. I don't
even remember mentioning my own experience - that was irrelevant. What
was relevant was what people were saying that was outright
incorrect. Furthermore, these same false statements were being bandied
about unchallenged in academia.
Post by Andy W. Schmeder
You can't justify this by pointing to other merits of the paper, and frankly I'm tired of this attempt to deflect the argument. It's just a mistake; "sorry" is enough to fix it.
I am sorry I have offended you. I am sorry if I have made you feel bad.
Andy W. Schmeder
2011-11-09 06:12:02 UTC
Permalink
Post by Angelo Fraietta
Post by Andy W. Schmeder
You can't justify this by pointing to other merits of the paper, and frankly I'm tired of this attempt to deflect the argument. It's just a mistake; "sorry" is enough to fix it.
I am sorry I have offended you. I am sorry if I have made you feel bad.
Don't take it too seriously. Adrian Freed met you (he was chair of the session where you presented that paper) and he says you are a friendly person; I believe it.

---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu
Angelo Fraietta
2011-11-09 07:50:41 UTC
Permalink
Post by Andy W. Schmeder
Post by Angelo Fraietta
Post by Andy W. Schmeder
You can't justify this by pointing to other merits of the paper, and frankly I'm tired of this attempt to deflect the argument. It's just a mistake; "sorry" is enough to fix it.
I am sorry I have offended you. I am sorry if I have made you feel bad.
Don't take it too seriously. Adrian Freed met you (he was chair of the session where you presented that paper) and he says you are a friendly person; I believe it.
He also answered a lot of questions I asked before I submitted the paper,
and I asked him what I could quote him as saying. I acknowledged him and
the OSC group in the acknowledgments section of the paper.

It was never my intention to make anyone feel bad. I think OSC has some
great features and has filled a lot of gaps that were there beforehand.
I would like to see it continue to progress.

Andy W. Schmeder
2011-11-05 01:28:29 UTC
Permalink
Thanks for the reminder, I will review that material again.
Post by s***@xs4all.nl
http://openmediacontrol.wetpaint.com/
---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu
Gaspard Bucher
2011-11-05 10:43:10 UTC
Permalink
Hi there!

Some of you may remember the "oscit" protocol that I was building for Lubyk
(http://lubyk.org, previously known as Rubyk).

By adding "[" and "]" as type tags and having an H for "Hash", I was able to
represent just about any data structure as a nested dictionary:

H[s[sfsfsf]ss] ===> {"foo": {"a":4.0 "b":5.2 "c":2.1} "bar":"baz"}
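
Reading the tag string left to right (a purely explanatory breakdown of the example above, not oscit code):

H[ ... ]      -- the whole payload is a hash
  s           -- key "foo"
  [sfsfsf]    -- its value: three string/float pairs, {"a":4.0 "b":5.2 "c":2.1}
  s s         -- key "bar" and its string value "baz"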

Clearly, this was the most useful structure of the whole system (which
comprised network and method discovery, partial updates, etc).

Anyway, as I switched from C++ to Lua to handle the patch network and
scheduling, I replaced this custom OSC with ZeroMQ + Msgpack. The only
types I use are:

String
Float
Dictionary
True/False
Blob (will be used for binary data such as OpenCV matrix)

I use the first string to encode the URL (without any pattern matching),
the rest being arguments. All in all, I just use a few "meta" URLs:

-- Get content at path (used to download assets)
lubyk.get_url = '/lk/get'

-- Get all patch information at once (returns a dictionary)
lubyk.dump_url = '/lk/dump'

-- (Partial) update of a patch
lubyk.update_url = '/lk/update'

-- Quit process
lubyk.quit_url = '/lk/quit'

Other methods are simply "/path/to/node", ... arguments

There is no type on methods (we can send anything).

This system is very simple, very flexible and requires minimal work from the
user (nobody should have to ask himself "should I use an int or a float?",
"should I use a 32-bit float or a 64-bit?", "should I use a char or a
string?").

As for system discovery, the full dump provides a consistent structure with
identified places for "nodes", "inlets", "outlets" and "link" definitions
(inside the patch and outside) so there is no need for a "query" language:
an interface which needs to know something just queries the full (or
partial) dump.

There are two different means of changing the state of a running system:

A. Program (modify scripts, add objects, change links, etc)
B. Data (send some data to an inlet)

A is done by sending a partial dictionary to "/lk/update". Such a change
can be {"a": {"nodes": {"counter": {"hue": 0.4}}}}.

B is done by targeting the inlet's url directly with the data as argument.

Dump example (represented as YAML):

y: 38
x: 110
name: a
nodes:
  counter:
    hue: 0
    x: 30
    name: counter
    code: "..." # Lua code
    outlets:
      - info: Increasing numbers [float].
        name: count
        links:
          /b/div/in/input: true # Setting this to false removes the link
          /c/v/in/win: true
    inlets:
      - name: bang
        info: Send next number on [bang].
    y: 75
  metro:
    hue: 0
    x: 30
    name: metro
    code: "..." # Lua code
    outlets:
      - info: Sends a bang on every beat [true].

The corresponding process for this dump is process "a" in this picture:
image384_std.png

Cheers,

Gaspard
Post by Adrian Freed
Post by Luke McQuade
Hello,
I'm new here and a little confused... why are the MIDI Manufacturers
Assoc. working on a new HD MIDI standard (see
http://www.midi.org/aboutus/news/hd.php), whilst OSC is still being
actively developed? Is it a business thing?
You should ask the MMA why they are still working on HD MIDI after all
these years....
MIDI and OSC don't do the same things and they weren't made for or by the
same folk.
OSC just specifies the syntax to put messages in - what they mean is up to
you, the user.
MIDI specifies everything (meaning, syntax, transport etc.) and it was
designed by a closed industry group (i.e. you have to pay to be a member)
so it is useful if that industry has successfully identified or shaped
your needs...
OSC will always be worked on because it is propelled by its use value, not
its normative value.
I am proud that we picked the core types in OSC to have long legs. You can
buy a $5 800 MHz ARM microcontroller with a solid FPU
these days. In that case, why mess around with fixed point, as I believe HD
MIDI plans to?
--
Gaspard
Mattijs Kneppers
2011-11-06 13:22:18 UTC
Permalink
Hello Andy,

I had the pleasure of meeting John MacCallum at the Cycling '74 Expo
in New York a few weeks ago and apart from exchanging ideas about
object orientation in Max we chatted a bit about OSC and its future.

He mentioned that you folks don't have structural funding for the
development of OSC. It still seems strange to me that a development as
important as this one, which is fundamentally changing the music and show
industries, has no structural means of support, but it makes me all the
happier to see that you are still actively working on improving the
specification.

I currently feel especially attached to your developments since I'm
working on an OSC sequencer (oscseq.com) that I hope will one day
evolve into, or at least pave the way for, a compositional tool that
replaces the timelines out there, which are all based on the MIDI way
of thinking: a particularly limited, harshly quantized and
pre-structured mindset.

In response to your question about whether I miss anything: not as far
as I can see at first glance, but I am highly interested in the
specifics of your ideas about time tags in the new specification. Any
serious OSC recording or timeline application will rely heavily on a
correct implementation of time tag handling in OSC controllers and
destinations, so I'm hoping that the new specs will encourage OSC
developers to start using time tags in their implementations.

Keen on any updates you may have!

Best,
Mattijs

--
arttech.nl | oscseq.com | smadsteck.nl
Adrian Freed
2011-11-06 16:06:19 UTC
Permalink
Post by Mattijs Kneppers
Hello Andy,
I had the pleasure of meeting John MacCallum at the Cycling '74 Expo
in New York a few weeks ago and apart from exchanging ideas about
object orientation in Max we chatted a bit about OSC and its future.
He mentioned that you folks don't have structural funding for the
development of OSC. It still seems strange to me that a development as
important as this one, which is fundamentally changing the music and show
industries, has no structural means of support, but it makes me all the
happier to see that you are still actively working on improving the
specification.
Thanks for your encouraging words. CNMAT is still very active in OSC work, with John, Andy, Yotam, Ian and myself
all involved quite often with OSC issues. For example, Yotam is implementing an OSC example for the Ethernet Arduino
this week, and I am working on an application which aggregates multiple OSC streams from USB into a single
high-rate stream.
The lack of specific funding for OSC standards development reflects
the current context for such development. Early OSC work happened before the large-scale development of the internet,
XML, etc. Now people have a bewildering choice of protocols and new issues to grapple with,
and every idea is measured against a large number of alternatives.

CNMAT's current approach
is to support the most active users of OSC so as to understand future applications better. We are supporting
various IEEE efforts to make sure incorporation of OSC goes smoothly.
I have stated that OSC 2.0 is probably too big an effort for CNMAT alone and
should be done by a community of users.
There isn't a very good model for how to proceed: W3C and MMA are both famously late and slow
at such things.
The IEEE is very interesting because things ethernet-related get deeply embedded in the infrastructure,
though traditionally representation there is industry-oriented rather than academic.
Post by Mattijs Kneppers
I currently feel especially attached to your developments since I'm
working on an OSC sequencer (oscseq.com) that I hope will one day
evolve into, or at least pave the way for, a compositional tool that
replaces the timelines out there, which are all based on the MIDI way
of thinking: a particularly limited, harshly quantized and
pre-structured mindset.
Good idea!
This is a challenging project. John and I have used this sort of application to guide
the design of odot. The idea is that it should be easy in Max/MSP to write a sequencer
with the right primitives.
Post by Mattijs Kneppers
In response to your question about whether I miss anything: not as far
as I can see at first glance, but I am highly interested in the
specifics of your ideas about time tags in the new specification. Any
serious OSC recording or timeline application will rely heavily on a
correct implementation of time tag handling in OSC controllers and
destinations, so I'm hoping that the new specs will encourage OSC
developers to start using time tags in their implementations.
I am afraid it is not the specification that will encourage this. We have broadened
the applications time tags can be used in, but until the mainstream OSes have primitives
to support timely computation we can't implement them.
I believe it will be possible to implement time tags with recent Linux implementations, and OS X Lion
probably has what is necessary because of AVB support, but the OSC integration into AVB is still a work in progress....
Mattijs Kneppers
2011-11-06 18:24:43 UTC
Permalink
This is a challenging project. John and I have used this sort of application to guide
the design of odot. The idea is that it should be easy in Max/MSP to write a sequencer
with the right primitives.
It certainly is an interesting challenge. I personally found that the
threading model of Max has its limits when it comes to setting thread
priorities, i.e. refreshing the GUI vs. acquiring data vs. user edits
vs. generating events.

I still have to look into the latest developments of o.dot that John
sent me, which I'm saving until I'm sure I can reserve a proper
amount of time to give you useful feedback.
I am afraid it is not the specification that will encourage this. We have broadened
the applications time tags can be used in, but until the mainstream OSes have primitives
to support timely computation we can't implement them.
I'm not sure I get what you mean here; I assume you're not talking
about a native datatype for 64 bits of unsigned data (NTP), which
indeed some development environments lack?

But even without a dedicated mechanism to synchronize streams and
events across applications and hardware (like AVB, if I understand
correctly), I think that time tags already provide a way to ensure
precise -relative- timing between events once they reach their
destination, as well as a determinate delay that can be compensated
for: at the source, in the case of an application playing back a
composition, or at the destination, in the example of a timeline
recording events from a hardware controller. In that sense, promoting
the use of time tags already seems very useful to me. To put it
simply: TouchOSC doesn't send time tags. I wonder why?

Anyway, on the whole, I can imagine that you could use some input from
the community at any point. I would be happy to ponder any specific
questions you may have.
--
arttech.nl | oscseq.com | smadsteck.nl
Adrian Freed
2011-11-06 18:45:30 UTC
Permalink
Post by Mattijs Kneppers
To put it simply: TouchOSC doesn't send time tags. I wonder why?
Because there is no OS primitive to correctly find out when things happen (i.e. the touch)
or to correctly schedule when things are supposed to happen at the target, and because these things
haven't existed for years nobody even tries to look for them anymore. We have approximated such things
with some special Max/MSP externals Andy developed....

The irony is that OSC was designed in the SGI IRIX / Mac OS 9 days. We did have those primitives back then, and
the real-time features needed to implement them were standard. IRIX on the SGI O2, for example, guaranteed timing accuracy all the way to the analog audio output!
It also had a time-tagged output queue for serial MIDI.
Mattijs Kneppers
2011-11-06 20:16:58 UTC
Permalink
Post by Adrian Freed
Because there is no OS primitive to correctly find out when things happen (i.e. the touch)
or to correctly schedule when things are supposed to happen at the target, and because these things
haven't existed for years nobody even tries to look for them anymore. We have approximated such things
with some special Max/MSP externals Andy developed....
Hmm, I see. I never really tested the timing jitter that occurs when
time-stamping events with the system time as they are acquired by the
OS and become available to my program, but I always assumed that that
jitter would be far smaller than the unknown latencies introduced by
sending data over the network (or over MIDI cables, for that matter).
But apparently you have a different opinion, and I have the feeling
you have looked into this more thoroughly than I have. Time to start
doing some tests myself.

Curious which externals you are referring to. Could it be OSC-schedule?

Cheers
Jeff Koftinoff
2011-11-06 21:21:21 UTC
Permalink
Accurate scheduling of time-stamped OSC messages is hard. To get the guarantees you need, even on an embedded Linux system, you need some sort of real-time extensions as well as a very accurately synchronized clock.

A real-time operating system is required because typical non-real-time operating systems (even Linux with SCHED_RR) can have surprising transient latencies.

Many people don't realize that without a real-time OS, the gettimeofday() system call is always inaccurate, because between the time you call it and the time you actually do something with the result, the OS may have paused your task for a few hundred milliseconds.
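
This is easy to observe for yourself. A rough probe (illustrative Python, not a rigorous benchmark) that records the worst gap between consecutive clock reads will occasionally show pauses far above the typical value on a non-real-time OS:

import time

worst = 0.0
last = time.perf_counter()
for _ in range(1_000_000):                 # tight loop over clock reads
    now = time.perf_counter()
    worst = max(worst, now - last)         # track the largest single gap
    last = now
print("worst gap between consecutive reads: %.3f ms" % (worst * 1e3))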

For clock synchronization, NTP doesn't cut it, but some profiles of IEEE Std. 1588-2008 may allow accuracies of 10 µs. The ideal situation is to use IEEE Std. 802.1AS-2011 (from AVB), which can do a few orders of magnitude better.

I am currently working with the IEEE Std. 1722-2011 group on an amendment to allow 'AVB Control Streams'. These control streams would be bandwidth-reserved AVB streams and would be able to encapsulate both stream-like protocols (like TCP sockets or serial ports) and packet-based protocols (like IEEE 1722.1 or UDP). My intent is to also allow an EUI-64 value to describe the encapsulated protocol, so that it may be an IANA-based TCP or UDP protocol, or others such as IEEE 1722.1 or Open Sound Control, or vendor-specific protocols, including FlexRay, CAN bus and other real-time control protocols required for automotive use.

Using AVB streams for transporting OSC messages is very compelling because it makes these control messages 'first-class citizens' for bandwidth reservation, traffic shaping, reliability, redundant streams, and accurate presentation-time support.

I'll be making a very short presentation about this at the AES San Diego conference on Audio Networking ( http://www.aes.org/conferences/44/ ) on Friday, Nov 18.

Regards,
Jeff
Andy W. Schmeder
2011-11-06 22:51:40 UTC
Permalink
Post by Mattijs Kneppers
Hmm, I see. I never really tested the timing jitter that occurs when
time-stamping events with the system time as they are acquired by the
OS and become available to my program, but I always assumed that that
jitter would be far smaller than the unknown latencies introduced by
sending data over the network (or over MIDI cables, for that matter).
In my experience the random delays introduced by context-switching between user-space applications can actually be much larger than the delays in hardware transmission and hardware interrupt servicing (at least for small networks). The embedded devices and low-level things all tend to run with fairly low delay; it's the bloated giant operating system that is the slug...
Post by Mattijs Kneppers
But apparently you have a different opinion, and I have the feeling
you have looked into this more thoroughly than I have. Time to start
doing some tests myself.
Eh, yeah, it depends on what you're doing and what it needs; in our area the most demanding application is spatial audio rendering, where good timing accuracy is critical to get the correct time alignment of wavefronts. That accuracy goal then puts requirements on the hardware timing precision needed to get the clock sync fine enough, etc.

I'm fairly convinced that for human-interface needs a timing accuracy better than about 0.5 ms is sufficient, which should be achievable without special hardware (I think...)
Post by Mattijs Kneppers
Curious which externals you are referring to. Could it be OSC-schedule?
Yeah, probably. I did some other externals as well that schedule events in the audio domain using timestamps, i.e., given a timestamped (future) message they put a click at the correct sample in the audio stream, but these are pretty experimental so they were never released publicly...


---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu
Mattijs Kneppers
2011-11-07 11:06:12 UTC
Permalink
Hi Andy,

As far as I can tell there are currently two important scenarios
where time stamps in their current form are relevant:

1) Recording controller data
OSC messages that are normally used to control things in real time are
now to be recorded. In this case a user has decided that the
(embedded) device he's using as a controller is accurate enough to
translate his physical gestures to digital events properly in
real-time situations. If we time-stamp the messages as they are sent
out, we preserve this acceptable accuracy. When the messages arrive at
the recording app, it places the events on a timeline based on their
time stamps (oscseq does this), meaning that the OS's system time isn't
involved in this process.

2) Playing back sequenced data
OSC messages that are composed on a timeline by a user are to result
in an accurate audio representation in a different application. The
timeline sends out messages 10 ms in advance (oscseq does this) and
adds the time stamp at which they should be played back. When the
messages arrive at their destination, the audio generator schedules
them to be inserted into the DSP chain at the appropriate time, as your
experimental Max object must have done. The system time of the OS is
never used.

These two scenarios seem valid to me, are they not?
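
The destination side of scenario 2 can be sketched in a few lines (illustrative Python; the names are made up, and a real implementation would schedule against the audio clock rather than sleep-polling):

import heapq, itertools, time

queue = []                    # min-heap of (presentation_time, seq, message)
seq = itertools.count()       # tie-breaker so equal times never compare messages

def on_receive(presentation_time, message):
    # Called when a time-stamped message arrives, ahead of its play time.
    heapq.heappush(queue, (presentation_time, next(seq), message))

def run(dispatch):
    # Hold each message until its time tag, then hand it to dispatch().
    while queue:
        when, _, message = queue[0]
        delay = when - time.monotonic()
        if delay > 0:
            time.sleep(min(delay, 0.001))  # coarse 1 ms polling
        else:
            heapq.heappop(queue)
            dispatch(message)

With a 10 ms send-ahead margin, any network jitter smaller than that margin is absorbed by the queue and disappears from the output timing; only the local scheduler's jitter remains.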

I recently had a case with TouchOSC where controlling audio in Max in
real time felt sluggish and the recorded messages showed gaps in what
should have been a continuous stream of events. It turned out after a
while that the problem was the poor wifi connectivity of the iPad. Had
TouchOSC added time stamps to its messages, the real-time control
might still have felt sluggish, but the recording would have been
tight, at least as tight as expected from iOS's native timing.


The situation in which I can see the inaccurate system time becoming a
problem is mainly scenario 1 when the controller is actually a PC, for
example when converting joystick movements to OSC messages.

Having touched on the subject of WFS myself some time ago
(http://bit.ly/vvW077), it would seem to me that your situation of
aligning wavefronts could be a case of scenario 2, except when
you're working with multiple computers rendering the audio output. In
that case you have the issue of aligning the computers' clocks.

Best,
Mattijs

--
arttech.nl | oscseq.com | smadsteck.nl
Kaspar Bumke
2011-11-07 16:25:41 UTC
Permalink
Post by Andy W. Schmeder
• Removes the need for a framing protocol when used with a serial
transport.
Could you elaborate on this point? As I mentioned in the other thread I am
quite interested in OSC getting a standard way for USB devices to
communicate with a host. Does this mean SLIP is no longer the recommended
way to do this? What is the alternative that allows the removal of the
framing protocol?
Andy W. Schmeder
2011-11-07 17:55:17 UTC
Permalink
SLIP or length-prefix framing will still be needed for OSC 1.0 / 1.1.

In OSC 2.0 the intention is to require that all packets are bundles, which removes some ambiguities in the parsing, so that it's possible to simply concatenate packets without anything special in between.

But SLIP framing is still recommended for situations where the stream can be interrupted, such as USB-serial. Length-prefix framing can help with skipping past packets in a file. So it's not entirely clear we can do without these things anyway.
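
For reference, SLIP framing (RFC 1055) is tiny to implement; a Python sketch of the encoder (the leading END byte, which lets a receiver discard accumulated line noise, is the double-ended variant commonly used with OSC):

END, ESC, ESC_END, ESC_ESC = 0xC0, 0xDB, 0xDC, 0xDD

def slip_encode(packet: bytes) -> bytes:
    # Frame one packet; END/ESC bytes inside the payload are escaped.
    out = bytearray([END])                 # leading END flushes line noise
    for b in packet:
        if b == END:
            out += bytes([ESC, ESC_END])
        elif b == ESC:
            out += bytes([ESC, ESC_ESC])
        else:
            out.append(b)
    out.append(END)                        # trailing END closes the frame
    return bytes(out)
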
---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu
Andy W. Schmeder
2011-11-06 21:59:58 UTC
Permalink
Post by Mattijs Kneppers
In response to your question about whether I miss anything: not as far
as I can see at first glance, but I am highly interested in the
specifics of your ideas about time tags in the new specification. Any
serious OSC recording or timeline application will rely heavily on a
correct implementation of time tag handling in OSC controllers and
destinations, so I'm hoping that the new specs will encourage OSC
developers to start using time tags in their implementations.
The new scheme provides explicit semantics for timestamps; the currently proposed list is:

/osc/time/presentation
/osc/time/acquisition
/osc/time/expiration
/osc/time/duration

Basically, instead of having a "built-in" timestamp, we just include zero or more of these messages in a bundle, along with a timestamp (which may be NTP, or maybe some other format like TAI or ISO 8601).

"presentation" indicates when a message should be actuated (e.g. at what point in time a sample should emerge from a loudspeaker)
"acquisition" indicates when data was acquired, for sensors
"expiration" gives a deadline after which the message should be dropped
"duration" modifies a timestamp to represent a time interval, e.g. a sensor acquisition sampling rate etc.

Real-time dataflow frameworks (e.g. Ptolemy) similarly differentiate between acquisition time and presentation time, as do the Mac OS X IOKit callbacks, especially in CoreMIDI and to some extent CoreAudio, although it's up to the hardware driver to correctly inform the OS of the actual input/output delay in the hardware, and it's not obvious that any of this actually works properly. Additionally, the clock-sync error is usually higher than would be optimal (which unfortunately is only solvable by hardware modification to the network card). IOKit returns values from an internal nanosecond timer (mach_absolute_time) and some significant math is needed to convert that to a UTC time. So, as you can see, the actual implementation of this stuff is a bit difficult.

That's all for now... I'll keep you posted on future notes.

A.


---

Andy W. Schmeder
email: andy [at] cnmat.berkeley.edu
skype: andy.schmeder

Programmer/Analyst II
Research Group
Center for New Music and Audio Technologies
University of California at Berkeley
http://cnmat.berkeley.edu