Jamoma DSP and AudioGraph questions

Jamoma DSP and AudioGraph questions

caseyjames

Hey hey,

I've just finished reading the white papers for Jamoma DSP and AudioGraph.  Now that I have a better idea of what they offer, I am very excited about digging into it.  Using Jamoma DSP as a standard for developing ugens, and letting others contribute their own, is very exciting.

I have a few questions about the system.

Is there any facility for loading sound files into buffers that I can access from the C++ side of the software?  I just finished reading papers about SuperCollider's scsynth and Supernova.  Ross Bencina's scsynth paper goes into depth about their approach to bouncing back and forth between real-time and non-real-time threads to load sound file data into buffers and stream to disk using first-in-first-out queues.  Is that a concept that is handled by Jamoma, or would a similar approach using a real-time and a non-real-time graph be sensible in the context of the Jamoma workflow, if that needs to be built?
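
For context, the pattern I mean is roughly this kind of lock-free single-producer/single-consumer FIFO (a minimal sketch of the general idea, not code from Jamoma or SuperCollider; all the names here are my own):

    #include <atomic>
    #include <cstddef>

    // Commands (e.g. "buffer loaded") travel between the real-time and
    // non-real-time threads through a wait-free ring buffer: the audio
    // thread never blocks and never allocates.
    template <typename T, size_t N>
    class SpscFifo {
        T buf_[N];
        std::atomic<size_t> head_{0}, tail_{0};
    public:
        bool push(const T& item) {                 // producer thread only
            size_t h = head_.load(std::memory_order_relaxed);
            size_t next = (h + 1) % N;
            if (next == tail_.load(std::memory_order_acquire))
                return false;                      // full; caller retries later
            buf_[h] = item;
            head_.store(next, std::memory_order_release);
            return true;
        }
        bool pop(T& item) {                        // consumer thread only
            size_t t = tail_.load(std::memory_order_relaxed);
            if (t == head_.load(std::memory_order_acquire))
                return false;                      // empty
            item = buf_[t];
            tail_.store((t + 1) % N, std::memory_order_release);
            return true;
        }
    };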

Can ugen graphs be constructed that only process MIDI-type information?  Would one just use the ugens as-is, relay the inputs to the outputs, and handle the MIDI in the process function?
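
Roughly like this, I mean (a sketch of my own, not actual Jamoma API):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct MidiEvent { uint8_t status, data1, data2; };

    // A "ugen" that leaves the audio untouched and does its real work
    // on the MIDI events riding alongside the block.
    class MidiPassThrough {
    public:
        void process(const float* in, float* out, size_t frames,
                     std::vector<MidiEvent>& events)
        {
            std::copy(in, in + frames, out);   // relay input to output
            for (const MidiEvent& e : events)
                handleMidi(e);                 // transform/route MIDI here
        }
    private:
        void handleMidi(const MidiEvent& e) { /* ... */ }
    };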

For the iOS App Store, where only static linking of external libraries is allowed, there is no conflict with AudioGraph's method of dynamic linking, correct?  AudioGraph is more about passing around pointers?

Is it feasible to build the AudioGraph functionality directly into an app, so that it is not built as an AudioUnit plugin but as a central component of the app?  I feel a little nervous about having the entire audio system of the software live in a plugin, but maybe that fear is unfounded.  Is there any reason to try to integrate the graph directly into the app instead of an AudioUnit plugin?

Are there any simple example projects demonstrating a simple AudioUnit, like a filter ugen and a graph to patch audio in and out?  Is there any documentation on how to use them beyond the header files?

Any clarification or direction would be greatly appreciated.

Thanks,

Casey



Re: Jamoma DSP and AudioGraph questions

Trond Lossius
Hi Casey,

Others may be able to fill out the picture further, but I'll try to answer to the best of my knowledge:

On Aug 22, 2012, at 8:02 PM, Casey Basichis <[hidden email]> wrote:

> Hey hey,
>
> I've just finished reading the white papers for Jamoma DSP and AudioGraph.  Now that I have a better idea of what they offer, I am very excited about digging into it.  Using Jamoma DSP as a standard for developing ugens, and letting others contribute their own, is very exciting.
>
> I have a few questions about the system.
>
> Is there any facility for loading sound files into buffers that I can access from the C++ side of the software?  I just finished reading papers about SuperCollider's scsynth and Supernova.  Ross Bencina's scsynth paper goes into depth about their approach to bouncing back and forth between real-time and non-real-time threads to load sound file data into buffers and stream to disk using first-in-first-out queues.  Is that a concept that is handled by Jamoma, or would a similar approach using a real-time and a non-real-time graph be sensible in the context of the Jamoma workflow, if that needs to be built?

In Jamoma DSP there's a SoundfileLib extension that uses libsndfile. I've not looked into its code myself so far, but I'm pretty sure it has asynchronous file access.
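
For reference, loading a file into a buffer with raw libsndfile (the library SoundfileLib wraps) looks roughly like this; note this is a sketch of the underlying library calls, not of SoundfileLib's own API:

    #include <sndfile.h>
    #include <vector>

    // Read a whole soundfile into an interleaved float buffer.
    std::vector<float> loadSoundfile(const char* path, SF_INFO& info)
    {
        info.format = 0;                      // must be zeroed before sf_open
        SNDFILE* file = sf_open(path, SFM_READ, &info);
        if (!file)
            return std::vector<float>();      // see sf_strerror(NULL) on failure
        std::vector<float> samples(info.frames * info.channels);
        sf_readf_float(file, &samples[0], info.frames);
        sf_close(file);
        return samples;
    }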

Looking at the header of libsndfile, I see that it is licensed under the GNU LGPL v2.1. The dependency on this library was probably added at a time when the Jamoma libraries themselves were licensed under the GNU LGPL. We later moved all of the libraries to the more permissive BSD license, and it might be that we need to reconsider whether we can still use libsndfile, in particular in a static-linking situation such as on iOS.

> Can ugen graphs be constructed that only process MIDI-type information?  Would one just use the ugens as-is, relay the inputs to the outputs, and handle the MIDI in the process function?

Yes, this can be done using the Jamoma Graph library. Jamoma Graph provides a structure for setting up asynchronous graphs (Tim, please correct me or add to this if I'm wrong or incomplete). AudioGraph builds on Graph by extending it with a synchronous graph structure for audio processing.

> For the iOS App Store, where only static linking of external libraries is allowed, there is no conflict with AudioGraph's method of dynamic linking, correct?  AudioGraph is more about passing around pointers?

No, I don't think that is an issue. The libraries can themselves be statically linked.

> Is it feasible to build the AudioGraph functionality directly into an app, so that it is not built as an AudioUnit plugin but as a central component of the app?  I feel a little nervous about having the entire audio system of the software live in a plugin, but maybe that fear is unfounded.  Is there any reason to try to integrate the graph directly into the app instead of an AudioUnit plugin?

I believe Tim will need to answer this one.

> Are there any simple example projects demonstrating a simple AudioUnit, like a filter ugen and a graph to patch audio in and out?  Is there any documentation on how to use them beyond the header files?

As far as I'm aware, we only have one iOS-related example so far. In Foundation there's an Xcode project for a pretty mediocre iOS app: the screen is gray, and it does a temperature conversion using the Dataspace extension and then posts the result to the console. So at least it's a (very modest) beginning.

In terms of AU-related examples, I would suggest looking into Plugtastic, provided that you have access to a Max license. I believe one of the Max externals (either jcom.in≈ or jcom.out≈) was broken in the past few weeks in the hurry to get things working with OS X 10.8 and Xcode 4.4, but hopefully this can be fixed in the near future, in particular if someone needs it.

When Plugtastic works, it can be used to build AudioUnit plugins. In doing so it creates an Xcode project in the temp folder and then compiles it. That temp project can then be retrieved and inspected to get an idea of how AudioGraph code can be set up and used in other projects as well.

Cheers,
Trond

Re: Jamoma DSP and AudioGraph questions

Nils Peters
Hi Casey,

On 12-08-22 11:02 AM, Casey Basichis wrote:
> Is there any facility for loading sound files into buffers that I can
> access from the C++ side of the software?  I just finished reading
> papers about SuperCollider's scsynth and Supernova.  Ross Bencina's
> scsynth paper goes into depth about their approach to bouncing back
> and forth between real-time and non-real-time threads to load sound
> file data into buffers and stream to disk using first-in-first-out
> queues.  Is that a concept that is handled by Jamoma, or would a
> similar approach using a real-time and a non-real-time graph be
> sensible in the context of the Jamoma workflow, if that needs to be built?

In the master branch of JamomaDSP, the soundfile player and recorder are
single threaded, which is non-ideal:  http://redmine.jamoma.org/issues/727

Last summer I worked on a two-threaded version of the soundfile
recorder which uses a high-priority thread for buffering and a
low-priority thread to write the data to disk.
I don't recall whether I had time to implement this for the soundfile
player as well, but feel free to take a look at

https://github.com/jamoma/JamomaDSP/blob/np-727/extensions/SoundfileLib/TTSoundfileRecorder2.cpp
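
In rough outline, the split looks like this (an illustrative sketch of the pattern only; the actual TTSoundfileRecorder2 code differs):

    #include <atomic>
    #include <chrono>
    #include <thread>
    #include <vector>

    // The audio (high-priority) thread only copies samples into a ring
    // buffer; a low-priority thread drains the ring to disk, so no file
    // I/O ever happens on the audio thread.
    class AsyncRecorder {
        static const size_t kRingSize = 1 << 16;
        std::vector<float> ring_;
        std::atomic<size_t> writePos_, readPos_;
        std::atomic<bool> running_;
        std::thread writer_;
    public:
        AsyncRecorder()
            : ring_(kRingSize), writePos_(0), readPos_(0), running_(true),
              writer_(&AsyncRecorder::diskLoop, this) {}
        ~AsyncRecorder() { running_ = false; writer_.join(); }

        // Called from the audio thread: copy only, no locks, no I/O.
        void process(const float* in, size_t n) {
            size_t w = writePos_.load(std::memory_order_relaxed);
            for (size_t i = 0; i < n; ++i)
                ring_[(w + i) % kRingSize] = in[i];
            writePos_.store(w + n, std::memory_order_release);
        }
    private:
        // Low-priority thread: write whatever has accumulated to disk.
        void diskLoop() {
            while (running_.load()) {
                size_t w = writePos_.load(std::memory_order_acquire);
                size_t r = readPos_.load(std::memory_order_relaxed);
                for (; r < w; ++r) {
                    // append ring_[r % kRingSize] to the soundfile here
                }
                readPos_.store(r, std::memory_order_release);
                std::this_thread::sleep_for(std::chrono::milliseconds(10));
            }
        }
    };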



cheers,

Nils





Re: Jamoma DSP and AudioGraph questions

caseyjames
The libsndfile bit is touchy for iOS. I've been digging around for the last few days in search of a permissively licensed, cross-platform soundfile-handling library, with very little luck. I'm still looking around, though.  The best I've found are:

https://github.com/aaronblohowiak/EasyStereoWaveFileHeader
https://github.com/chathhorn/simple-wave

I'm going to look to see whether any permissively licensed applications have implemented something of their own that was never extracted into a standalone library.
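
Worst case, the canonical 44-byte PCM WAV header looks simple enough to roll by hand. A rough sketch for interleaved 16-bit PCM, assuming a little-endian host (true on iOS and desktop Intel):

    #include <cstdint>
    #include <cstdio>

    // Write the standard RIFF/WAVE header; the interleaved int16
    // sample data follows immediately after.
    void writeWavHeader(FILE* f, uint32_t sampleRate, uint16_t channels,
                        uint32_t frames)
    {
        const uint16_t bits = 16, fmt = 1;            // 1 = integer PCM
        const uint16_t blockAlign = channels * bits / 8;
        const uint32_t byteRate   = sampleRate * blockAlign;
        const uint32_t dataBytes  = frames * blockAlign;
        const uint32_t riffSize   = 36 + dataBytes;
        const uint32_t fmtSize    = 16;

        fwrite("RIFF", 1, 4, f);  fwrite(&riffSize, 4, 1, f);
        fwrite("WAVE", 1, 4, f);
        fwrite("fmt ", 1, 4, f);  fwrite(&fmtSize, 4, 1, f);
        fwrite(&fmt, 2, 1, f);    fwrite(&channels, 2, 1, f);
        fwrite(&sampleRate, 4, 1, f);  fwrite(&byteRate, 4, 1, f);
        fwrite(&blockAlign, 2, 1, f);  fwrite(&bits, 2, 1, f);
        fwrite("data", 1, 4, f);  fwrite(&dataBytes, 4, 1, f);
    }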

I am glad to hear the Graph class can be used to handle asynchronous graphs.

I have been trying to get up and running with a dependency-injection library for the general functioning of my app.  Could the use of the Graph class be generalized to handle that part of the behavior?  I'd love to open the app up as much as possible to substituting and adding plugin components (statically linked on iOS) by relying on the open interface that Jamoma presents, but is it set up for that kind of task?  This would be handling business logic, etc.  I know there is some Max-based unit testing going on; right now the plan is to use googletest.
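
By that I mean something in the direction of a simple factory registry (a hand-rolled sketch of my own, nothing Jamoma-specific):

    #include <functional>
    #include <map>
    #include <memory>
    #include <string>

    // Components register themselves by name; the app resolves them
    // through one open interface, so implementations can be swapped
    // without touching the call sites.
    struct Component {
        virtual ~Component() {}
        virtual void run() = 0;
    };

    class Registry {
        std::map<std::string,
                 std::function<std::unique_ptr<Component>()>> makers_;
    public:
        void add(const std::string& name,
                 std::function<std::unique_ptr<Component>()> maker)
        { makers_[name] = maker; }

        std::unique_ptr<Component> make(const std::string& name) const
        {
            auto it = makers_.find(name);
            if (it == makers_.end())
                return nullptr;
            return it->second();
        }
    };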

I don't actually have Max/MSP. Would it work to use the Max Runtime to export an Xcode project?

I'll check out the TTSoundfileRecorder2 code soon; I'm just finishing the tail end of a project before I dive into all of this.