Difference between Console and Desktop App

Hi Everyone,

I haven’t had a chance to create Console apps, and wanted to ask about differences between a Console app and a Desktop app. Specifically I am thinking about creating a 3D graphical model for subsurface Oil and Gas work. Currently the Desktop app is very slow, and my guess is that it is because of the IDE and other background processes running.

Is it possible to create OpenGL programs with similar ease to a Desktop app? I would be creating my own window, which is easy enough to do with declares. Are there any other issues that I am missing before I start to play?


Edit: I am just trying to get as much graphical performance out of Xojo as I can. Not sure if there will be much of a difference.

Will really depend on how much of Xojo’s graphics you need, as they are different in console vs desktop.
Are you using OpenGL via the Xojo plugin or directly via declares?
A console app won’t initialize some libs etc. that a desktop app will, and I don’t know if that will affect your ability to make this setup work.

Is it possible to have several console helpers that crunch the data, with the desktop app just fronting those with a decent UI?
You can spawn many helpers (I’d try one less than as many cores as the machine has).

As for “OpenGL programs with a similar ease of a Desktop app”, that’s a pile of work from the console, as there are no windows, menu bars, buttons, etc.
You have to make all that somehow.

I would prefer to be able to use the Xojo plugin, and if that is not an option, then I have the ability to recreate all of the OpenGL commands in Declares - it would just take significantly more time.

The only way to see which libraries are not loaded is for me to play around with a few test programs.

When I wrote the OpenGL book, the trend was that Xojo was fast when it came to calculating the graphics, and math. Where Xojo seemed to be about 100-times slower was when drawing the graphics to the screen.

Yes, you’re right, and I am prepared to do that. It just depends on the patience and deep pockets of the company. It’s ‘nice’ to say you want good performance and quality; it’s quite another to have the project labour increase because of it. :slight_smile:

Just all questions I had to ask to try & make sure I don’t give you bad advice.

Often the bottleneck is in trying to crunch a lot of data very fast in a single-threaded application, and no amount of rewriting in OGL will change that.

It’s why Dr Sty’s application uses many helpers and shared memory to manipulate some extremely large graphics.
That is the kind of thing that can be sliced up, with portions dished out to several helpers; each of those can manipulate its “chunk” of the image, and then the entire thing is processed in parallel by those independent helpers.
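A hedged sketch of that slicing idea, with threads standing in for separate helper processes (a real setup would use shared memory across processes, as described above) and a trivial brightness tweak standing in for the real per-chunk work:

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <thread>
#include <vector>

// Sketch: split a pixel buffer into chunks and let each worker
// process its own chunk independently, then join. brighten() is a
// stand-in for whatever the real helpers would do to their slice.
void brighten(std::vector<std::uint8_t>& px, std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        px[i] = static_cast<std::uint8_t>(std::min(255, px[i] + 10));
}

void process_in_chunks(std::vector<std::uint8_t>& px, unsigned workers) {
    std::vector<std::thread> pool;
    std::size_t chunk = (px.size() + workers - 1) / workers;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = std::min(px.size(), begin + chunk);
        if (begin >= end) break;  // fewer chunks than workers
        pool.emplace_back(brighten, std::ref(px), begin, end);
    }
    for (auto& t : pool) t.join();
}
```

Since each chunk touches a disjoint range of the buffer, no locking is needed while the workers run.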

I don’t know if what you’re trying to do lends itself to that within OGL?

If not, I don’t know that JUST rewriting everything to avoid Xojo’s desktop framework is going to be sufficient.
It may be, but it’s a LOT of work to recreate a “desktop” app.

Thanks for being cautious. This is an underground simulator where the user can see the problems within a formation in real-time. I can make the graphics look ‘minecraft-like’ to lower the drawing time with OpenGL on the desktop, but a higher resolution has been requested.

One company uses a simulator with license costs upwards of $200k per year, and the corporation would rather have their own in-house program. I am coming up with the estimate for the work.

Helpers would make a difference, as some of the graphics are for large areas which encompass about 10 km of subsurface distance.

Just thinking out loud: maybe it would be easier to create the GUI portion of the program (data entry, database information, etc.) in Xojo, and have the physical model created in a C++ executable to increase framerates.

When I run an almost identical program in Xojo (OpenGL) against the same type of program in C++, with no optimizations, I get about a 100x speed increase in C++. My computer has a high-speed GeForce RTX 2080 GPU; it gets quite hot with Xojo OpenGL and barely heats up in C++.

I would prefer to have the GUI and OpenGL graphics in one language, Xojo. That is where the question about OpenGL in a console application came from.

I’ll play around with OpenGL in a Console program and see how it responds.

Thanks for your thoughts!

This sounds a LOT like Halliburton’s underground simulator (I think it was Halliburton’s).
One of the oil companies had a room dedicated to this.

OGL should, as far as I know, be hardware accelerated using the plugin.
It seems really odd that the GPU would heat up using Xojo.

While it might be nice to have everything in one app, if you get that kind of difference it’s hard to argue about.
Could you have the OGL in C++ as the “helper” that displays everything, and Xojo as the front end for that?
That would seem a reasonable balance of capabilities.
Then you need something simple to pass front-end config information to the C++ helper (but they can use any one of several mechanisms for that).

The one that they are using is from Schlumberger (I used to work there). I would say that 99% of the capability is not being used, since the original program has been over-designed, and it takes a specialized person to enter data into, interpret, and work with the program. It is waaaay overcomplicated.

Yes, I definitely can use C++ as the helper program. It would have been a nice option to only program in one language for the project :slight_smile:

Yes, Xojo’s RAD would work well for this part of the program.

It’s all good - this forum allows us to go on good tangents!

In theory (at least on macOS) you could use a helper with Metal (or OpenGL, but for how long I can’t say) and render your image to an IOSurface, whose pointer you can pass to the main app via IPC (requires macOS 10.12 or newer to do so).

Which your GUI can then draw to a Metal context or CALayer.

I’ve never actually tried this, and I don’t know how effective it will be, as rendering on the graphics card to get good performance (you can request lower priority) will freeze the display while it’s busy. You can use different GPUs, but then you have to deal with transferring the memory between them.


What voodoo magic is this @samRowlands? :slight_smile:


I’ve never actually used it.
