Databases & JSON

SQLite, MariaDB (and undoubtedly others) can return rows from a SELECT query already formatted as JSON.

A SQLite3 example:

select json_group_array(json_object(
    'id', id,
    'surname', surname,
    'forenames', forenames,
    'joined', joined
)) from members order by id

This returns rows in the format:

[
{ "id": 1, "surname": "Simpson", "forenames": "Homer", "joined": "2024-12-25" },
{ "id": 2, "surname": "Simpson", "forenames": "Lisa", "joined": "2025-01-01" },
...
]

Which is exactly the format some web grids (e.g. Tabulator) require:

table.setData(rows);
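The round trip above can be sketched end-to-end in Python (a sketch, not from the original posts: the in-memory `members` table and sample rows are invented, and it assumes an SQLite build with the JSON functions, which are built in since 3.38):

```python
import json
import sqlite3

# Sketch: the json_group_array/json_object query from above, run against an
# invented in-memory members table. Requires SQLite's JSON functions
# (built in since 3.38, available as the JSON1 extension before that).
conn = sqlite3.connect(":memory:")
conn.execute("""create table members
                (id integer primary key, surname text,
                 forenames text, joined text)""")
conn.executemany(
    "insert into members values (?, ?, ?, ?)",
    [(1, "Simpson", "Homer", "2024-12-25"),
     (2, "Simpson", "Lisa", "2025-01-01")],
)

# The ordered subquery nudges the aggregate to see rows in id order;
# strictly speaking SQLite does not guarantee aggregate input order.
(payload,) = conn.execute("""
    select json_group_array(json_object(
        'id', id, 'surname', surname,
        'forenames', forenames, 'joined', joined))
    from (select * from members order by id)
""").fetchone()

rows = json.loads(payload)  # one JSON string holding the whole result set
```

The `payload` string is exactly what you would hand to `table.setData(...)` on the client.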

i’m pretty sure that every db engine can do this.
someone on another dev tool forum pointed out that json is extremely inefficient/verbose.
think about it: for every column, for every row, you are including the name of the column. i back-of-the-enveloped that roughly 40% of the traffic will be overhead, between repeated column names and delimiter jigglies. it’s probably more like 50-55%, but it’s somewhere in the “way too much” zone.
i think the solution is to move the data via a tsv/csv format, instead.

  • one character delimiter between columns (unless you go traditional csv, where it would be three characters)
  • somewhere between one and four character delimiters between rows
  • do you really need to/want to include column names? you only have to include them in one row, one time, per query. if the data is paged, you don’t have to repeat that row.
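The overhead estimate above is easy to sanity-check. A sketch (sample rows invented; the exact percentage depends entirely on how long the values are relative to the column names):

```python
import json

# Serialize the same invented rows as a JSON array of objects and as TSV,
# then compare byte counts to estimate the repeated-column-name overhead.
rows = [{"id": i, "surname": "Simpson", "forenames": "Homer",
         "joined": "2024-12-25"} for i in range(1000)]

as_json = json.dumps(rows).encode()

header = "\t".join(rows[0].keys())  # column names once, in one row
body = ["\t".join(str(v) for v in r.values()) for r in rows]
as_tsv = "\n".join([header] + body).encode()

saved = 1 - len(as_tsv) / len(as_json)
print(f"json {len(as_json)} B, tsv {len(as_tsv)} B, saved {saved:.0%}")
```

On short values like these the saving is comfortably past the 40% estimate; longer values shrink it.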

You don’t have to repeat the column names with JSON either; I was demonstrating how to retrieve the data in the format expected by many web grids.

To make the result more compact:

select json_group_array(json_array(
    id, surname, forenames, joined
)) from members order by id

Then the result would be in the format:

[
[1, "Simpson", "Homer", "2024-12-25"],
[2, "Simpson", "Lisa", "2025-01-01"],
...
]
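The compact variant can be exercised the same way (again a sketch with an invented in-memory table, assuming an SQLite build with the JSON functions):

```python
import json
import sqlite3

# Sketch: the json_array form returns each row as a plain array, so the
# column names appear nowhere in the payload.
conn = sqlite3.connect(":memory:")
conn.execute("""create table members
                (id integer primary key, surname text,
                 forenames text, joined text)""")
conn.executemany(
    "insert into members values (?, ?, ?, ?)",
    [(1, "Simpson", "Homer", "2024-12-25"),
     (2, "Simpson", "Lisa", "2025-01-01")],
)

(payload,) = conn.execute(
    "select json_group_array(json_array(id, surname, forenames, joined))"
    " from members"
).fetchone()

rows = json.loads(payload)  # each row is now a plain list of values
```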

In my use case the results are not from a DB query on the remote system; they’re returned from an API call. Think of an iOS app that sends a query to a remote API and gets back a JSON data set: there’s no direct connection to the DB.

On the receiving end there’s no “nice” way to handle that data as if it were in fact a recordset/rowset in a Xojo-based app.

But, with this, there is

It’s maybe an hour’s worth of work, and I’m sure I can/could trim down the JSON to be much more efficient.

In both Xojo and PHP I’ve been converting the results to a Dictionary (a.k.a. associative array). It’s just so easy to swap between the Dictionary and JSON.
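That swap is a one-liner in most languages. A Python sketch, with dicts standing in for a Xojo Dictionary or PHP associative array (the payload here is invented):

```python
import json

# JSON string -> list of dictionaries -> JSON string again.
payload = ('[{"id": 1, "surname": "Simpson", '
           '"forenames": "Homer", "joined": "2024-12-25"}]')
records = json.loads(payload)        # parse into dictionaries
records[0]["joined"] = "2025-01-01"  # treat it like an ordinary recordset
payload = json.dumps(records)        # and serialize straight back
```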

It reminded me of the “Remote Database Connector” of B4X.

Nonsense. CSV is just plain text, prone to a LOT of errors. Also, MOST API usage is to return single rows, so you end up with the same amount of data, a set of headers and a row, but in a crappier format.

For larger datasets, HTTP/2 and compression make the overhead almost negligible.
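The compression point is easy to demonstrate, since repeated column names are exactly the kind of redundancy gzip (the usual HTTP content-encoding) removes. A sketch with invented rows:

```python
import gzip
import json

# Compress a verbose JSON rowset the way Content-Encoding: gzip would.
# The repeated column names and near-identical values compress away.
rows = [{"id": i, "surname": "Simpson", "forenames": "Homer",
         "joined": "2024-12-25"} for i in range(1000)]
raw = json.dumps(rows).encode()
packed = gzip.compress(raw)
print(f"{len(raw)} B raw -> {len(packed)} B gzipped")
```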

And finally, for really big datasets, or when high efficiency and speed are needed, you simply don’t use a text-based protocol. Binary JSON or Protobuf are an option for this.

exactly

with this it comes in as json from the API result, you create the json-backed row set with that, and voila, it’s database record code again!

iterate across the rows & columns


  • does json have error correction? i must have missed it.
  • i don’t know why you would ever return column titles, unless the API is a mystery, or returns random columns. otherwise, you wrote the API, so you don’t need the column titles, unless it makes you feel better to ship them with the data.
  • i’m not sure why you would do single-row queries. why aren’t you caching the data on the local device, and then just syncing, as necessary?

that has not been my experience. compression definitely speeds things up, but json is still much flabbier. i’m reworking an API now, so this will be a fun little test to perform on 100k rows.

there are all kinds of tricks you can use (chunking/paging, async + sockets, etc.), but json is still too bulky compared to just sending the data. as i said, i’m currently reworking an API, so it will be fun to test combinations of other ideas and see if this data set proves them to be reasonable alternatives.

Because with a grid like Tabulator or w2grid, element names are used to map data to the relevant column in the grid. For example, in Tabulator you’d initialise your columns:

columns:[  //define the table columns
    {title:"Name", field:"name", editor:"input"},
    {title:"Task Progress", field:"progress", hozAlign:"left", formatter:"progress", editor:true},
    {title:"Gender", field:"gender", width:95, editor:"list", editorParams:{values:["male", "female"]}},
    {title:"Rating", field:"rating", formatter:"star", hozAlign:"center", width:100, editor:true},
    {title:"Color", field:"col", width:130, editor:"input"},
    {title:"Date Of Birth", field:"dob", width:130, sorter:"date", hozAlign:"center"},
    {title:"Driver", field:"car", width:90,  hozAlign:"center", formatter:"tickCross", sorter:"boolean", editor:true},
],

Then when you have some data to populate the table with it’s just one shot:

table.setData(records);
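That field-to-column mapping is also why the compact array-of-arrays payload needs one extra step on the client. The reconstruction is trivial; a language-neutral Python sketch using the field names from the Tabulator example above (the sample row is invented):

```python
# Rebuild keyed records from a compact array-of-arrays payload by zipping
# each row with the grid's field names.
fields = ["name", "progress", "gender", "rating", "col", "dob", "car"]
compact = [["Oli", 12, "male", 3, "red", "1984-05-14", True]]

records = [dict(zip(fields, row)) for row in compact]
# records[0] -> {"name": "Oli", "progress": 12, ...}
```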

i’m assuming this is a xojo thing. i’m a xojo n00b. the other dev tools we write apps with don’t need the columns specified in the data. they can use them, but they don’t need them.
plus, if you’re mobile-first (we are, more often than not), you’re populating from the local data, not the server.

No, nothing whatsoever to do with Xojo. It’s about using HTML5 libraries like Tabulator.


moved everything here and removed my opening post
