Bugs per KLOC

A conversation on TOF got me wondering what Apple's bugs per KLOC is vs Xojo's?
We'll probably never know, regardless of how fun that would be to know.
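For reference, the metric itself is trivial to compute; the numbers below are invented purely to show the arithmetic:

```python
def bugs_per_kloc(open_bugs: int, lines_of_code: int) -> float:
    """Defect density: bugs per thousand lines of code (KLOC)."""
    return open_bugs / (lines_of_code / 1000)

# Hypothetical figures, not real data for any product.
print(bugs_per_kloc(150, 300_000))  # 0.5 bugs per KLOC
```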

BUT - how do you measure quality, and what processes do you and your teams use to measure and improve quality?

Design your initial target with care.

1. Set a milestone (features to include, bugs to squash). Include ALL known severe bugs, if any. Include all bugs affecting the milestone, if any. Cherry-pick diverse other bugs, if any. Is there room for new features? Include new features if so.

2. Pick an item.

3. Develop. Run a unit test of some functionality; fail? Go to 3. Feature incomplete? Go to 3. Run an integration test; fail? Go to 3. Integrate/update the feature. Create or update an automated test to catch anything broken here in the future.

4. Keep doing #2 until the milestone is reached and no items are left.

5. Run the complete set of tests. Fail? Log new bugs and go to 2.

6. Release the software. // Note: release often, pursue the zero-bug nirvana.

7. Collect feedback, log the new bugs found, and go to 1.
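The loop above can be sketched in a few lines. Everything here is a placeholder the caller supplies (`develop`, `full_suite_failures` are invented names); it's a sketch of the control flow, not a real framework:

```python
def release_cycle(backlog, develop, full_suite_failures):
    """One milestone pass of the process described above.

    develop(item) -> True once unit + integration tests pass for the item.
    full_suite_failures() -> list of new bugs found by the complete suite.
    """
    milestone = list(backlog)               # step 1: severe bugs, milestone bugs, features
    while True:
        while milestone:                    # steps 2-4: work items until none are left
            item = milestone.pop(0)
            while not develop(item):        # step 3: loop until the item's tests pass
                pass
            # ...create or update an automated regression test for `item` here
        new_bugs = full_suite_failures()    # step 5: run the complete test suite
        if not new_bugs:
            return "release"                # step 6: suite green -> ship it
        milestone.extend(new_bugs)          # suite failed: log bugs, go back to step 2
```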


not sure that really addresses the question I posed

how do you measure quality and what processes do you and your teams use to measure and improve quality?
what you said is more "here's HOW we do what we do" but isn't "how do we know if we're getting better or not?"
software is unlikely to be perfect
how do you measure when it's "good enough"?
and how do you measure if you're getting better or worse with the process and the software?

The market measure for quality is the software's TPC (Total Percentage Coverage). Code coverage is the percentage of a codebase that has automated tests attesting that those parts work correctly. There's no reaching 100% here, but anything higher than 75% means a very well-tested piece of software, guaranteeing a high level of quality.

How to improve? Seek a TPC above 85%.

How do you measure if you’re getting better or worse?
Watch this number over time: moving toward 100%, you are getting better; moving toward 0%, you are getting worse.
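Tracking that trend is easy once you log coverage per release. A minimal sketch (the release names and percentages are invented for illustration):

```python
def coverage_trend(history):
    """history: list of (release, coverage_percent) pairs, oldest first.
    Compares first vs last entry, per the 'watch it over time' idea above."""
    first, last = history[0][1], history[-1][1]
    if last > first:
        return "improving"
    if last < first:
        return "regressing"
    return "flat"

# Invented numbers: coverage climbing past the 75% bar mentioned above.
releases = [("1.0", 62.0), ("1.1", 68.5), ("1.2", 74.0), ("1.3", 77.3)]
print(coverage_trend(releases))  # improving
```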

I don’t use a metric to measure code quality:

a) I don’t have time.
b) I don’t have the tools.
c) I don’t think that measuring code is super important.

I mostly do what Rick does: squash bugs, release often.


Same here


Ah! The rapid release model! :yum:

If well executed, it is a really powerful model. When I do such a release, it contains ONLY bug fixes, no new features. And that is where Xojo goes wrong.


In another thread someone cited GAMBAS, a BASIC interpreter for Linux. You can see them doing what Xojo did not: every release, even a minor one, is followed by 1, 2, 3, even 4 enhancement releases (bug fixes, a tweak here and there, some cosmetic adjustments). The purpose is pursuing what I call "the zero-bug nirvana" and a better UX for the current features. Only when they are satisfied with that "perfection" do they include new features. Every well-managed small company does this.

What Xojo tries to do is what large companies do, but Xojo can't afford it. A large company can ship software with 3000 known bugs, because they know every one of those bugs and have a plan for them. They know that bug x is there and will stay there until they finish feature x and kill bug z; just after that, they will kill bug x, at some point next month (for example). And most of those bugs only occur in the non-stable branches that most people aren't using.

Large companies have channels: usually three layers of tests in constant workflow before anything is released into the stable channel (canary -> dev -> beta -> stable). You can't create a Flutter Windows app right now using the stable channel, for example, because Google hasn't approved it yet; but as a dev rather than an end user, you can switch your tooling to the dev channel and play with such alpha content. How can they achieve this? Well, Google has 25k engineers plus the community. Xojo can't afford that level of control, so its way to minimize being hurt by bugs is pursuing the zero-bug nirvana faster than the new-features dream.
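The channel idea boils down to a promotion pipeline: a build only moves one channel forward when the test layer guarding that step approves it. A toy sketch, where `gate` stands in for whatever real CI checks a company runs (all names here are invented):

```python
CHANNELS = ["canary", "dev", "beta", "stable"]

def promote(build, channel, gate):
    """Advance a build one channel if the gate (a test layer) approves it.
    gate(build, next_channel) is a placeholder for real CI validation."""
    i = CHANNELS.index(channel)
    if i == len(CHANNELS) - 1:
        return channel                      # already in stable, nowhere to go
    nxt = CHANNELS[i + 1]
    return nxt if gate(build, nxt) else channel

# A build that passes every gate walks canary -> dev -> beta -> stable.
ch = "canary"
while ch != "stable":
    ch = promote("build-42", ch, lambda b, c: True)
print(ch)  # stable
```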