I think you are right: it came in with Fortran 90, and it’s just a hint for the compiler, so it can catch errors or optimize better.
I assume not, as it would have broken a shitload of old code. The PL/1 programming language also did not have reserved words.
The last time I played with Fortran IV was around 1975. I wrote a card deck that played chess (not well, but it followed the rules)… and it took a shitload of cards each move… so I could only afford to play one game…
Haven’t touched either Fortran or PL/1 in more than 20 years.
So as I understand it, to get behavior that’s typical in other languages you have to use:
func foo(var x: Int)
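For what it’s worth, var parameters were removed in Swift 3 (SE-0003), so that spelling no longer compiles; in current Swift every parameter behaves like a let constant. A minimal sketch of what that default feels like (names are mine):

func increment(_ x: Int) -> Int {
    // x += 1   // compile error: 'x' is a 'let' constant
    return x + 1   // compute and return a new value instead
}

let result = increment(41)   // result == 42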
I haven’t had to think in other languages about whether or not I plan to change a passed-by-value variable within the function, but I suspect I do so more often than not, and having that extra keyword in there sounds like a bit of a PITA. But I suppose I’d get used to it.
I don’t see what problem args being constants by default actually solves, given that byval args don’t bleed through to the caller anyway. I would rather have to specify a const or readonly keyword if I really want to protect the incoming value within the func for some reason. I see no reason to make them immutable by default. It seems like a conceptual burden.
For Swift, you’d have to ask Chris Lattner why he designed the language that way.
My guess is he’d say it’s safer by default and forces you to think about what will and won’t be modified. But I have yet to conceive of a use for passing a value into a function that is guaranteed not to change within the function; maybe that’s because it hasn’t been possible in the languages I use, and if it were, I’d find it useful. But then the question remains of which behavior should be the default (and which should require you to type more characters).
On the one hand, it would be faster if everything were just call by reference; on the other, your data is at the mercy of any function you call. I suppose the best of both worlds would be being able to set the default, say, with an environment variable or a main default property, similar to languages that let you set case sensitivity, base 1 or 0, etc. (But, boy, can some of those codebases get hairy. Also, nothing should be base 0, and case sensitivity is a bug, not a feature.)
We have the byval / byref dichotomy, with the default usually being byval, because byval DOES protect your data from the called function, and passing by value is actually faster from a performance standpoint. This Swift feature is about preventing the passed value from being changed within the function unless on purpose, which is a different issue and not something I’ve noticed is a source of bugs in my world. But maybe someone did an analysis and found that this extra bondage and discipline paid more dividends than I would think. IDK. I was just curious if anyone knew the thinking behind it. Swift isn’t going to change now, particularly for me, since I don’t even use it at present.
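To make the dichotomy concrete, here’s a minimal Swift sketch (function names are mine, just for illustration): the default is an immutable byval copy, and inout is the opt-in byref-style mechanism (technically copy-in copy-out):

// byval (the Swift default): the function receives an immutable copy
func doubledCopy(_ n: Int) -> Int {
    // n *= 2   // compile error: 'n' is a 'let' constant
    return n * 2
}

// byref-style: 'inout' copies the value in and writes it back on return
func doubleInPlace(_ n: inout Int) {
    n *= 2
}

var value = 21
let copy = doubledCopy(value)   // value is still 21
doubleInPlace(&value)           // value is now 42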
Zero-based arrays have become so ubiquitous (even in VisualBASIC.NET) that I have just gotten used to them. And when a zero-based-array language occasionally calls into an API that has one-based collections, I see people compensating to fit the usual patterns (e.g., having the for-loop termination look at the collection count minus one) rather than taking advantage of the 1-based collection.
In practice, 0- vs 1-based arrays isn’t a hill I care to die on. I only find the difference irritating, or a source of bugs, when I have to deal with both in the same program, or if I were switching between 0- and 1-based languages all the time, which I’m not. I think 0-based, for better or worse, has won the battle.
As for case sensitivity, I do agree with you more wholeheartedly there. I have also gotten used to that (I mostly code in C#) but I have found that to be a small but distinct source of bugs more than a useful feature. And the obsessive side of me would have corrected a capitalization error that violated my coding standards, even in a case-insensitive language. Still and all, I’d prefer a case-insensitive compiler / language.
It took a little getting used to how Swift does things… here is what I have done in the rare instance I wanted to do it “the Xojo way”:
func foo(_x: Int) {
    var x = _x
    // use x, which is now a mutable local copy of the parameter
}
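A closely related variant, if you also want a label-free call site: use _ as the external argument label and shadow the parameter with a local. A minimal sketch, my own naming:

func bar(_ x: Int) {
    var x = x       // shadows the constant parameter with a mutable local
    x += 1
    print(x)
}

bar(5)              // prints 6; no argument label needed at the call site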
This is different from var in that, I guess, it doesn’t require you to use a positional / named arg when calling it. But apart from that: do you find constant byval args a problem in practice? Or have they saved you from making mistakes in some way, or forced you to design things better? If so, how?
Yeah, the claim is that pass by value as a constant saves on coding mistakes.
But if arguments are passed by value then you CAN’T get side effects in the caller, so I don’t see how treating them as local constants actually eliminates or reduces errors as claimed.
Probably one of those things where a long conversation with a person like Chris Lattner might illuminate things.
I think it’s mainly for concurrency safety, removing the need to lock.
Improved safety and predictability: Once a parameter is immutable, it cannot be changed throughout the function or program flow, preventing accidental or unintended modification and side effects. This makes code more reliable and easier to reason about, since the parameter’s value is guaranteed to stay the same (a concrete sketch follows this list).

Thread safety: Immutable parameters are inherently thread-safe because multiple threads can access them without synchronization concerns or the risk of concurrent modification. This avoids complex bugs related to race conditions in multi-threaded environments.

Easier debugging and testing: Since immutable values do not change, debugging becomes simpler because the state is consistent and changes in the code cannot alter parameters unexpectedly, allowing easier traceability of data flow and pinpointing of errors.

Reduced side effects: Immutability minimizes hidden side effects by ensuring functions operate without changing inputs, which aligns with functional programming principles, promoting pure functions and more maintainable code.
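As a concrete illustration of the first claim, here’s a minimal sketch of the kind of accidental modification the constant default rules out; the bug shown is hypothetical:

// A hypothetical bug the constant default catches: accidentally reusing
// a parameter as scratch space inside a loop.
func average(_ values: [Double], scale: Double) -> Double {
    var total = 0.0
    for v in values {
        // scale *= v   // compile error: 'scale' is a 'let' constant,
        //              // so this accidental reuse cannot slip through
        total += v * scale
    }
    return values.isEmpty ? 0 : total / Double(values.count)
}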
Some of these I don’t tend to buy without evidence.
Can’t say I’ve ever run into unintended side effects in code that defaults to pass by value, but….
Someone somewhere probably has research to back these claims up.
And in the end it really doesn’t matter.
Swift IS that way; like it or not, it IS.
Debate here isn’t going to change that.
And C++ is the way it is
And Java is the way it is
And Xojo is the way it is
Until/unless we all write our own languages with our own conventions, we use what we’re given.
I wasn’t attempting to start a debate, or to say that one way was better than another; I was just trying to point out possible reasons that Swift was designed as it was, as I am sure each language had its own design reasons. But it is important to know WHAT those differences are so you don’t get bitten in the ass if you change languages and expect them to be the same.
Not me.
I come from a JavaScript and C# background, so the case insensitivity of the Xojo language itself was an irritant to me (as were the optional parentheses), at first because I thought it made the code look sloppy and less readable, especially when reading other people’s code.
More recently, when I started learning Swift, I realized I’d grown lazy as a result of writing Xojo for so long, when I discovered that the keywords self and Self (capitalized) mean two very different things in Swift.
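For anyone following along, a minimal sketch of that self / Self distinction (my own example):

struct Counter {
    var value = 0

    func describe() -> String {
        // 'self' (lowercase) is this particular instance
        return "value is \(self.value)"
    }

    static func makeDefault() -> Self {
        // 'Self' (capitalized) is the type itself, here Counter
        return Self()
    }
}

let c = Counter.makeDefault()
print(c.describe())   // prints "value is 0"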
Note: I’m not arguing with your preferences, just making an observation.
This strikes me as the best justification, although I have spent all my time thus far synchronizing access to fields, not function internals. In fact, now that I think of it, it’s really an overall runtime design issue. In .NET runtimes, each thread has its own stack, so local variables, which by-value function parameters effectively are, would be thread-safe anyway. There are some edge-case exceptions involving anonymous functions and lambdas, but normally in C# you only have to concern yourself with syncing on fields that your code might access.
So perhaps this design decision in Swift was made because they wanted to guarantee thread safety at the language level, regardless of the runtime implementation. But I don’t see a practical difference, because AFAIK all .NET runtimes make the same guarantees, and Swift only has one “runtime”.
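If it helps, here’s a minimal Swift sketch of why a byval parameter is safe to share across threads, assuming a simple value type (names are mine):

struct Point {
    var x: Int
    var y: Int
}

// Each call receives its own immutable copy of 'p', so concurrent calls
// cannot race on the argument itself and no locking is needed for it.
func shifted(_ p: Point, by dx: Int) -> Point {
    var q = p          // make a mutable local copy to work with
    q.x += dx
    return q
}

var original = Point(x: 0, y: 0)
let moved = shifted(original, by: 1)
// 'original' is unchanged: value semantics copied it into the function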