George Michaelson • 6 years ago

One of the saddest moments of my career on a computer centre helpdesk was talking to a chemical engineering student whose PhD basically went up in smoke, as I showed them that the 'interesting' experimental results of their model were the outcome of using uninitialized global COMMON in a huge Fortran program they'd written.

Tomáš Mládek • 6 years ago

Interesting anecdote, but completely orthogonal to the discussion, as you could easily substitute any other language in place of FORTRAN and the message would be no different.

Adrian McMenamin • 5 years ago

Err... no

Erik Waters • 6 years ago

I'm not sure why you're complaining when he was contributing at the right angle

Alex • 6 years ago

"Even if old code is hard to read, poorly documented, and not the most efficient, it is often faster to use old validated code than to write new code."

In every single case I've dug into a legacy Fortran program that a scientist was using, I've found bugs that affected the correctness of its output.

"Hard to read" and "poorly documented" are basically code words for "buggy". You can't validate what you can't read. Computer scientists don't prefer clear code for some superficial reason. We prefer it because we've found it's the only way to produce programs that work.

This should come as no surprise to physicists. You value simplicity, too! Epicycles aren't wrong, exactly -- they're just excessively complex, and thus hide the true nature of the world. An excessively complex program hides what it's actually doing, too.

Dan Elton • 6 years ago

" "Hard to read" and "poorly documented" are basically code words for "buggy"" -- not necessarily.

What I'm saying is that in physics you often need codes that do very specific things, and sometimes those codes are only written in Fortran. I'm talking about two different things - first of all, libraries for things like computing a complex error function (quickly, because you need to do it thousands of times each time step), but also stuff that would take months or more to replicate yourself in a different language. For instance, you might need to compute a very complex interatomic potential (which is very common in MD simulation). It's much easier to just call old code to do those tasks. I'm talking about code whose results are published and which is widely used. Yes, it might have errors, but the chances seem low.

That said, grad students and profs are terrible at documenting code because of a "path of least resistance" mentality - people want to get the job done with minimal effort, and documenting takes extra effort (although personally I find it worth the effort, even just for myself). I totally agree with you that legacy code can be buggy - I also ran into that issue when working with a Fortran code an undergrad wrote, and basically I had to rewrite the entire code to get it in working order.

Note that I wrote this post several years ago and given how fast NumPy is now under the hood, I highly recommend Python to physicists, coupled with writing code in Fortran for computationally intensive parts which can't be done with NumPy efficiently.

You're an idiot. You also know exactly nothing. Hint: C++ and C are completely different languages and literally every single code example of yours is decades out of date, wrong, buggy, and slow. Literally every single one, it's actually impressive how wrong you are and how little you know.

shayneo • 6 years ago

Rude

John Mark Madison • 3 years ago

As someone who likes C, but not C++, I would say these C++ examples look like C code to me. But I think it's hyperbolic to say the author knows "exactly nothing".

Dan Elton • 3 years ago

I do a lot of C++ these days, but I admit I'm a very novice C++ coder, and I was an extreme novice when I wrote this post. But the fact that I'm "an idiot" at C++ proves my point. There are a billion different ways to do things in C++, which makes it really hard to read and understand other people's C++. In physics we often have to work with codes written by other people that are 5, 10, 20, or even 30 years old. Working with old C++ written by other people is HELL. Fortran, by contrast, is easy to learn, easy to read, and the language and how it is used have been very stable since Fortran 90, so you don't have those problems!

joemmm • 4 years ago

I disagree. NASTRAN is old and well documented, and is used for structural analysis. Yes, some professor's own code may be hard to read, poorly documented, and buggy, but I think that is on the individual. If the code had been written in C++ it would probably still be hard to read and buggy. The principles of good programming technique can be applied to FORTRAN as well as to any other language. Comments are ignored by Fortran compilers. I am a physicist from the 70s; I was taught in school to comment at the beginning of each subroutine with its purpose or function, then what the input variables are and their use, what the output variables are and their use, and what the internal variables and constants are.

Yes, in Fortran IV a lot of people wrote spaghetti code because we didn't have block IFs. And the early IF could have 3 branches, depending on whether the expression checked was negative, zero, or positive.

I really suspect that if a professor gives a student code to use, it has been tested to be correct. I know I tested each subroutine for correctness before it went into my main program, and then tested the main program for correctness using special and limiting cases where the answer is known, at least for small populations. I also would use the IMSL functions. I found some of them didn't perform as advertised and I had to write my own subroutines. It might have been because I was using functions with poles in them, so in some regions the function was not analytic.

But overall, most physicists I know go to extremes making sure their subroutines give the correct answer. Myself, I check out each subroutine with special cases and at the limits of the range of validity of the inputs as I write the program. But even before that I document the algorithm on paper. That and the documented source code go into a 3-ring binder.

Lectem • 6 years ago

I think most of the arguments against C++ are not valid. They are valid against C, but in C++ you do not malloc and free like that (at least not if you're living in the 90's); using malloc for class types without constructing them can even be undefined behavior (though no compiler will ever take advantage of it).

You would just use vector or any good library with matrices (Eigen for exemple).

Allocation would be:

MatrixType<double> matrix(columns, rows);

Deallocation: nothing; it is handled by the destructor, or can be done explicitly.

As for pointer handling, if you are just doing math and physics, you don't need to deal with it most of the time. When you do need it, just use smart pointers.

The const part is also wrong but has been addressed by another comment.

My point is, the reason is mostly that teachers are still behind. I've seen it in many places: they do not teach C++, but C with a C++ compiler.

Abhijat • 6 years ago

My previous commenting effort seems to have reached /dev/null :-(. Here it is again.

As an ex-physicist-turned-CS-guy who has used C, and Lisp, here are a few observations. (Warning: Longish read)

1. It is quite natural to focus on one's core discipline; Physics in this case. It is also quite natural to use results and approaches from other disciplines as "tools"; Mathematics and Computing in this case. It must therefore also be natural to examine the effectiveness of a given tool to a given problem.

2. Do physicists _only_ calculate values of their equations for some given conditions? Not at all. The "calculational" part appears more as a verification or a particular prediction. The equations are an attempt to capture the suspected "law(s) of Nature" from small phenomenological scales to large theoretical structures. The central activity, therefore, is to keep building formal structures and examining their validity under various conditions in the physical world. In fact, Physics and Computer Science complement each other in a very fundamental way: If CS is about taking a formal system and examining if it can be mechanised (i.e. made into a machine), then Physics is about taking a machine (with unknown laws), and figuring out its formal system.

3. It is traditional in Physics to focus on the "calculate values" part when assessing the effectiveness of a computer as a tool. What CS folks complain about is the reluctance to also focus on the "equations" and "given conditions" parts for effectiveness. For a community that is the most rigorous about correctness in interpreting its equations with respect to the physical world, it is extremely surprising that it hastens to deploy computation as a tool for speed rather than correctness. Computation offers much more than just computing devices that calculate values.

4. The "equations" form the symbolic reasoning part, and the "given conditions" are about examining their interpretation under that context. "Interpretation" is being used exactly the way logicians use it. Physicists need to "play", i.e. "experiment", with their "symbolic expressions and their interpretations". That computation could be effective even for the symbolic and interpretational parts is an underappreciated idea for most practicing physicists.

5. This article focuses on concrete programming language issues in the Physics community. For a community that is at ease with the abstract, it would be far more interesting to focus on generic aspects of program expression techniques like control structures and data structures. This would push the debate from Fortran-C++ vs [[your favourite language here]] to the more productive one: how can we effectively use the techniques of program expression? And not only for "speed of calculating values"! CS folks have not presented such an approach towards their field, i.e. they have underappreciated the capabilities of physicists, capabilities that range from very high levels of abstraction to very high amounts of concrete detail. Physicists aren't "users, albeit very specialised". CS folks need to talk to them at a much different level of ideas than that of the (historical) efficiency of Fortran vs [[any other language]].

6. Could physicists use Lisp etc. (e.g. Mathematica) for working at high abstraction levels, and Fortran etc. for working at detailed evaluations? Of course, and much more deeply than just "calculating values". Newton's law of gravitation captures gravity by parameterising the values of the masses and the distance between them. It thus captures of the behaviour of these quantities independent of the specific values. This spirit of identifying invariant, and hence universal, "behaviour" (i.e. laws) is identical to the computational idea of an algorithm that separates a process (i.e. a behaviour) from the data that it may operate on. The reasoning techniques are deeply similar, and quite lost in debates such as this one over concrete languages.

The casualty is the ability to ascertain the effectiveness of a given tool towards one's purpose. The symptom is the "100% use Fortran for the next five years".

Dan Elton • 6 years ago

Sorry - for some reason I had to approve your post manually (unlike the others).

The big difference I was emphasizing in this post is that in physics speed of the code is really important in addition to speed of writing the code, whereas in CS they often sacrifice speed to be able to work at a more abstract higher level, which makes code easier to write for complex applications.

Another point is there are differences in the underlying math used in CS and physics. (you don't use PDEs or calculus much in CS, and we don't use discrete math or logic much in physics)

Abhijat • 6 years ago

Thanks for approving it. :-). I am afraid you miss my point. So here's another attempt using an example.

The speed of execution will almost always be critical for physics programs. The programs are often cutting-edge or (algorithmically) complex. But these programs are typically called on to evaluate a "theory" under some "given conditions", or in other words, to check the answers given by a theoretical system for some physics question. Thus computer programs are used to answer a physics question like: do 13 argon atoms freeze into an icosahedron under a Newtonian system with an LJ interaction potential? But the LJ potential is an empirical one. To be more physically realistic, we must substitute the potential term with a more quantum calculation, e.g. using density functional theory. In that case, the formally more accurate theory gives a computationally more demanding program. Being convinced of the formal accuracy, physicists naturally turn to focusing on execution speed on the machines, called computers, that execute their programs.

The point I am trying to make is that computation is different from computers, just as astronomy is different from telescopes (to use Dijkstra's analogy). While focusing on speed of evaluation is definitely necessary, computation offers far more than just that! Even with empirical potentials, it might be useful to write programs that operate at two levels: (a) high-level abstraction, and (b) machine-level evaluation. At the high level of abstraction, the program would accept the _expression_ of the potential as input, perform the necessary differentiation to obtain the force expressions (in whatever coordinate systems are suitable), and also obtain the other quantities like momentum, velocity, etc. using suitable integrators. At the machine level, it could take the results from its "high level part" and generate optimised numerical evaluator functions for the forces (and acceleration, velocity, etc.).

The underlying mathematics and physical quantities used in physics have to be discretized for computational use. Among other effects, we are now forced to deal with numerical errors. It is very natural to think of a "normal" flow of computation when everything is within the error range, and an "abnormal" flow of computation when the calculations exceed the error range. For example, trajectories computed in the argon-13 problem could "leak out" of the error range. Could the abnormal flow, e.g. using exceptions, be capable of restoring the numerics to within the error range (e.g. by using a smaller time step)? A self-correcting dynamics program would be much more useful for retaining focus on the physics.

My points are:
(a) Understanding computation expands the thinking to beyond computers-as-number-crunchers.
(b) Computer Science is different from the engineered artifacts called computers.
(c) It pays to look at generic mechanisms in programming languages rather than specific language patterns. One consequence is code that strikes a pragmatic balance between being readable (and hence increasing human efficiency) and being machine-efficient.
(d) Both communities, as your post demonstrates, are still insisting on talking at low bit level details, when they both stand to gain if they _also_ start communicating at more abstract levels.
(e) What is worrisome is the stubborn refusal of both to move away from bit level details, despite individually engaging in some of the most abstract thinking humans indulge in.

Thanks for the attention.

Michael Powe • 6 years ago

I'm not a physicist, nor do I play one on TV. But, if I had to solve computationally intensive physics problems, I would use the most widely available and easily implemented tool at hand. That would seem to be Fortran. Or, FORTRAN.

FWIW, most of the arguments presented against using Fortran come down to, "it's old." Yes, the English language is old, by all means let's get rid of it and all start talking in Python. I knew a guy once who traded his car in for a new one every year. Seems legit. Who wants to drive an old car?

In programming, as in many other activities, fashion is as significant as effectiveness. I'm involved in such a project right now, in which a large company is converting its mobile apps to Angular, not because of any pressing lack of functionality in the existing apps, but ... just because Angular is "cutting edge."

An old saying is, "Get the right tool for the job." Not the next job, or the one you might have to do next year, but the one at hand. Works for me.

Robert Brockway • 3 years ago

Well said Michael. I select a tool based on its usefulness and don't care whether it is new or old. I do prefer mature tools which says something about age, but not much. As a result my preferred toolset includes some things that are very young and some that are very old.

It has always struck me as odd how people reject a tool merely because it is old. Perhaps some people are unaware that Linux is heading towards 30 and Windows (based on NT) is already past that point. Many other operating systems in use are of similar vintage as is the Python programming language.

Tiago D'Agostini • 4 years ago

Sorry, but if you do not know about a subject, do not say a word about it. Your example is COMPLETELY wrong about how things are done in C++. You basically wrote C, and bad C code at that.

THIS:

double **array;
array = malloc(nrows * sizeof(double *));

for (i = 0; i < nrows; i++) {
    array[i] = malloc(ncolumns * sizeof(double));
}

is done in C++ as:

using Rows = std::vector<double>;
std::vector<Rows> matrix(number_rows, Rows(number_columns));

and you do not have to write ANYTHING to deallocate it. As soon as matrix goes out of scope, everything is deallocated safely and without any chance of forgetting it.

Amir Shahmoradi • 3 years ago

Just for the knowledge of people visiting this page: There is no need for explicit deallocation in Fortran either. That has been the case since 1990 if not earlier.

Mach6 • 3 years ago

It’s just hilarious to read this. Comparing a variable-length array in Fortran, integer A(m,n), with a vector in C++, std::vector<std::vector<int>> A(m, std::vector<int>(n))? I would definitely prefer the first one. Moreover, the first one gives you a contiguous memory allocation, while the second one doesn’t. This can make MPI communication much easier. For dynamic allocation with variable length, Fortran is just allocate(A(m,n)). Again, it’s much neater than malloc in C.

Tiago D'Agostini • 3 years ago

Beep, another one that has no clue. std::vector has a contiguous memory allocation. No one would create a small matrix with a vector of vectors, but only with a single vector and an addressing operator, which solves all the issues. That said, when you are handling huge matrices, you might simply force your system to fail if you try to force a single contiguous allocation, since there might not be a single contiguous block of the size you want. If I am making a 9x9 matrix, it shall be done with a single std::vector. If you are creating a 5000x5000x5000 matrix, it is BAD development practice to do it with a single allocation.

Amir Shahmoradi • 3 years ago

Since Fortran 2003, there has been automatic allocation, which even obviates the need for explicit allocations. As of Fortran 2003, the following is a perfectly fine Fortran code,


integer, allocatable :: A(:), B(:)
A = [1,2]
B = [A, A + 2]
print *, "A =", A
print *, "B =", B
end

try it here:
https://www.tutorialspoint....

disqus_B1vk25qxNZ • 6 years ago

As early high-level languages, both Fortran and Cobol are close to the machine and operating system, with direct SVC calls common. Later languages delegated all that to libraries, PL/I being an early example. But many PL/I library calls were expensive. I remember cases where you could improve performance by a factor of ten by avoiding certain parts of the library.

If students stick with Fortran they're much less likely to step into a library black hole.

I suspect that PL/I with Language Environment (LE) written by a knowledgeable programmer could be quite competitive with Fortran and C/C++ in performance.

That said, I don't know if parallel execution support has been added.

krokodilm • 6 years ago

"The problem is that a ‘const real’ is a different type than a normal ‘real’. If a function that takes a ‘real’ is fed a ‘const real’, it will return an error. It is easy to imagine how this can lead to problems with interoperability between codes." is not true.

Dan Elton • 6 years ago

Sorry - what I was thinking of is that if you have Fortran code or Python code naively interacting with C code, you can run into that type of problem. I haven't confirmed it, but I think you are right: you can pass either to a function.

In checking your statement I came across this; they note how Fortran does memory optimization in function passing under the hood, while you have to do it explicitly in C. That's relevant for people who work with big arrays, which are ubiquitous in all physics-based simulation.
http://duramecho.com/Comput...

strictly speaking... • 6 years ago

The C++ you show is not idiomatic C++. It is C. In a properly written C++ program, you would never need to manually free anything; you're supposed to let RAII destructors do it for you. Similarly, allocating should be done with constructors, or packed into functions.

I still do find Fortran to be much nicer for array code than C or C++ due to the nice column notation, but you're not really making a fair case for the competition.

Dan Elton • 6 years ago

OP here: I'm not a C++ coder so I've never heard of RAII. Thanks for your feedback

Sturla Molden • 2 years ago

This is actually the major argument against C++: you had never heard of RAII. C++ is simply too big and complex to learn, and therefore also too difficult to use. Using C++ safely involves learning software design patterns such as RAII; it is not sufficient to just learn the C++ language. With Fortran or C we do not have to worry about that. We do not have to study software design patterns such as RAII to write bug-free Fortran or C, but with C++ we do. If we want to give science or engineering students a programming tool they can learn to use within a reasonable period of time, it will not be C++. I am not sure that tool is Fortran either, but it is definitely not C++. Matlab or Python are generally much better alternatives for scientists and engineers, although they are not high-performance languages.

hizxlnce • 4 years ago

Most of the comments here appear to be written by computer specialists, rather than physicists.

I spent my entire career in chemistry and had the advantage of being one of the few chemists in the 60's who had command of a computer via Fortran (I even personally knew David Sayre, who was on the original Fortran development team at IBM). For me, the main advantage of Fortran is that it allows someone who has little time to become a computer expert to do sophisticated mathematical modeling. As a chemist or physicist, one must focus on the specific science of that field and not on details like whether a semicolon is in the right place. With a few simple commands (READ, WRITE, IF, DO, GO TO, CONTINUE, DIMENSION, COMMON, CHARACTER, RETURN, END, etc.) one could build some very sophisticated software without ever taking a course in computers. The low entry barrier and easy learning curve for the amateur is the main beauty of Fortran, and would explain the spaghetti code a casual viewer might see.

Amir Shahmoradi • 4 years ago

Nice article. Some of your Fortran examples can be written even more concisely, in particular with the new features of Fortran 2003, 2008, and 2018.

Eisenhower303 • 4 years ago

All good points, but one specific reason that FORTRAN is used is that the code usually lacks pointers, which means there isn’t any potential for pointer aliasing: https://devblogs.nvidia.com...

The possibility of pointer aliasing in C/C++ (at least in older standards) means that the compiler assumes it can happen, resulting in poorly optimized code. Of course good programmers would use the ‘restrict’ keyword which prevents this, but physicists were already discouraged from converting all their validated legacy code from FORTRAN to C/C++ in the first place.

Sturla Molden • 2 years ago

Pointer aliasing is not really a problem any more. Modern compilers have become very good at alias analysis, and modern CPUs have evolved to deal with this problem - hierarchical memory, advanced branch prediction, long pipelines. Finally, at least in C, we now have a “restrict” keyword to inform the compiler that a pointer is not aliased. However, the common experience is that it does not improve performance much. In C++ it typically exists as a compiler extension, though it is not part of the language. What is clear, though, is that C and C++ are nearly always equally fast or faster than Fortran, even without the use of “restrict”, so pointer aliasing cannot really matter anymore. It did matter on the hardware we used in 1985, but not in 2021.

This article can be summarized as: because physicists are stupid and living in the 90s. All this article shows is that you don't know how to use the tools available to you, including Google, or basic things like the fact that C and C++ are different. If you avoid C++ because your tiny little brains can't handle it, just say so, but don't make up reasons based on decades-old incorrect code that isn't even C++.

First of all, it confuses C and C++. Literally half a second of googling would have told them this, but since they obviously didn't bother researching anything, I'll tell them: C is not C++. They are completely different languages.

Second, quite literally every single code example is wrong. It is both bad C++, AND bad C. It's slow, inefficient, and likely to be buggy.

You don't use malloc in a loop like that. Even in C.

Most of the code is barely valid C++, and all of it is decades out of date and incorrect.

Plus, in its "C++ Requires" link, it links to C programming notes from 1999. Decades old. Modern C exists. C11 exists. It's outdated and incorrect.

To allocate an array like that in C++ is simple.

MatrixType<double> array;

Done. Easy. Everything else is handled automatically by the well-written MatrixType. Presumably, since you don't know how to write C++ yourself, you're using a third-party library to provide the type; C++ has many good ones.

And C++ doesn't use malloc and free. It uses new and delete.

This "article" is honestly a disgrace.

Nir • 6 years ago

I don't fully disagree with your comments, that physicists write very poor code, and aren't using better C++ techniques, etc.

However, your comments are way, way too harsh, and just wrong. Physicists are not stupid; the average physicist is quite a bit smarter than the average programmer (source: I've been both). It's just that physicists spend so much time absorbing other new knowledge, about their actual field and domain, that it can be hard to absorb more. I realize that this doesn't quite make sense - you can always say that instead of programming for 2 hours on Thursday, you should have spent those 2 hours learning C++ so that the rest of the month you'd be more efficient. But it is just more difficult, both in actuality and psychologically, to spend those 2 hours learning C++ when you have already spent 6 hours that day attending excruciatingly hard classes and reading excruciatingly hard papers.

Paul Graham has a pretty good essay, "The Top Idea in Your Mind". He talks about how, as soon as his startup went into fundraising mode, technical productivity dropped a lot. It wasn't because fundraising took so many raw hours. It's because he would wake up and think primarily about fundraising, not about solving technical problems. It's the same with physicists. Yes, it would make more sense to learn to program better. But it's just not the idea at the top of people's minds.

Gene • 4 years ago

I had to use Fortran + APL for my master's thesis, which involved the creation of a mathematical & probability model, simulation of the model, and graphics, involving many millions of iterations and collecting results data. That was 30 yrs ago 🤣 before PCs started being widely used. Later, working at F500 companies and startups, I never used Fortran or APL.

Dan Elton • 6 years ago

Your post also touches on a problem I've observed among many in the CS community, which is that they often think their methods are superior because "people with tiny brains can't handle this."

Code should run fast, be easy to read, easy to learn, and easy to use. It shouldn't require a "big brain" and one being a CS genius to implement some physics equations into code.

Dan Elton • 6 years ago

OP here. Well, every time I was taught C++ (in workshops, from tutorials, etc.), people taught me to use malloc and free and to create arrays in a way similar to what I showed.

One of the issues with C++ is that there are many different ways to write things, so it is harder to learn, and different tutorials teach you different styles. I'm sorry if my code was outdated; bear in mind I never did much C++.

"All this article shows is that you don't know how to use the tools available to you, including google, or basic things like C and C++ are different." - well, that's a terribly dumb thing to assume about me. I took CS in college; I am well aware of how different C and C++ are. I did research C++ for the stuff I was doing, but decided it wasn't a good option. Yes, you can argue C++ is a better language because it has object-oriented capabilities, etc., but for what I was doing it wasn't better. Also, Fortran 2003 has OOP. And obviously C++ is more marketable, but at the time I wasn't thinking about marketability.

Literally everyone I know in physics was of the opinion that Fortran was a better language than C++ for the work they were doing, in terms of speed, ease of learning, and readability of complex sequences of mathematical operations on arrays. Given that Fortran has largely been developed by physicists, that is something of an indictment of the computer science community, which is always talking about making better tools. Yes, there is a lot of stupidity involved as well (which I mention in the article), for instance when professors tell students to use Fortran 77 instead of modern Fortran, but honestly I don't think physicists use modern Fortran in general because they are stupid.

That said, my views have changed substantially from the time I wrote this article. Given how fast NumPy is now, I think it's better for physicists to use Python and then only write a Fortran subroutine when absolutely necessary, and call it from Python. Already you see such a trend.

"Everything else is handled automatically by the well written MatrixType." - OK, and I mentioned that libraries exist in my article. But there is no gold-standard matrix library for C++ (equivalent to NumPy in Python, for instance). Remember, transferability (between machines and between generations of grad students) is really important.

Keegan Jones • 6 years ago

I don't mean to sound like a dick, honestly - but that code is really, really dated. In C++ there are now things like std::vector and std::array, where the memory allocation is handled for you. In that case, what you did would be "std::vector<double> our_data(columns*rows);". That's it! This allocates and zero-initializes all the elements. (Doing "our_data.reserve(columns*rows)" instead allocates memory in the background, but if we then try to index the elements, things will break.)

C++ is a complex and very feature-rich language, which is why it's difficult; additionally, there is a lot of legacy knowledge out there that is hard to explain and correct (mostly because it's so widespread). This is also why FORTRAN is so entrenched, and why it works for most scientists: they know it well, they know how to use it effectively, and they've had time to set it up for their needs and environment.

But modern C++ can match or beat the clarity of things like complex operations on arrays. You could use something like std::transform to apply a set of mathematical operations to each element in an array - and in C++17, this may be automatically parallelized (using threads) or vectorized (using SIMD) to be faster. It's neat stuff! The "algorithms" and "numeric" libraries have cool built-in functions: http://en.cppreference.com/... and http://en.cppreference.com/...

Also, to epitomize what I said about C++ being feature dense: I discovered a new thing writing this comment!
http://en.cppreference.com/...

A specialized array that has the strict no-aliasing guarantee from FORTRAN that lets the compiler go crazy with optimizations, along with convenience functions and the like. This is super neat, and I'm glad I found it and got to share it with you :D

I encourage you to watch Bjarne Stroustrup's latest talk from CppCon 2017, if you can spare the time: https://www.youtube.com/wat...

I don't like the tone of comments that act all superior about C++ knowledge, or tear others down for using outdated C++ idioms and methods. It's not their fault, it does nothing to convince them to use C++, and it's just mean.

I really enjoy C++ though, and articles like this pain me because I'm sad to find out your impressions of C++ are so negative! I hope this comment helps clarify things, and doesn't seem asinine. I agree that Python is a better fit - I work in an R&D environment (space stuff, no less, so I've seen my share of FORTRAN code) and it's much easier for our scientists to get up and running with Python to get simple things done. Some modern C++ syntax, like range-based for loops, is positively pythonic (re-using our vector name from earlier): "for(auto& val : our_data)" iterates through each element of our_data in a nice, simple-looking fashion.

Cheers, mate!

Dan Elton • 6 years ago

I don't like any discussions of one language being 'better' than another and I really dislike arrogance of any kind. I actually feel my tone in this blog post (which is almost 3 years old, mind you) was a bit arrogant, which I find distasteful now, looking back. Languages are tools -- all I was trying to do in this post was explain why many people in physics still see modern Fortran as the best tool for the job. The culture gap was pretty large - people in physics spoke highly of Fortran, everyone else sneered when they heard it. People outside of physics think of punch cards, line numbers and all caps variable declarations when they think of Fortran.

The OOP capabilities and universality of C++ obviously make it superior for large coding projects like game development and operating systems. Yes Fortran 2003 has OOP but personally I've never used it or seen anyone use it. There are physics codes like GROMACS for molecular dynamics and many others that use C++. I know some people in physics who work on C++ code, but they are a small minority. Everyone uses Matlab, Fortran, or Python for everything.

I've read a bit about C++ and I always wanted to dig in and do a project in C++ but never had a good chance. It's just so much easier for me to use a language I know well, like Fortran, Python or Matlab. I did C coding when I took CS 101 in college, and the only time I used C++ was in two workshops I attended on parallel programming. I do almost all of my coding in Python now. Like I said, my first recommendation now would be NumPy. Newer comparisons show it to be basically just as fast as C++ or Fortran; the older comparisons you'll find (circa 2011 or so) are not as flattering.

Will watch your video later.

Keegan Jones • 6 years ago

Agreed on not debating which language is best - it's a silly argument to have, and that's not what I was going for in my post. Programming languages are just tools in our toolbox, and it's helpful to know a few. You noted some excellent points - like how C++ really shines in larger projects. It's usually the scale of the project - not the speed, anymore - that decides which language I use. Python and C++ feel like two really great languages to know, in my opinion. FORTRAN is still good at what it does, and "modern" FORTRAN is incredibly far removed from the ages of punch cards and nightmarish assembly-esque programming!

Without creating another reply, I hear what you're saying about physics Ph.D.s not getting to do physics research. I was almost headed down that path myself, before things got tossed all to hell. Speaking of which - why is this post garnering so much attention 3 years down the road (besides being posted on reddit)? It's unfortunate some of it is so negative - at least you're being a content creator, though, and taking the time to write posts AND respond to comments!

Dan Elton • 6 years ago

Also given how few physics Ph.D.s these days actually end up doing physics research, it's really important to think about marketability of skills outside physics.

FORTRAN is faster than C++: https://authors.elsevier.co... Colin Paul Gloster, Comment on "Gamma-ray spectroscopy using angular distribution of Compton scattering" [Nucl. Instr. and Meth. A 1031 (2022) 166502], 1049:167923, 2023

yumadlh • 3 years ago

I need the download for Fortran compiler for HP Windows 10. Where is the download for this?

Marty Sullivan • 4 years ago

I think this article makes a lot of sense, in terms of choosing Fortran over C. However, all of the arguments are moot since NumPy solves all of the code and performance problems mentioned here. Python is a much better tool for writing algorithms for physicists and other scientists and Fortran is so much harder to learn and use for incoming students. Sooner rather than later, Fortran should begin to be phased into a "read-only" language in order to extract algorithms from historical code.

I completely disagree with the author's assertion that "Physicists generally have an understanding of how computers work and are inclined to think in terms of physical processes, such as the transfer of data from disk to RAM and from RAM to CPU cache." This is totally untrue; I know from experience that the majority of incoming students and faculty DO NOT have this understanding at all. They see anything but the science, especially the computer/information science concepts, as a barrier to their research.

Writing software needs to be looked at as an iterative process, not a "once and done" scenario. The first iteration at the front lines of science, with the physicists themselves, should be to write the algorithm out using Python. Then, if and when the algorithm is designed and actually needs to be scaled out for performance, a computer scientist can easily collaborate to write the algorithm out in C to be run on GPU or FPGA. Python can still be used by the physicist to interact with the GPU / FPGA code. Notice how Fortran is completely erased from the equation (as it should be).

I should mention that Matlab is also prevalent, but I completely disagree with its use. With Matlab, students and faculty become locked into a platform that a) costs money, b) creates code that only a few can use because of that cost, and c) is something people outside of science/engineering fields do not use and never will. These three reasons make Matlab a bad platform for disseminating code that can be used and improved upon in the general open-source software community.

Using Python to develop algorithms is a huge win for science and the general public.

OTMPut • 2 years ago

The issue with Python is that if you do serious scientific computation, you need to know another language in which loops execute quickly.
Anything in Python that requires respectable speed is written in C (e.g. NumPy) and called from Python. Why do I need to spend valuable time vectorizing a simple algorithm just so I can use a library written in C? If I cannot vectorize it for neat NumPy usage, I will have to go to C (or Fortran or Julia) to write nested loops.

I think Julia is a better modern alternative for scientific computation. You just need to know Julia. With Python, you will need to know a second powerful language like C to make things work.

Amir Shahmoradi • 4 years ago

Marty, this is a nice summary and good arguments. Here is my experience with a few languages that I frequently interact with (C/Fortran/MATLAB/Python/R/...). Regarding Fortran, modern Fortran (2008, 2018) is an extremely expressive language for scientific computation, with a syntax comparable to Python and MATLAB. No other compiled language has those capabilities and expressiveness for scientific computing, and no other compiled language that I know of has built-in native support for parallel one-sided inter-process communication, which is theoretically independent of the underlying architecture (whether GPU or CPU).

Secondly, language comparisons are always, in my experience, unfair and biased. Let me give you an example: I teach Python, MATLAB, and R at school. I have noticed that when I heavily invest my time in one particular language for teaching and research, I find the other languages clunky and inferior, and this feeling is almost independent of the language being used (C/Fortran/MATLAB/Python/R). In particular, it happens frequently whenever I switch between MATLAB and Python. So I always keep in mind that language bias can be quite prevalent in such discussions. Everybody loves the language they are familiar with, and so they tend to vigorously defend it.

MATLAB/Python/R can be too slow for certain computational tasks, so I cannot rely on them for all of my teaching and research tasks. But I cannot make every tiny compute-intensive bit of my job dependent on a computer scientist who may or may not wish to help my project at a particular time. Asking for computer scientists' help may make sense for very large projects. But for small-to-medium projects that also require deep field expertise, it can be quite costly - essentially equivalent to training a student in both computer science and the domain science of interest. It is a circular loop, in my opinion.

Marty Sullivan • 4 years ago

I'd argue that even for small projects, NumPy / SciPy have everything you need performance-wise. There is no need for a compiled language or a computer scientist, because all of the matrix calculations are processed in compiled code.

I'm not familiar with the built-in inter-process communication in newer versions of Fortran. If it's using only CPUs to perform matrix calculations in an MPI-esque fashion, then it is not worth the time even developing for it, from both an environmental and an efficiency perspective. FPGA / GPU is the only modern answer for scaling these problems out for performance.

Lautaro Lobo • 4 years ago

Thanks! I'm doing some research on Fortran for a blog post and I've found this post and its comment section pretty useful :)

joemmm • 4 years ago

I agree for the most part. Though PL/I also had many of these features in 1980, it didn't survive. I started out with FOCAL and FORTRAN IV, and eventually newer Fortrans in the 1990s. NASTRAN is still mainly Fortran, though some people, usually younger ones, are writing new subroutines in C++.

Oh, one reason to use negative indexes is for problems of calculating properties on a crystal lattice. Then one doesn't have to convert position to matrix indices.

But I am not doing a lot of large computing anymore. I would like to see FORTRAN as part of Visual Studio so I can quickly set up GUIs to speed up user input and have built-in graphical output.
But I can still have efficient computations when I am trying to calculate the plasma distribution from an ion engine and get the plasma properties as a function of position.

Overall, this is a good article, and I want to say that learning Fortran is less time consuming than learning OOP languages.

Val Ryland • 4 years ago
"In the base implementation of C++, merely copying an array requires cycling through all the elements with for loops or a call to a library function."


No, in C++ you'd use either std::array (when the size is known at compile time) or std::vector (when it isn't), both of which have copy constructors. std::vector even has a move constructor which allows a constant-time destructive copy operation for when the right-hand side won't be used anymore. It compiles to a pointer swap. C-style arrays are there mostly for backwards compatibility and should basically not be used in new code.

"C++ requires the following code:"

No, to allocate an array in C++ you'd write something like

std::vector<double> A(nrows * ncols);

Or better yet, you'd use some custom matrix type, which you either write yourself or get from a library. The problems with your snippet are that 1. you're using malloc, which you should virtually never do in C++ - we have new, which actually works correctly with C++ datatypes (unlike malloc) and is safer in various ways; 2. you're allocating non-contiguous memory, which is a cache nightmare (so a performance comparison involving this snippet would be decidedly unfair); and 3. you're manually managing memory, which is a horrible idea and definitely not the C++ way. In the vast majority of C++ code you'll _never_ have to write the words free or delete. For example, if you allocated an array as I indicated above, this is what the corresponding deallocation would look like:

That's right, you don't have to do anything. The lifetime of the vector is resolved at compile time and the freeing logic is generated automatically by the compiler. Corollary: it is possible to write whole C++ programs without a single pointer in them. Pointers are used a lot more sparingly than in C since you get to use things like library types that handle the allocations for you, and references, which obviate the need for pointers for parameter passing in many cases.