(ZDNet)   Like Microsoft and Apple before them, Chrome developers discover that the vast majority of their most serious security vulnerabilities stem from using "memory unsafe" programming languages like C. This is good news... for Rust   (zdnet.com)

988 clicks; posted to Geek » on 25 May 2020 at 8:17 PM (7 weeks ago)



56 Comments
 



 
2020-05-25 5:21:10 PM  
More from using memory unsafe programmers and work practices.

Ship early and often is not the way to make good products.
 
2020-05-25 6:18:42 PM  
That's why I only program in BASIC and FORTRAN
 
2020-05-25 6:55:58 PM  
[Fark user image]
 
2020-05-25 8:23:41 PM  

Sliding Carp: More from using memory unsafe programmers and work practices.

Ship early and often is not the way to make good products.


Hear, hear.

/ I blame Agile.
// I always blame Agile.
 
2020-05-25 8:26:08 PM  
In the past few years a lot of very old and once bulletproof code has been reworked to mitigate exploits like Meltdown and Spectre. If this reworked code is blowing up, it's not the fault of the compiler; it's the fault of shortcutting the design process to meet arbitrary completion dates that never take into consideration the expense associated with newly introduced flaws.
 
2020-05-25 8:29:53 PM  

fragMasterFlash: In the past few years a lot of very old and once bulletproof code has been reworked to mitigate exploits like Meltdown and Spectre. If this reworked code is blowing up, it's not the fault of the compiler; it's the fault of shortcutting the design process to meet arbitrary completion dates that never take into consideration the expense associated with newly introduced flaws.


BUT MVP!!!
 
2020-05-25 8:35:04 PM  
C is not memory-unsafe. You just have to be a disciplined programmer.

Just like Java is not memory-safe. I'm dealing with several memory leaks in Java web applications right now. One memory leak is from a commercial library where the vendor refuses to fix or even acknowledge the problem.

Unfortunately, I can't get one of my customers to switch to a memory-safe (and free) alternative to the commercial library. Their reasons include:

1. We paid for it
2. Open source is always worse - because there is no one responsible for fixing an issue
3. We would have to change our code

Headdesk.
 
2020-05-25 8:42:06 PM  
Yeah, but you can *fix* memory errors because you don't design the program around them. If you write your program in Rust, you've implicitly committed to understanding this kind of farkery. And what's the point of that? Just write safe code!
[res.cloudinary.com image 850x357]
 
2020-05-25 8:47:39 PM  
Don't ship early and often? Write safe code to begin with? Be a disciplined programmer?

Guys, you've convinced me that Rust is the future. Because those things never happen anymore.

BTW, have they stopped making changes to the syntax? I stopped dinking with it because of that.
 
2020-05-25 9:07:10 PM  

GitOffaMyLawn: C is not memory-unsafe. You just have to be a disciplined programmer.

Just like Java is not memory-safe. I'm dealing with several memory leaks in Java web applications right now. One memory leak is from a commercial library where the vendor refuses to fix or even acknowledge the problem.

Unfortunately, I can't get one of my customers to switch to a memory-safe (and free) alternative to the commercial library. Their reasons include:

1. We paid for it
2. Open source is always worse - because there is no one responsible for fixing an issue
3. We would have to change our code

Headdesk.


C is like a sword, but it can chop trees too. You really have to understand your code in order to do it right.

Also subby, you forgot golang.
 
2020-05-25 9:07:48 PM  
And all it costs is a ton of machine code (speed & memory) for bounds checks at most usages, and the occasional multi-second stall while the garbage collector walks its references to see what memory it can reuse. And then there's moving objects around to compensate for fragmentation.

Enjoy your real-time programming
 
2020-05-25 9:14:10 PM  
just use COBOL, duh
 
2020-05-25 9:18:57 PM  

Stibium: GitOffaMyLawn: C is not memory-unsafe. You just have to be a disciplined programmer.

Just like Java is not memory-safe. I'm dealing with several memory leaks in Java web applications right now. One memory leak is from a commercial library where the vendor refuses to fix or even acknowledge the problem.

Unfortunately, I can't get one of my customers to switch to a memory-safe (and free) alternative to the commercial library. Their reasons include:

1. We paid for it
2. Open source is always worse - because there is no one responsible for fixing an issue
3. We would have to change our code

Headdesk.

C is like a sword, but it can chop trees too. You really have to understand your code in order to do it right.

Also subby, you forgot golang.


If you don't understand your C code, it can bite you hard. I spent quite a while reworking some C code because the developer before me relied on a compiler quirk that initialized uninitialized variables. He then scattered his code with if(uninitialized_variable) and if(!uninitialized_variable).

When called on it, his reply was "well, it works for me."

I need to try golang. I'll just add it to my list of things to do.
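For what it's worth, that whole bug class is one Rust flat-out refuses to compile. A minimal sketch, with a made-up variable name, assuming a current stable compiler:

fn main() {
    let flag: bool;                 // declared, never assigned
    // if flag { ... }              // error[E0381]: used binding `flag` isn't initialized

    flag = true;                    // once it's definitely assigned on every path...
    if flag {
        println!("...the read compiles fine");
    }
}

No "helpful" DEC-style default, no garbage from whatever was in that RAM spot; the compiler just won't let you read it until it has a value.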
 
2020-05-25 9:19:56 PM  
I'm always amused by the C programmers, who despite all available evidence, claim that it's possible to write large C code bases without screwing up memory management in ways that create critical security flaws.

If it were that easy, the FAANG companies could surely afford coders who can manage it.

Around 70 percent of all the vulnerabilities in Microsoft products addressed through a security update each year are memory safety issues, a Microsoft engineer revealed last week at a security conference.
 
2020-05-25 9:22:14 PM  

Stibium: you forgot golang.


Garbage collected languages sacrifice the raw speed of C for memory safety.

Rust does not.
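Rough sketch of how it pulls that off: ownership decides the free at compile time, so there's never a collector to pause you.

fn main() {
    {
        let buf = vec![0u8; 1024];      // heap allocation owned by `buf`
        println!("using {} bytes", buf.len());
    }                                   // `buf` goes out of scope; the compiler inserts
                                        // the free right here, no collector involved

    let a = String::from("hello");
    let b = a;                          // ownership moves from `a` to `b`
    println!("{}", b);                  // `a` can't be used anymore; `b` frees the string
}

Same deterministic allocate/free you'd write by hand in C, just checked by the compiler instead of by the programmer's discipline.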
 
2020-05-25 9:22:36 PM  
I'm pretty sure the only thing that could convince me to adopt a new programming language is if it were substantially better at error handling. Rust's trying for it, but is it really any good?
 
2020-05-25 9:27:00 PM  

Unscratchable_Itch: Sliding Carp: More from using memory unsafe programmers and work practices.

Ship early and often is not the way to make good products.

Hear, hear.

/ I blame Agile.
// I always blame Agile.


I always blame Agile Product Owners and the Scrum Masters who enable them. They want the "cool" and "sexy" features out the door ahead of the "boring" stuff that would actually make things more secure. Poorly managed agile has turned development into ADHD factories.
 
2020-05-25 9:42:48 PM  

Another one of them varmits: Don't ship early and often? Write safe code to begin with? Be a disciplined programmer?

Guys, you've convinced me that Rust is the future. Because those things never happen anymore.

BTW, have they stopped making changes to the syntax? I stopped dinking with it because of that.


Then your competitor writes unsafe code without any discipline, allowing them to get to market that much cheaper and that much faster. And people buy it because the user doesn't understand or care how well the code is written as long as it sort of works for not a lot of money. Your competitor leverages their first-mover position to build momentum and shifts the money they didn't spend on doing a good job into marketing. If you've spent your time bringing a superior product to market, they eventually just buy you out for a small fraction of the money you've sunk into your work.

The End.
 
2020-05-25 9:43:34 PM  

BullBearMS: I'm always amused by the C programmers, who despite all available evidence, claim that it's possible to write large C code bases without screwing up memory management in ways that create critical security flaws.

If it were that easy, the FAANG companies could surely afford coders who can manage it.

Around 70 percent of all the vulnerabilities in Microsoft products addressed through a security update each year are memory safety issues, a Microsoft engineer revealed last week at a security conference.


Yeah, I agree.

Memory management is just too hard once the codebase gets big and complicated. And, frankly...*why* should "mere humans" still have to worry about memory management? It seems strange that after 40-odd years of "mass-market computing", we still haven't solved this fundamental problem. Except...we HAVE solved it. But too many programmers are still using C for things they shouldn't be using it for (everything except the lowest-level systems stuff).
 
2020-05-25 9:46:49 PM  
Chrome looks at my RAM like a pedophile driving by a public park.
 
2020-05-25 9:57:11 PM  
S.O.D. - Diamonds and rust (YouTube: SbHacAWg3JQ)
 
2020-05-25 10:08:28 PM  
It's not a memory leak.  When the OS eventually kills the process, you'll get all that memory back!
 
2020-05-25 10:09:36 PM  

BullBearMS: Garbage collected languages sacrifice the raw speed of C for memory safety.


They don't have to. For all its flaws, Objective-C made it easy to apply "Automatic Reference Counting" at compile time, which gave you all the benefits of garbage collection without any of its flaws.

But managing memory is hard. Anybody who's like, "just write good code," is a moron. With malloc/free or new/delete it's trivially easy to encounter real-world situations where the ownership of allocated memory is unclear, or more precisely, where it's difficult to communicate back to the allocator that it is now free to deallocate that memory. Garbage collection doesn't entirely solve this problem.

For a real fun, GC-based memory leak, I was attempting to use C#'s UdpClient and SendAsync to blast UDP packets to hosts. I had no idea if those hosts were up or not, and frankly didn't care. We do this from Linux/MacOS all the time- the packets to dead hosts just drop. For our application, that's fine. Windows, though, keeps the async task running for about 0.3 seconds before it gives up. But I'm sending 30 packets a second to 30 targets, so you can watch the memory just skyrocket by about a gig a minute.

I think forcing the socket to be non-blocking will fix it, but I'm not sure.
 
2020-05-25 10:34:40 PM  

Unscratchable_Itch: Sliding Carp: More from using memory unsafe programmers and work practices.

Ship early and often is not the way to make good products.

Hear, hear.

/ I blame Agile.
// I always blame Agile.


I Need Agile Methodology (YouTube: nvks70PD0Rs)
 
2020-05-25 10:35:37 PM  

GitOffaMyLawn: Stibium: GitOffaMyLawn: C is not memory-unsafe. You just have to be a disciplined programmer.

Just like Java is not memory-safe. I'm dealing with several memory leaks in Java web applications right now. One memory leak is from a commercial library where the vendor refuses to fix or even acknowledge the problem.

Unfortunately, I can't get one of my customers to switch to a memory-safe (and free) alternative to the commercial library. Their reasons include:

1. We paid for it
2. Open source is always worse - because there is no one responsible for fixing an issue
3. We would have to change our code

Headdesk.

C is like a sword, but it can chop trees too. You really have to understand your code in order to do it right.

Also subby, you forgot golang.

If you don't understand your C code, it can bite you hard. I spent quite a while reworking some C code because the developer before me relied on a compiler quirk that initialized uninitialized variables. He then scattered his code with if(uninitialized_variable) and if(!uninitialized_variable).

When called on it, his reply was "well, it works for me."

I need to try golang. I'll just add it to my list of things to do.


Wow, that seemed like rather intentional sabotage. That's some of the lowest-hanging fruit: if a variable doesn't immediately get initialized to some value... the language immediately initializes it to some value for you. I like the idea of nil in golang. If it's nil, you know it's uninitialized. At least that's how I think it works. I love how it can return several values.

It's a natural evolution of C. It lets you do so many advanced C things in a safe and repeatable manner.
 
2020-05-25 10:36:41 PM  

BullBearMS: Stibium: you forgot golang.

Garbage collected languages sacrifice the raw speed of C for memory safety.

Rust does not.


It's compiler dependent at this point.

Crystal is another intriguing language.
 
2020-05-25 10:42:39 PM  

RyansPrivates: Unscratchable_Itch: Sliding Carp: More from using memory unsafe programmers and work practices.

Ship early and often is not the way to make good products.

Hear, hear.

/ I blame Agile.
// I always blame Agile.

I always blame Agile Product Owners and the Scrum Masters who enable them. They want the "cool" and "sexy" features out the door ahead of the "boring" stuff that would actually make things more secure. Poorly managed agile has turned development into ADHD factories.


Indeed.
 
2020-05-25 10:44:51 PM  

t3knomanser: BullBearMS: Garbage collected languages sacrifice the raw speed of C for memory safety.

They don't have to. For all its flaws, Objective-C made it easy to apply "Automatic Reference Counting" at compile time, which gave you all the benefits of garbage collection without any of its flaws.

But managing memory is hard. Anybody who's like, "just write good code," is a moron. With malloc/free or new/delete it's trivially easy to encounter real-world situations where the ownership of allocated memory is unclear, or more precisely, where it's difficult to communicate back to the allocator that it is now free to deallocate that memory. Garbage collection doesn't entirely solve this problem.

For a real fun, GC-based memory leak, I was attempting to use C#'s UdpClient and SendAsync to blast UDP packets to hosts. I had no idea if those hosts were up or not, and frankly didn't care. We do this from Linux/MacOS all the time- the packets to dead hosts just drop. For our application, that's fine. Windows, though, keeps the async task running for about 0.3 seconds before it gives up. But I'm sending 30 packets a second to 30 targets, so you can watch the memory just skyrocket by about a gig a minute.

I think forcing the socket to be non-blocking will fix it, but I'm not sure.


Absolutely.

"Just stop making mistakes" is no solution and other long standing professions have figured that out.

I mostly think it's due to computer programming's origins with a slew of overcompensating douches who thought they were so much smarter than anyone else. Hell you still see it.

For them mistakes are personal failure of other people who don't belong. They're mental giants who never screw up.
 
2020-05-25 10:50:02 PM  
Rust never sleeps
 
2020-05-25 11:04:20 PM  
So when I google Rust, and see it has raw pointers, how am I not to assume it is unsafe in the hands of undisciplined programmers too?
 
2020-05-25 11:05:11 PM  
C is not the problem. Programmers who don't understand the basic fundamentals of computing are the problem. C and pointers are "too hard" for the average Joe, so we have to lower the bar by creating other languages to work around their inabilities.
 
2020-05-25 11:16:26 PM  
[Fark user image]
 
2020-05-25 11:17:14 PM  

t3knomanser: BullBearMS: Garbage collected languages sacrifice the raw speed of C for memory safety.

They don't have to. For all its flaws, Objective-C made it easy to apply "Automatic Reference Counting" at compile time, which gave you all the benefits of garbage collection without any of its flaws.

But managing memory is hard. Anybody who's like, "just write good code," is a moron. With malloc/free or new/delete it's trivially easy to encounter real-world situations where the ownership of allocated memory is unclear, or more precisely, where it's difficult to communicate back to the allocator that it is now free to deallocate that memory. Garbage collection doesn't entirely solve this problem.

For a real fun, GC-based memory leak, I was attempting to use C#'s UdpClient and SendAsync to blast UDP packets to hosts. I had no idea if those hosts were up or not, and frankly didn't care. We do this from Linux/MacOS all the time- the packets to dead hosts just drop. For our application, that's fine. Windows, though, keeps the async task running for about 0.3 seconds before it gives up. But I'm sending 30 packets a second to 30 targets, so you can watch the memory just skyrocket by about a gig a minute.

I think forcing the socket to be non-blocking will fix it, but I'm not sure.


I really don't understand why it is a difficult problem. If you allocate memory, you have to free it. If you don't, the local complexity spirals to infinity and you run out of memory.

The compiler should generate code to prune off that sort of spiral. If it crashes, so be it. Fix the code.
 
2020-05-26 12:30:56 AM  

t3knomanser: BullBearMS: Garbage collected languages sacrifice the raw speed of C for memory safety.

They don't have to. For all its flaws, Objective-C made it easy to apply "Automatic Reference Counting" at compile time, which gave you all the benefits of garbage collection without any of its flaws.

But managing memory is hard. Anybody who's like, "just write good code," is a moron. With malloc/free or new/delete it's trivially easy to encounter real-world situations where the ownership of allocated memory is unclear, or more precisely, where it's difficult to communicate back to the allocator that it is now free to deallocate that memory. Garbage collection doesn't entirely solve this problem.

For a real fun, GC-based memory leak, I was attempting to use C#'s UdpClient and SendAsync to blast UDP packets to hosts. I had no idea if those hosts were up or not, and frankly didn't care. We do this from Linux/MacOS all the time- the packets to dead hosts just drop. For our application, that's fine. Windows, though, keeps the async task running for about 0.3 seconds before it gives up. But I'm sending 30 packets a second to 30 targets, so you can watch the memory just skyrocket by about a gig a minute.

I think forcing the socket to be non-blocking will fix it, but I'm not sure.


One of my recent setups/progs found me using SignalR in what I think is similar to what you describe. Your SignalR "server" just sits there and any connected clients run in kind of an app domain. Clients run the desired "server" function and catch the response via a registered event. There's way more to it than this, but it's what I did in lieu of blasting UDP packets between programs/systems. And there have been no memory problems.
 
2020-05-26 12:54:42 AM  

Vlad_the_Inaner: So when I google Rust, and see it has raw pointers, how am I not to assume it is unsafe in the hands of undisciplined programmers too?


You're not supposed to use them, I assume.

Rust touts its smart pointers and other safety features around them. That's what I assume is the big draw (along with raw speed, due to compiling into assembly in a manner close to C).
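Roughly: you can create raw pointers anywhere, but dereferencing one only compiles inside an unsafe block, so the undisciplined bits are at least easy to grep for. A sketch:

fn main() {
    let x: i32 = 42;
    let p: *const i32 = &x;     // making a raw pointer is safe
    // println!("{}", *p);      // error[E0133]: dereference of raw pointer requires unsafe
    let v = unsafe { *p };      // the programmer explicitly signs off here
    println!("{}", v);
}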
 
2020-05-26 12:54:54 AM  

Short Victoria's War: It's not a memory leak.  When the OS eventually kills the process, you'll get all that memory back!


I LOL'd, thanks!

:D

/needed it, too.
 
2020-05-26 2:17:09 AM  

Sliding Carp: More from using memory unsafe programmers and work practices.


SO very much this. Anyone blaming C is basically lying.
 
2020-05-26 2:53:37 AM  

Stibium: Wow, that seemed to be a rather intentional sabotage. That's one of the low hanging fruit. If a variable doesn't immediately get initialized to some value... it immediately gets initialized to some value.


Nah, DEC compiler versus GCC. DEC was trying to be "helpful". Since the original author developed on a DEC microVAX running Ultrix, he figured all C compilers were so helpful.

When I built and ran this on a Sparc 5 with GCC . . . oops.

I'm showing my age, aren't I.
 
2020-05-26 3:59:44 AM  

cman: That's why I only program in BASIC and FORTRAN


Haskell from orbit is the only way to be sure.
 
2020-05-26 6:23:53 AM  

UberDave: One of my recent setups/progs found me using SignalR in what I think is similar to what you describe.


SignalR is cool, but not quite appropriate for our use case. We have 30 embedded single-board computers that are all running a service called LEDScape, which lets them drive LED panels. Our C# app is the client, not the server, and it generates frame data then blasts packets of RGB data to each one of the LEDScape instances. We use UDP because, from a Linux host (our usual deployment model), if one of those services goes down, we don't care- we just toss packets out into the void. No big deal. This time, because we need Unity multitouch, we need to deploy to Windows (Unity's multitouch features don't work on Linux, and no, we're not going to put a Mac into the kiosk, even though we're all developing on Macs).

Stibium: If you allocate memory, you have to free it. If you don't, the local complexity spirals to infinity and you run out of memory.


If a module in my code allocates memory and gives a pointer to that memory to someone else, how does the originating allocating module know when to free it? It needs the consumer to tell it. If there's only one module consuming the data stored in this block of memory, that's not too big a deal. But what happens when multiple parts of this application need to share that block of memory? How do I know when to free it? Without implementing some kind of reference counting system, it's impossible to know. But now that I'm implementing reference counting, I have to worry about thread safety. I have to worry about some of those consumers misbehaving and perhaps doing multiple releases of their reference, which means my counter is inaccurate and I don't know that, which creates the use-after-free problems that cause so many bugs. Worse, any consumer has the pointer and can free the memory if they desire. They shouldn't, but if I'm the allocator, I have no control over that. And what if one of those modules enters an error state and doesn't ever release its reference? And let's also keep in mind that implementing reference counting isn't the purpose of my program - it's boilerplate bullshiat I have to write because it's required but has nothing to do with whatever the actual purpose of my application is.

In a simple application, managing memory is simple. In a complex application, managing memory is hard. Anyone telling you "oh, it's so easy," is lying.
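That shared-block headache is pretty much exactly what Rust's library-level reference counting is aimed at: Arc does the thread-safe counting, a consumer can't double-release or free early, and the memory goes away when the last clone drops. A sketch of the idea, not a claim that it fixes the C#/Windows case above:

use std::sync::Arc;
use std::thread;

fn main() {
    let block = Arc::new(vec![1u8, 2, 3, 4]);     // one allocation, shared by consumers
    let mut handles = Vec::new();
    for id in 0..3 {
        let shared = Arc::clone(&block);          // bumps the count, no deep copy
        handles.push(thread::spawn(move || {
            println!("consumer {} sees {} bytes", id, shared.len());
        }));                                      // each clone is released when it drops
    }
    for h in handles {
        h.join().unwrap();
    }
}   // last Arc drops here; the allocation is freed exactly once, and no consumer
    // can free it early or release it twice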
 
2020-05-26 6:34:52 AM  

realmolo: BullBearMS: I'm always amused by the C programmers, who despite all available evidence, claim that it's possible to write large C code bases without screwing up memory management in ways that create critical security flaws.

If it were that easy, the FAANG companies could surely afford coders who can manage it.

Around 70 percent of all the vulnerabilities in Microsoft products addressed through a security update each year are memory safety issues, a Microsoft engineer revealed last week at a security conference.

Yeah, I agree.

Memory management is just too hard once the codebase gets big and complicated. And, frankly...*why* should "mere humans" still have to worry about memory management? It seems strange that after 40-odd years of "mass-market computing", we still haven't solved this fundamental problem. Except...we HAVE solved it. But too many programmers are still using C for things they shouldn't be using it for (everything except the lowest-level systems stuff).


For new development? Sure. The problem isn't that engineers choose C for new projects over a safer language, the problem is there are massive, massive legacy codebases written purely in C that are the end result of decades of development, and these codebases are nigh impossible to modernize.
 
2020-05-26 6:41:45 AM  

qorkfiend: The problem isn't that engineers choose C for new projects over a safer language, the problem is there are massive, massive legacy codebases written purely in C that are the end result of decades of development, and these codebases are nigh impossible to modernize.


There are also domains where the C toolchain is the mature one. If you want to fight with it, you can get Rust running on an AVR microcontroller, but it's a pain in the ass and the tooling isn't all that great. Getting C running on an AVR is easy.
 
2020-05-26 8:21:11 AM  

Stibium: I like the idea of nil in golang. If it's nil, you know it's uninitialized. At least that's how I think it works.


Not really. In Go, if you don't initialize a variable it takes the "zero value", which depends on the type. For a pointer it is indeed nil. For a number it's 0. For a string it's the empty string. The bizarre but useful one for me is that slices (lists) default to the empty list but require no memory. Essentially nil == the empty list for slices, which is odd coming from other languages.
 
2020-05-26 8:51:53 AM  
Oh, but you are right that in Go an uninitialized variable can't take on arbitrary values of whatever was in that RAM spot like C.  I've been using higher level languages so long I forgot about that 'feature'.
 
2020-05-26 9:41:41 AM  

Esc7: t3knomanser: BullBearMS: Garbage collected languages sacrifice the raw speed of C for memory safety.

They don't have to. For all its flaws, Objective-C made it easy to apply "Automatic Reference Counting" at compile time, which gave you all the benefits of garbage collection without any of its flaws.

But managing memory is hard. Anybody who's like, "just write good code," is a moron. With malloc/free or new/delete it's trivially easy to encounter real-world situations where the ownership of allocated memory is unclear, or more precisely, where it's difficult to communicate back to the allocator that it is now free to deallocate that memory. Garbage collection doesn't entirely solve this problem.

For a real fun, GC-based memory leak, I was attempting to use C#'s UdpClient and SendAsync to blast UDP packets to hosts. I had no idea if those hosts were up or not, and frankly didn't care. We do this from Linux/MacOS all the time- the packets to dead hosts just drop. For our application, that's fine. Windows, though, keeps the async task running for about 0.3 seconds before it gives up. But I'm sending 30 packets a second to 30 targets, so you can watch the memory just skyrocket by about a gig a minute.

I think forcing the socket to be non-blocking will fix it, but I'm not sure.

Absolutely.

"Just stop making mistakes" is no solution and other long standing professions have figured that out.

I mostly think it's due to computer programming's origins with a slew of overcompensating douches who thought they were so much smarter than anyone else. Hell you still see it.

For them mistakes are personal failure of other people who don't belong. They're mental giants who never screw up.


Look at all the Real Genius "languages don't kill people, people do" apologists in this thread who can't admit that languages have flaws and think the answer is "just don't write bugs"!
 
2020-05-26 10:24:23 AM  

Ambitwistor: Esc7: t3knomanser: BullBearMS: Garbage collected languages sacrifice the raw speed of C for memory safety.

They don't have to. For all its flaws, Objective-C made it easy to apply "Automatic Reference Counting" at compile time, which gave you all the benefits of garbage collection without any of its flaws.

But managing memory is hard. Anybody who's like, "just write good code," is a moron. With malloc/free or new/delete it's trivially easy to encounter real-world situations where the ownership of allocated memory is unclear, or more precisely, where it's difficult to communicate back to the allocator that it is now free to deallocate that memory. Garbage collection doesn't entirely solve this problem.

For a real fun, GC-based memory leak, I was attempting to use C#'s UdpClient and SendAsync to blast UDP packets to hosts. I had no idea if those hosts were up or not, and frankly didn't care. We do this from Linux/MacOS all the time- the packets to dead hosts just drop. For our application, that's fine. Windows, though, keeps the async task running for about 0.3 seconds before it gives up. But I'm sending 30 packets a second to 30 targets, so you can watch the memory just skyrocket by about a gig a minute.

I think forcing the socket to be non-blocking will fix it, but I'm not sure.

Absolutely.

"Just stop making mistakes" is no solution and other long standing professions have figured that out.

I mostly think it's due to computer programming's origins with a slew of overcompensating douches who thought they were so much smarter than anyone else. Hell you still see it.

For them mistakes are personal failure of other people who don't belong. They're mental giants who never screw up.

Look at all the Real Genius "languages don't kill people, people do" apologists in this thread who can't admit that languages have flaws and think the answer is "just don't write bugs"!


Unlike all those coders at the biggest tech companies in the industry, I am very Smrt!

Here's the Microsoft security team explaining how Rust is like C.

When thinking about why Rust is a good alternative, it's good to think about what we can't afford to give up by switching from C or C++ - namely performance and control. Rust, just like C and C++ has a minimal and optional "runtime". Rust's standard library depends on libc for platforms that support it just like C and C++, but the standard library is also optional so running on platforms without an operating system is also possible.

Rust, just like C and C++, also gives the programmer fine-grained control on when and how much memory is allocated allowing the programmer to have a very good idea of exactly how the program will perform every time it is run. What this means for performance in terms of raw speed, control, and predictability, is that Rust, C, and C++ can be thought of in similar terms.

And how Rust is an improvement:

What separates Rust from C and C++ is its strong safety guarantees. Unless explicitly opted-out of through usage of the "unsafe" keyword, Rust is completely memory safe, meaning that the issues we illustrated in the previous post are impossible to express. In a future post, we'll revisit those examples to see how Rust prevents those issues usually without adding any runtime overhead.

Rust statically enforces many properties of a program beyond memory safety, including null pointer safety and data race safety (i.e., no unsynchronized access of a piece of memory from two or more threads).

Safe concurrency is getting to be a bigger and bigger deal all the time in a world where even the cheapest computers have many cores.  It's arguably just as important as memory safety.
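Tiny illustration of the data-race half, as I understand it: you can't get unsynchronized mutable access to shared data past the compiler in safe code.

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // without the Mutex, handing two threads mutable access to the same
                // integer simply wouldn't compile; that's the data race safety part
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("total: {}", *counter.lock().unwrap());
}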
 
2020-05-26 10:56:20 AM  

Nora Gretz: I'm pretty sure the only thing that could convince me to adopt a new programming language is if it were substantially better at error handling. Rust's trying for it, but is it really any good?


Yes.  Rust's Result types combine generics and enums to make generating and catching errors more straightforward than crappy old result codes or constructs like exceptions that are prone to messing up control flow.  It's a big improvement.
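Roughly what that looks like in practice; the function, error enum, and file name here are made up for illustration:

use std::fs;
use std::num::ParseIntError;

#[derive(Debug)]
enum ConfigError {                    // roll your own error enum, or pull in a crate for it
    Io(std::io::Error),
    BadNumber(ParseIntError),
}

fn read_port(path: &str) -> Result<u16, ConfigError> {
    // `?` returns early with the error instead of unwinding like an exception
    let text = fs::read_to_string(path).map_err(ConfigError::Io)?;
    let port = text.trim().parse::<u16>().map_err(ConfigError::BadNumber)?;
    Ok(port)
}

fn main() {
    match read_port("port.txt") {     // the caller has to acknowledge both outcomes
        Ok(port) => println!("listening on {}", port),
        Err(e) => eprintln!("couldn't read config: {:?}", e),
    }
}

And the compiler nags you if you ignore a Result, so errors don't silently fall on the floor the way unchecked return codes do.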
 
2020-05-26 11:02:03 AM  

covfefe: Yeah, but you can *fix* memory errors because you don't design the program around them. If you write your program in Rust, you've implicitly committed to understanding this kind of farkery. And what's the point of that? Just write safe code!
[res.cloudinary.com image 850x357]


Pretty sure this one got fixed when they merged non-lexical lifetimes.
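Can't tell exactly what the image showed, but the classic pre-NLL complaint was a borrow the old checker kept alive to the end of the scope. Something along these lines compiles now and didn't before non-lexical lifetimes landed:

fn main() {
    let mut scores = vec![1, 2, 3];
    let first = &scores[0];            // shared borrow...
    println!("first = {}", first);     // ...and its last use

    scores.push(4);                    // old borrow checker: rejected, `scores` counted as
                                       // borrowed until end of block; with NLL the borrow
                                       // ends after the println! above, so this compiles
    println!("{:?}", scores);
}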
 
2020-05-26 11:04:18 AM  

qorkfiend: realmolo: BullBearMS: I'm always amused by the C programmers, who despite all available evidence, claim that it's possible to write large C code bases without screwing up memory management in ways that create critical security flaws.

If it were that easy, the FAANG companies could surely afford coders who can manage it.

Around 70 percent of all the vulnerabilities in Microsoft products addressed through a security update each year are memory safety issues, a Microsoft engineer revealed last week at a security conference.

Yeah, I agree.

Memory management is just too hard once the codebase gets big and complicated. And, frankly...*why* should "mere humans" still have to worry about memory management? It seems strange that after 40-odd years of "mass-market computing", we still haven't solved this fundamental problem. Except...we HAVE solved it. But too many programmers are still using C for things they shouldn't be using it for (everything except the lowest-level systems stuff).

For new development? Sure. The problem isn't that engineers choose C for new projects over a safer language, the problem is there are massive, massive legacy codebases written purely in C that are the end result of decades of development, and these codebases are nigh impossible to modernize.


Knock yourself out: https://c2rust.com/
 
2020-05-26 2:20:10 PM  
Back in the '80s, on an IBM mainframe, you weren't allowed to access memory that wasn't yours. If you tried, the OS terminated your program.
 
Displayed 50 of 56 comments



This thread is closed to new comments.
