
(Medium)   Did Schnorr just break RSA? There are many factors to consider   (sweis.medium.com)
    More: Giggity, Cryptography, recent paper, Claus P. Schnorr, correctness of the paper, factor 400-bit moduli, significant improvements, Fast Factoring Integers, Schnorr's paper claims  

1164 clicks; posted to STEM » on 04 Mar 2021 at 12:41 AM (9 weeks ago)



19 Comments
 
2021-03-03 6:20:10 PM  
The claims are huge, the proof of the proof is as yet absent. Somebody smort will take a whack at an implementation soon, I'm sure.
 
2021-03-03 7:00:32 PM  
Was the scary tag hacked, subby?

Breaking RSA as described (reducing 800 bit encryption to 36 bits of work) would be very, very bad news for ever making another financial transaction online.

I guess it was always inevitable but I figured RSA was secure until quantum computers grew up and had babies.
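For scale on that "36 bits of work" figure: a quick back-of-the-envelope check. (The 1e9 operations/second rate is my own assumption for a single modern core, not a number from the article.)

```python
# Sanity check on "36 bits of work": how long is 2^36 operations?
# Assumed rate of 1e9 simple operations/second per core is illustrative only.
ops = 2 ** 36
rate = 1e9
seconds = ops / rate
print(f"2^36 = {ops:,} operations ~= {seconds:.0f} s on one core")
```

That works out to about a minute on a single core, versus thousands of core-years for a properly hard 800-bit modulus.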
 
2021-03-03 7:04:20 PM  

OptionC: Was the scary tag hacked, subby?


I couldn't settle on one, but you're right about the gravity of consequence if this actually works and it's not just a smart old man's brain declining into oatmeal.
 
2021-03-03 7:39:02 PM  
My money is still in my accounts, so it hasn't been cracked yet.
 
2021-03-04 1:02:12 AM  
Considering the paper is 2 years old and no demonstration of the factorization has occurred, let's just say I find it unlikely.

I do think there are optimizations to be made in factorization problems, but these are pretty well understood, well enough we have lots of known problems and benchmarks. If you had a working algorithm for faster factoring, it should be trivial to code and prove against some of these known problems.
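The "trivial to code and prove" point deserves emphasis: verifying a claimed factorization is just multiplication, which is exactly why known benchmarks work. A toy sketch (the small primes here are my own illustrative choices, not real challenge moduli):

```python
def verify_factorization(n, factors):
    """Check a claimed factorization by multiplying it back out.
    Verification is trivial even when finding the factors is hard."""
    product = 1
    for f in factors:
        product *= f
    return product == n and all(f > 1 for f in factors)

# Toy semiprime built from two small primes (illustrative only;
# a real demonstration would target an RSA challenge modulus).
p, q = 104729, 1299709
n = p * q
assert verify_factorization(n, [p, q])
assert not verify_factorization(n, [p, q - 1])
```

This asymmetry — easy to check, hard to find — is why the absence of even one demonstrated factorization is telling.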
 
2021-03-04 1:04:38 AM  
He looks fine to me.

[Fark user image, 600x360]
 
2021-03-04 2:07:44 AM  

Tom Marvolo Bombadil: He looks fine to me.

[Fark user image image 600x360]


That's what they used to say about ODB
 
2021-03-04 2:13:37 AM  

OptionC: Was the scary tag hacked, subby?

Breaking RSA as described (reducing 800 bit encryption to 36 bits of work) would be very, very bad news for ever making another financial transaction online.

I guess it was always inevitable but I figured RSA was secure until quantum computers grew up and had babies.


ALL transactions are online transactions now. Yes, even the card reader at your kitschy coffee shop. It requires an internet connection and executes the charge the exact same way a website's checkout process does.
 
2021-03-04 2:22:24 AM  

emtwo: Tom Marvolo Bombadil: He looks fine to me.

[Fark user image image 600x360]

That's what they used to say about ODB


Did they though?
 
2021-03-04 3:52:16 AM  

Quantumbunny: Considering the paper is 2 years old and no demonstration of the factorization has occurred, let's just say I find it unlikely.

I do think there are optimizations to be made in factorization problems, but these are pretty well understood, well enough we have lots of known problems and benchmarks. If you had a working algorithm for faster factoring, it should be trivial to code and prove against some of these known problems.


There are two possible reasons for this as I see it:

a. It's wrong or a hoax (more probable)
b. Something analogous to how the Laundry operates whenever someone gets too close to "Art of Computer Programming vol. III" territory is in effect, suppressing it.

Bear in mind, he's claiming a speed-up of somewhat over ten orders of magnitude (from 2700 core-years to a handful of core-seconds) in factoring efficiency.
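That ten-orders-of-magnitude figure checks out arithmetically (taking "a handful" of core-seconds as 4, my assumption):

```python
import math

core_years = 2700
seconds_per_year = 365 * 24 * 3600
claimed_seconds = 4  # "a handful" of core-seconds, assumed for the estimate
ratio = core_years * seconds_per_year / claimed_seconds
print(f"speed-up ~ {ratio:.2e}, i.e. ~{math.log10(ratio):.1f} orders of magnitude")
```

The ratio comes out a bit over ten orders of magnitude, matching the comment.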
 
2021-03-04 7:23:46 AM  
Wasn't this about a massive speed-up in factoring keys made with bad primes? I thought this led to some new tests that reject numbers which most other primality tests had accepted as prime.

RSA has always had a known weakness if it isn't actually using primes.
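One concrete example of a "bad primes" failure mode (shown for illustration; it may not be the specific weakness the comment above has in mind): if two RSA moduli accidentally share a prime factor, a plain GCD recovers it instantly, no factoring algorithm required.

```python
from math import gcd

# Tiny illustrative primes; real RSA primes are hundreds of digits,
# but gcd is effectively instant even at that size.
p, q1, q2 = 32749, 65521, 65537    # p mistakenly reused in both keys
n1, n2 = p * q1, p * q2

shared = gcd(n1, n2)               # recovers the shared prime directly
assert shared == p
print(f"shared prime {shared}; cofactors {n1 // shared}, {n2 // shared}")
```

Researchers have broken real-world keys this way when bad randomness caused prime reuse across devices.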
 
2021-03-04 8:17:52 AM  
I wouldn't be surprised if the NSA can already do this.

If they can, it's probably only been since 2013, the time of the Snowden revelations: if it had been commonly done back then, it probably would have been buried somewhere in the documents Snowden released. To my knowledge it wasn't, and if it were something they could commonly do, they wouldn't go to all the trouble of using both software and hardware side-channel attacks.

And I would caution against saying things like "Well, it would take X thousand years to break," because we have historical examples of that very logic falling flat on its face numerous times. Perhaps the best-known example is the Enigma machine during WWII. The Germans knew the Enigma wasn't unbreakable. They knew it had weaknesses. They just didn't think anyone would mount an automated, massively parallel attack against it.*

Just because we don't have an example of how to break this kind of encryption available in the open literature doesn't mean that one doesn't exist and isn't being exploited as we speak.

*Although by the last few months of the war they started acting like they knew it was being read by the Allies.
 
2021-03-04 8:47:04 AM  

Quantumbunny: Considering the paper is 2 years old and no demonstration of the factorization has occurred, let's just say I find it unlikely.

I do think there are optimizations to be made in factorization problems, but these are pretty well understood, well enough we have lots of known problems and benchmarks. If you had a working algorithm for faster factoring, it should be trivial to code and prove against some of these known problems.


2 years? This is regarding pre-print papers submitted over the past couple of days. That being said, the community seems to be leaning towards "not gonna work".
 
2021-03-04 9:08:39 AM  

incendi: Quantumbunny: Considering the paper is 2 years old and no demonstration of the factorization has occurred, let's just say I find it unlikely.

I do think there are optimizations to be made in factorization problems, but these are pretty well understood, well enough we have lots of known problems and benchmarks. If you had a working algorithm for faster factoring, it should be trivial to code and prove against some of these known problems.

2 years? This is regarding pre-print papers submitted over the past couple of days. That being said, the community seems to be leaning towards "not gonna work".


It's literally in the article that this paper has been circulating for 2 years.

Not sure what to tell you if the article contains bad information.
 
2021-03-04 9:19:47 AM  

Quantumbunny: incendi: Quantumbunny: Considering the paper is 2 years old and no demonstration of the factorization has occurred, let's just say I find it unlikely.

I do think there are optimizations to be made in factorization problems, but these are pretty well understood, well enough we have lots of known problems and benchmarks. If you had a working algorithm for faster factoring, it should be trivial to code and prove against some of these known problems.

2 years? This is regarding pre-print papers submitted over the past couple of days. That being said, the community seems to be leaning towards "not gonna work".

It's literally in the article that this paper has been circulating for 2 years.

Not sure what to tell you if the article contains bad information.


I'm not sure what the article is trying to say.

The paper in question was recently submitted for publication:

https://eprint.iacr.org/2021/232

Date: received 1 Mar 2021, last revised 3 Mar 2021

Then there is this:

This paper has been circulating for at least 2 years and so far, the nearest implementation is this "Factoring Integers via Lattice Algorithms" code.

Unless Schnorr circulated the paper for two years of comment before formally submitting it for publication, they can't both be true.

Also.....



DID SOMEONE CALL ME SCHNORRER?
 
2021-03-04 10:31:35 AM  
Eh, this was hacked back in the first episode of Agents of SHIELD and nothing bad has happened since then.
 
2021-03-04 12:37:36 PM  
Oatmeal it is:

Sophie, an extremely serious person (@SchmiegSophie): "There is a mistake in the proof given, where a factor of n! is dropped, changing the runtime complexity" (12:22 PM · Mar 4, 2021)
 
2021-03-04 4:06:41 PM  

incendi: Oatmeal it is:

Sophie, an extremely serious person (@SchmiegSophie): "There is a mistake in the proof given, where a factor of n! is dropped, changing the runtime complexity" (12:22 PM · Mar 4, 2021)


Gee, it's missing an n!? That's a pretty huge miss. A Big O of n! is an absolutely terrible algorithmic complexity.

I think we can write this one off.
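For scale on the dropped n! factor: factorial growth swamps even exponentials almost immediately (a quick illustration):

```python
import math

# n! overtakes 2^n at n = 4 and blows past a 64-bit counter by n = 21.
for n in (10, 20, 21):
    print(n, math.factorial(n), 2 ** n)

assert math.factorial(21) > 2 ** 64
```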
 
2021-03-04 5:28:10 PM  

incendi: Oatmeal it is:

Sophie, an extremely serious person (@SchmiegSophie): "There is a mistake in the proof given, where a factor of n! is dropped, changing the runtime complexity" (12:22 PM · Mar 4, 2021)


Good news everyone!  This revolutionary mathematical proof is wrong and civilization is safe!

/ only slightly kidding
 