Project Euler #3: Largest prime factor
@Yesh_20's solution was a bit hard to understand, but I get it now. I'm chipping in with mine. What I did is as follows:
Define a recursive function `lpf` (for largest prime factor). It takes two arguments: a number `n`, and a lower bound below which `n` has no factors.
Loop on `i`. Starting at that lower bound, go up to `n` (in practice, `sqrt(n)` is enough, as I note below). When `i` divides `n`, do `n = n / i` until `n` is no longer divisible. This ensures that no integer less than or equal to `i` divides `n`.
Now, the largest prime factor is either equal to the prime number `i` or still hidden inside `n`. `i` must be prime, since, before getting to `i`, we have ensured that no integer smaller than `i` divides `n`. If `i` were not prime, one of its factors would have divided `n` before we could get to `i`.
Since `n` has no prime factor smaller than or equal to `i`, we can run it through `lpf` again with the lower bound being the last `i` value. The largest prime factor is then the larger of `i` and `lpf(n after repeated division by i)`.
Just in case `n` itself is prime, we make `lpf(n)` return `n` when no `i` divides `n` after iterating from the lower bound to the upper bound.
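Here's a minimal sketch of that recursion in Python. It's not my exact submission, and it assumes `n >= 2`:

```python
from math import isqrt

def lpf(n, lower_bound=2):
    """Largest prime factor of n, assuming n has no factor below lower_bound."""
    for i in range(lower_bound, isqrt(n) + 1):
        if n % i == 0:
            # i is prime here: any smaller factor was already divided out.
            while n % i == 0:   # divide i out completely
                n //= i
            if n == 1:
                return i        # n was a power of i
            return max(i, lpf(n, i))  # recurse, resuming from i
    return n  # no i up to sqrt(n) divides n, so n itself is prime

print(lpf(600851475143))  # Project Euler #3 input -> 6857
```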
Compared to the solution @Yesh_20 described, my code is better in that:
- When a factor `i` is found, instead of dividing once, I divide until `n` is no longer divisible by `i`.
- I have only one loop whereas he has two (after recursion, I continue from where I left off last time).
- It is forged in one piece instead of being a patchwork of optimizations on an inefficient algorithm.
His main advantage over me is this: he skips every second number instead of testing each `i` for `n % i == 0`. Although I expect his code to be much, much less efficient than mine for large inputs, I think that skipping every second number might make his solution better than mine on small inputs. And real-world inputs are often small.
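For what it's worth, the two ideas combine naturally: pull the factor 2 out first, then step `i` by 2 while still dividing each factor out fully. A rough iterative sketch (this combination is mine, not @Yesh_20's actual code):

```python
def lpf_combined(n):
    largest = 1
    while n % 2 == 0:           # handle 2 once, so we can skip all even i
        largest = 2
        n //= 2
    i = 3
    while i * i <= n:
        while n % i == 0:       # divide each factor out completely
            largest = i
            n //= i
        i += 2                  # test odd candidates only
    return n if n > 1 else largest

print(lpf_combined(600851475143))  # -> 6857
```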
Instead of iterating `i in range(lower_bound, sqrt(n))`, using a sieve of Eratosthenes (yeah, that name's a mouthful) to iterate through primes between the lower bound and the square root might be more efficient for large numbers, as we could skip a lot of `n % i == 0` tests for the composite numbers in between. I'm not sure how, but I think the prime factors we've already found could also go into the sieve and make things faster.
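Something like this is what I have in mind: sieve up to `sqrt(n)` once, then trial-divide by primes only. Just a sketch, and I haven't measured whether building the sieve pays for itself:

```python
from math import isqrt

def primes_up_to(limit):
    """Sieve of Eratosthenes: all primes <= limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0:2] = [False, False]
    for p in range(2, isqrt(limit) + 1):
        if is_prime[p]:
            # cross off multiples of p starting at p*p
            is_prime[p * p :: p] = [False] * len(range(p * p, limit + 1, p))
    return [p for p in range(2, limit + 1) if is_prime[p]]

def lpf_sieve(n):
    largest = 1
    for p in primes_up_to(isqrt(n)):  # prime candidates only
        while n % p == 0:
            largest = p
            n //= p
    return n if n > 1 else largest    # a leftover n > 1 is itself prime

print(lpf_sieve(600851475143))  # -> 6857
```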
Any thoughts?