Which is faster: while (1) or while (2)?
This was an interview question asked by a senior manager.
Which one is faster?
while(1) {
    // Some code
}
or
while(2) {
    // Some code
}
I said that both run at the same speed, since the expression inside while must ultimately evaluate to true or false. In this case both evaluate to true, and there are no extra conditional instructions inside the while condition. So both will have the same execution speed, and I prefer while (1).
But the interviewer said confidently: "Check your basics. while(1) is faster than while(2)." (He was not testing my confidence.)
Is this true?
See also: Is "for(;;)" faster than "while (TRUE)"? If not, why do people use it?
Both loops are infinite, but we can look at which one uses more instructions/resources per iteration.
Using gcc, I compiled the following two programs to assembly at various optimization levels:
int main(void) {
    while(1) {}
    return 0;
}
int main(void) {
    while(2) {}
    return 0;
}
Even with no optimizations (-O0), the generated assembly was identical for both programs. Therefore, there is no speed difference between the two loops.
For reference, here is the generated assembly (using gcc main.c -S -masm=intel with an optimization flag):
With -O0:
.file "main.c"
.intel_syntax noprefix
.def __main; .scl 2; .type 32; .endef
.text
.globl main
.def main; .scl 2; .type 32; .endef
.seh_proc main
main:
push rbp
.seh_pushreg rbp
mov rbp, rsp
.seh_setframe rbp, 0
sub rsp, 32
.seh_stackalloc 32
.seh_endprologue
call __main
.L2:
jmp .L2
.seh_endproc
.ident "GCC: (tdm64-2) 4.8.1"
With -O1:
.file "main.c"
.intel_syntax noprefix
.def __main; .scl 2; .type 32; .endef
.text
.globl main
.def main; .scl 2; .type 32; .endef
.seh_proc main
main:
sub rsp, 40
.seh_stackalloc 40
.seh_endprologue
call __main
.L2:
jmp .L2
.seh_endproc
.ident "GCC: (tdm64-2) 4.8.1"
With -O2 and -O3 (same output):
.file "main.c"
.intel_syntax noprefix
.def __main; .scl 2; .type 32; .endef
.section .text.startup,"x"
.p2align 4,,15
.globl main
.def main; .scl 2; .type 32; .endef
.seh_proc main
main:
sub rsp, 40
.seh_stackalloc 40
.seh_endprologue
call __main
.L2:
jmp .L2
.seh_endproc
.ident "GCC: (tdm64-2) 4.8.1"
In fact, the assembly generated for the loop is identical at every optimization level:
.L2:
jmp .L2
.seh_endproc
.ident "GCC: (tdm64-2) 4.8.1"
The important part being:
.L2:
jmp .L2
I can't read assembly very well, but this is obviously an unconditional loop. The jmp instruction unconditionally sends the program back to the .L2 label without ever comparing a value against true, and of course does so again immediately until the program is somehow terminated. This corresponds directly to the C/C++ code:
L2:
    goto L2;
Edit:
Interestingly enough, even with no optimizations, the following loops all produced exactly the same output (an unconditional jmp) in assembly:
while(42) {}
while(1==1) {}
while(2==2) {}
while(4<7) {}
while(3==3 && 4==4) {}
while(8-9 < 0) {}
while(4.3 * 3e4 >= 2 << 6) {}
while(-0.1 + 02) {}
And even, to my amazement:
#include<math.h>
while(sqrt(7)) {}
while(hypot(3,4)) {}
Things get a little more interesting with user-defined functions:
int x(void) {
    return 1;
}

while(x()) {}
#include <math.h>

double x(void) {
    return sqrt(7);
}

while(x()) {}
At -O0, these two examples actually call x and perform a comparison on each iteration.
First example (returning 1):
.L4:
call x
testl %eax, %eax
jne .L4
movl $0, %eax
addq $32, %rsp
popq %rbp
ret
.seh_endproc
.ident "GCC: (tdm64-2) 4.8.1"
Second example (returning sqrt(7)):
.L4:
call x
xorpd %xmm1, %xmm1
ucomisd %xmm1, %xmm0
jp .L4
xorpd %xmm1, %xmm1
ucomisd %xmm1, %xmm0
jne .L4
movl $0, %eax
addq $32, %rsp
popq %rbp
ret
.seh_endproc
.ident "GCC: (tdm64-2) 4.8.1"
However, at -O1 and above, they both produce the same assembly as the previous examples (an unconditional jmp back to the preceding label).
TL;DR
Under GCC, the different loops are compiled to identical assembly. The compiler evaluates the constant values and doesn't bother performing any actual comparison.
The moral of the story is:
- There exists a layer of translation between C++ source code and CPU instructions, and this layer has important implications for performance.
- Therefore, performance cannot be evaluated by only looking at source code.
- The compiler should be smart enough to optimize such trivial cases. Programmers should not waste their time thinking about them in the vast majority of cases.
Yes, while(1) is much faster than while(2), for a human to read! If I see while(1) in an unfamiliar codebase, I immediately know what the author intended, and my eyeballs can continue to the next line.
If I see while(2), I'll probably halt in my tracks and try to figure out why the author didn't write while(1). Did the author's finger slip on the keyboard? Do the maintainers of this codebase use while(n) as an obscure commenting mechanism to make loops look different? Is it a crude workaround for a spurious warning in some broken static analysis tool? Or is this a clue that I'm reading generated code? Is it a bug resulting from an ill-advised find-and-replace-all, from a bad merge, or from a cosmic ray? Maybe this line of code was supposed to do something dramatically different. Maybe it was supposed to read while(w) or while(x2). Now I have to find the author in the file's history and send them a "WTF" email... and I've broken my mental context. The while(2) might consume several minutes of my time, when while(1) would have taken a fraction of a second!
I'm exaggerating, but only a little. Code readability is really important. And that is worth mentioning in an interview!
The existing answers showing the code generated by a particular compiler for a particular target with a particular set of options do not fully answer the question -- unless the question was asked in that specific context ("Which is faster using gcc 4.7.2 for x86_64 with default options?", for example).
As far as the language definition is concerned, in the abstract machine while (1) evaluates the integer constant 1, and while (2) evaluates the integer constant 2; in both cases the result is compared for equality to zero. The language standard says absolutely nothing about the relative performance of the two constructs.
I can imagine that an extremely naive compiler might generate different machine code for the two forms, at least when compiled without requesting optimization.
On the other hand, C compilers absolutely must evaluate some constant expressions at compile time, when they appear in contexts that require a constant expression. For example, this:
int n = 4;
switch (n) {
    case 2+2: break;
    case 4:   break;
}
requires a diagnostic; a lazy compiler does not have the option of deferring the evaluation of 2+2 until execution time. Since a compiler has to be able to evaluate constant expressions at compile time, there is no good reason for it not to take advantage of that capability even when it is not required.
The C standard (N1570 6.8.5p4) says:
"An iteration statement causes a statement called the loop body to be executed repeatedly until the controlling expression compares equal to 0."
So the relevant constant expressions are 1 == 0 and 2 == 0, both of which evaluate to the int value 0. (These comparisons are implicit in the semantics of the while loop; they don't exist as actual C expressions.)
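To make that reading of the standard concrete, here is a minimal sketch (for illustration only: the (2) == 0 test is a constant expression that any compiler folds away at compile time, and an explicit counter is added just so the example terminates):

#include <stdio.h>

int main(void) {
    unsigned long i = 0;

    /* while (2) { ... } behaves as if the controlling expression were
       compared against 0 before every iteration: */
    for (;;) {
        if ((2) == 0)         /* the implicit "controlling expression == 0" test */
            break;
        if (++i == 1000000)   /* an explicit exit so this sketch terminates */
            break;
    }

    printf("%lu\n", i);       /* prints 1000000 */
    return 0;
}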
A perversely naive compiler could generate different code for the two constructs. For example, for the first it could generate an unconditional infinite loop (treating 1 as a special case), and for the second it could generate an explicit run-time comparison equivalent to 2 != 0. But I've never encountered a C compiler that would actually behave that way, and I seriously doubt that such a compiler exists.
Most compilers (I'm tempted to say all production-quality compilers) have options to request additional optimizations. Under such an option, it's even less likely that any compiler would generate different code for the two forms.
If your compiler generates different code for the two constructs, first check whether the differing code sequences actually have different performance. If they do, try compiling again with an optimization option (if available). If they still differ, submit a bug report to the compiler vendor. It's not (necessarily) a bug in the sense of a failure to conform to the C standard, but it's almost certainly a problem that should be corrected.
Bottom line: while (1) and while(2) almost certainly have the same performance. They have exactly the same semantics, and there's no good reason for any compiler not to generate identical code.
And though it's perfectly legal for a compiler to generate faster code for while(1) than for while(2), it's equally legal for a compiler to generate faster code for while(1) than for another occurrence of while(1) in the same program.
(There's another question implicit in the one you asked: How do you deal with an interviewer who insists on an incorrect technical point. That would probably be a good question for the Workplace site).
Wait a minute. The interviewer, did he look like this guy?
It's bad enough that the interviewer himself has failed this interview, what if other programmers at this company have "passed" this test?
No. Evaluating the expressions 1 == 0 and 2 == 0 should be equally fast. We could imagine poor compiler implementations where one might be faster than the other. But there's no good reason why one should be faster than the other.
Even if there's some obscure circumstance when the claim would be true, programmers should not be evaluated based on knowledge of obscure (and in this case, creepy) trivia. Don't worry about this interview, the best move here is to walk away.
Disclaimer: This is NOT an original Dilbert cartoon. This is merely a mashup.
Your explanation is correct. This seems to be a question that tests your self-confidence in addition to technical knowledge.
By the way, if you answered
Both pieces of code are equally fast, because both take infinite time to complete
the interviewer would say
But while (1) can do more iterations per second; can you explain why? (this is nonsense; testing your confidence again)
So by answering like you did, you saved some time which you would otherwise waste on discussing this bad question.
Here is an example of the code generated by the compiler on my system (MS Visual Studio 2012), with optimizations turned off:
yyy:
xor eax, eax
cmp eax, 1 (or 2, depending on your code)
je xxx
jmp yyy
xxx:
...
With optimizations turned on:
xxx:
jmp xxx
So the generated code is exactly the same, at least with an optimizing compiler.
The most likely explanation for the question is that the interviewer thinks that the processor checks the individual bits of the numbers, one by one, until it hits a non-zero value:
1 = 00000001
2 = 00000010
If the "is zero?" algorithm starts from the right side of the number and has to check each bit until it reaches a non-zero bit, the while(1) { }
loop would have to check twice as many bits per iteration as the while(2) { }
loop.
This requires a very wrong mental model of how computers work, but it does have its own internal logic. One way to check would be to ask if while(-1) { } or while(3) { } would be equally fast, or if while(32) { } would be even slower.
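Purely for illustration, here is a small C sketch of that (incorrect) mental model; the function name and the step counting are invented for this example, and a real CPU tests a whole register against zero in a single operation:

#include <stdbool.h>
#include <stdio.h>

/* Simulate the imagined "is zero?" algorithm: scan bits from the least
   significant end and stop at the first non-zero bit. Under this model,
   testing 2 (binary 10) takes one more step than testing 1 (binary 01). */
static bool is_true_naive(unsigned value, int *steps) {
    *steps = 0;
    for (int bit = 0; bit < 32; ++bit) {
        ++*steps;
        if (value & (1u << bit))
            return true;   /* found a set bit: the value is non-zero */
    }
    return false;          /* every bit was zero */
}

int main(void) {
    int s1, s2;
    is_true_naive(1, &s1);
    is_true_naive(2, &s2);
    printf("steps for 1: %d, steps for 2: %d\n", s1, s2);  /* prints 1 and 2 */
    return 0;
}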
Of course I do not know the real intentions of this manager, but I propose a completely different view: When hiring a new member into a team, it is useful to know how he reacts to conflict situations.
They drove you into conflict. If this is true, they are clever and the question was good. For some industries, like banking, posting your problem to Stack Overflow could be a reason for rejection.
But of course I do not know; I am just proposing one option.
I think the clue is to be found in "asked by a senior manager". This person obviously stopped programming when he became a manager and then it took him/her several years to become a senior manager. Never lost interest in programming, but never wrote a line since those days. So his reference is not "any decent compiler out there" as some answers mention, but "the compiler this person worked with 20-30 years ago".
At that time, programmers spent a considerable percentage of their time trying out various methods for making their code faster and more efficient, as CPU time of 'the central minicomputer' was so valuable. As did people writing compilers. I'm guessing that the one-and-only compiler his company made available at that time optimized on the basis of 'frequently encountered statements that can be optimized', took a bit of a shortcut when encountering a while(1), and evaluated everything else, including a while(2). Having had such an experience could explain his position and his confidence in it.
The best approach to get you hired is probably one that enables the senior manager to get carried away and lecture you 2-3 minutes on "the good old days of programming" before YOU smoothly lead him towards the next interview subject. (Good timing is important here - too fast and you're interrupting the story - too slow and you are labelled as somebody with insufficient focus). Do tell him at the end of the interview that you'd be highly interested to learn more about this topic.
You should have asked him how he reached that conclusion. Under any decent compiler out there, the two compile to the same asm instructions. So he should have told you the compiler as well to begin with. And even then, you would have to know the compiler and platform very well even to make a theoretical educated guess. And in the end, it doesn't really matter in practice, since there are other external factors, like memory fragmentation or system load, that will influence the loop more than this detail.
For the sake of this question, I should add that I remember Doug Gwyn from the C Committee writing that some early C compilers, without the optimizer pass, would generate a test in assembly for while(1) (compared to for(;;), which wouldn't have it).
I would answer the interviewer by giving this historical note and then say that even though I would be very surprised if any compiler did this, a compiler could:
- without an optimizer pass, generate a test for both while(1) and while(2);
- with an optimizer pass, be instructed to optimize (with an unconditional jump) every while(1), because it is considered idiomatic. That would leave while(2) with a test and therefore make a performance difference between the two.
I would of course add that not treating while(1) and while(2) as the same construct is a sign of low-quality optimization, as they are equivalent constructs.
Another take on such a question would be to see if you got courage to tell your manager that he/she is wrong! And how softly you can communicate it.
My first instinct would have been to generate assembly output to show the manager that any decent compiler should take care of it, and if it's not doing so, you will submit the next patch for it :)
Seeing so many people delve into this problem shows exactly why this could very well be a test to see how quickly you want to micro-optimize things.
My answer would be: it doesn't matter that much; I'd rather focus on the business problem we are solving. After all, that's what I'm being paid for.
Moreover, I would opt for while(1) {} because it is more common, and other teammates would not need to spend time figuring out why someone would go for a higher number than 1.
Now go write some code. ;-)
If you're that worried about optimisation, you should use
for (;;)
because that has no tests. (cynic mode)
It seems to me this is one of those behavioral interview questions masked as a technical question. Some companies do this - they will ask a technical question that should be fairly easy for any competent programmer to answer, but when the interviewee gives the correct answer, the interviewer will tell them they are wrong.
The company wants to see how you will react in this situation. Do you sit there quietly and don't push that your answer is correct, due to either self-doubt or fear of upsetting the interviewer? Or are you willing to challenge a person in authority who you know is wrong? They want to see if you are willing to stand up for your convictions, and if you can do it in a tactful and respectful manner.
I used to program C and Assembly code back when this sort of nonsense might have made a difference. When it did make a difference we wrote it in Assembly.
If I had been asked that question, I would have repeated Donald Knuth's famous 1974 quote about premature optimization and walked out if the interviewer didn't laugh and move on.
Maybe the interviewer posed such a dumb question intentionally and wanted you to make 3 points:
- Basic reasoning. Both loops are infinite; it's hard to talk about performance.
- Knowledge about optimisation levels. He wanted to hear from you that if you let the compiler do any optimisation for you, it would optimise away the condition, especially if the block was not empty.
- Knowledge about microprocessor architecture. Most architectures have a special CPU instruction for comparison with 0 (though not necessarily a faster one).
Here's a problem: If you actually write a program and measure its speed, the speed of both loops could be different! For some reasonable comparison:
unsigned long i = 0;
while (1) { if (++i == 1000000000) break; }
unsigned long i = 0;
while (2) { if (++i == 1000000000) break; }
with some code added that prints the time, some random effect like how the loop is positioned within one or two cache lines could make a difference. One loop might by pure chance lie completely within one cache line, or at the start of a cache line, or it might straddle two cache lines. And as a result, whatever the interviewer claims is fastest might actually be fastest - by coincidence.
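A minimal timing harness along those lines might look like the sketch below; clock() resolution, the optimization level, and code placement will dominate any measured difference, so the output should not be read as evidence that either form is faster:

#include <stdio.h>
#include <time.h>

int main(void) {
    unsigned long i;
    clock_t t0, t1;

    i = 0;
    t0 = clock();
    while (1) { if (++i == 1000000000) break; }
    t1 = clock();
    printf("while (1): %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    i = 0;
    t0 = clock();
    while (2) { if (++i == 1000000000) break; }
    t1 = clock();
    printf("while (2): %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    return 0;
}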
Worst case scenario: An optimising compiler doesn't figure out what the loop does, but figures out that the values produced when the second loop is executed are the same ones as produced by the first one. So it generates full code for the first loop, but not for the second.
They are both equal - the same.
According to the specification, anything that is not 0 is considered true, so even without any optimization a good compiler will not generate any code for the while(1) or while(2) condition; at most, the compiler would generate a simple check for != 0.
Judging by the amount of time and effort people have spent testing, proving, and answering this very straightforward question, I'd say that both were made very slow by asking the question.
And so to spend even more time on it ...
"while (2)" is ridiculous, because,
"while (1)", and "while (true)" are historically used to make an infinite loop which expects "break" to be called at some stage inside the loop based upon a condition that will certainly occur.
The "1" is simply there to always evaluate to true and therefore, to say "while (2)" is about as silly as saying "while (1 + 1 == 2)" which will also evaluate to true.
And if you want to be completely silly, just use:
while (1 + 5 - 2 - (1 * 3) == 0.5 - 4 + ((9.0 * 2) / 4)) {  /* both sides evaluate to 1 */
    if (succeed())
        break;
}
I'd like to think your coder made a typo which did not affect the running of the code, but if he intentionally used the "2" just to be weird, then sack him before he puts weird sh!t all through your code, making it difficult to read and work with.
That depends on the compiler.
If it optimizes the code, or if it evaluates 1 and 2 to true with the same number of instructions for a particular instruction set, the execution speed will be the same.
In real cases it will always be equally fast, but it would be possible to imagine a particular compiler and a particular system where this would be evaluated differently.
I mean: this is not really a language (C) related question.
Since people looking to answer this question want the fastest loop, I would have answered that both compile to the same assembly code, as stated in the other answers. Nevertheless, you could suggest 'loop unrolling' to the interviewer: a do {} while loop instead of the while loop.
Caution: you need to ensure that the loop would always run at least once.
The loop should also have a break condition inside.
For that kind of loop I would personally prefer the use of do {} while(42), since any integer except 0 would do the job.
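A minimal do {} while sketch of that suggestion might look like this (the exit condition here is invented just so the example terminates):

#include <stdio.h>

int main(void) {
    int attempts = 0;

    /* The body runs at least once; the loop exits via break inside. */
    do {
        ++attempts;
        if (attempts == 5)   /* stand-in for whatever real exit condition applies */
            break;
    } while (42);            /* any non-zero constant behaves the same as 1 or 2 */

    printf("attempts: %d\n", attempts);
    return 0;
}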
The obvious answer is: as posted, both fragments would run an equally busy infinite loop, which makes the program infinitely slow.
Although redefining C keywords as macros would technically have undefined behavior, it is the only way I can think of to make either code fragment fast at all: you can add this line above the 2 fragments:
#define while(x) sleep(x);
it will indeed make while(1) twice as fast (or half as slow) as while(2).
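Spelled out as a complete (and deliberately silly) program, the trick might look like the sketch below; sleep() here is the POSIX function from <unistd.h>, so this assumes a POSIX system, and redefining a keyword this way remains formally undefined behavior:

#include <stdio.h>
#include <unistd.h>

#define while(x) sleep(x);

int main(void) {
    /* The preprocessor turns each "loop" into a sleep followed by a plain
       block that executes exactly once, so these are no longer loops at all. */
    while(1) { puts("reached after about 1 second"); }
    while(2) { puts("reached after about 2 more seconds"); }
    return 0;
}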
The only reason I can think of why the while(2) would be any slower is:
The code optimizes the loop to
cmp eax, 2
When the subtract occurs you're essentially subtracting
a.
00000000 - 00000010 cmp eax, 2
instead of
b.
00000000 - 00000001 cmp eax, 1
cmp only sets flags and does not store a result. So for the least significant bits we know with b whether we need to borrow or not, whereas with a you have to perform two subtractions before you get a borrow.
Source: https://stackoverflow.com/questions/24848359/which-is-faster-while1-or-while2