
Is it ever OK to *not* use free() on allocated memory?



I'm studying Computer Engineering, and I also have some Electronics courses. I heard from two of my professors (in those courses) that it is possible to avoid using free() (after malloc(), calloc(), etc.), because the allocated memory space will likely never be used again to allocate other memory. That is, for example, if you allocate 4 bytes and then release them, you end up with 4 bytes of space that will probably never be allocated again: you get a hole.

I think that's crazy: you can't have a non-toy program that allocates memory on the heap without ever releasing it. But I don't have the knowledge to explain exactly why it is so important that every malloc() has a matching free().

So: are there circumstances in which it might be appropriate to use malloc() without ever using free()? And if not, how can I explain this to my professors?


Easy: read the source of pretty much any half-serious malloc()/free() implementation. By this I mean the actual memory manager that handles the work behind those calls. It might live in the runtime library, a virtual machine, or the operating system. Of course, the code is not equally accessible in all cases.

Making sure memory does not get fragmented, by joining adjacent holes into larger holes, is very common. More serious allocators use more serious techniques to guarantee this.

So let's assume you do three allocations and de-allocations and get blocks laid out in memory in this order:

+-+-+-+
|A|B|C|
+-+-+-+

The sizes of the individual allocations don't matter. Then you free the first and the last one, A and C:

+-+-+-+
| |B| |
+-+-+-+

When you finally free B as well, you (initially, at least in theory) end up with:

+-+-+-+
| | | |
+-+-+-+

which can be de-fragmented into just

+-+-+-+
|     |
+-+-+-+

i.e. a single larger free block, with no fragments left.
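Here is a minimal sketch of that behaviour (assuming a coalescing allocator such as glibc's; the printed addresses are implementation-defined, so treat it as an illustration, not a guarantee): three blocks are allocated and freed in the order described, and then a single request larger than any one of them is satisfied from the merged hole.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *a = malloc(100);
    char *b = malloc(100);
    char *c = malloc(100);
    printf("A=%p  B=%p  C=%p\n", (void *)a, (void *)b, (void *)c);

    free(a);
    free(c);                   /* two separate holes, with B in between  */
    free(b);                   /* now all three holes are adjacent       */

    char *big = malloc(250);   /* larger than any single original block  */
    printf("big=%p\n", (void *)big);
    /* with a coalescing allocator, 'big' typically starts where A began */

    free(big);
    return 0;
}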

References, as requested:


Other answers have already explained perfectly well that real implementations of malloc() and free() do indeed coalesce (defragment) holes into larger free chunks. But even if that weren't true, it would still be a bad idea to forgo free().

The thing is, your program just allocated (and wants to free) those 4 bytes of memory. If it is going to run for an extended period of time, it is quite likely that it will need to allocate just 4 bytes again at some point. So even if those 4 bytes never coalesce into a larger contiguous space, they can still be reused by the program itself.
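A small sketch of that point (the exact behaviour is allocator-dependent, but easy to observe with common implementations): a freed 4-byte chunk is a perfect fit for the next 4-byte request, so the allocator will usually hand back the very same address.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int *p = malloc(sizeof *p);
    printf("first  allocation: %p\n", (void *)p);
    free(p);

    int *q = malloc(sizeof *q);   /* same size as before */
    printf("second allocation: %p\n", (void *)q);
    /* on most allocators p == q here: the hole was reused, not wasted */

    free(q);
    return 0;
}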


For example, there are many different implementations of malloc, and some of them try to make the heap more efficient, like Doug Lea's or this one.


Is it possible that your professors work with POSIX, by any chance? If they are used to writing lots of small, minimalistic shell applications, that is a scenario where I can imagine this approach not being too bad: freeing the whole heap at once, at the OS's leisure, is faster than freeing a thousand variables one by one. If you expect your application to run for only a second or two, you could easily get away with doing no de-allocation at all.

It is still bad practice, of course (performance improvements should always be based on profiling, not on a vague hunch), and it is not something you should tell students without explaining the other constraints, but I can imagine a lot of tiny piping shell applications being written this way (if they don't use static allocation outright). If you are working on something that benefits from not freeing its variables, you are either working under extreme low-latency constraints (in which case, how can you afford dynamic allocation and C++? :D), or you are doing something very, very wrong (like allocating an integer array by allocating a thousand integers one after another rather than a single block of memory).
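For illustration, a hedged sketch of what such a throwaway filter might look like (strdup() is POSIX; every allocation is deliberately left for the OS to reclaim at exit):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char    line[4096];
    size_t  count = 0;
    char  **kept  = NULL;

    while (fgets(line, sizeof line, stdin)) {
        kept = realloc(kept, (count + 1) * sizeof *kept);
        if (!kept)
            return 1;
        kept[count++] = strdup(line);   /* never freed, on purpose */
    }

    for (size_t i = count; i-- > 0; )   /* print the input in reverse */
        fputs(kept[i], stdout);

    return 0;                           /* the OS reclaims everything here */
}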


You said they are electronics professors. They may be used to writing firmware / real-time software, where being able to time execution accurately is often needed. In those cases, knowing that you have enough memory for all allocations and that you never free or reallocate memory may give a more easily computed bound on execution time.

In some schemes, hardware memory protection may also be used to make sure the routine completes within its allocated memory, or generates a trap in what should be very exceptional cases.
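A sketch of that firmware pattern, with made-up buffer names and sizes: everything is allocated exactly once during initialisation, so the main loop runs with constant memory and no malloc()/free() calls that could add timing jitter.

#include <stdlib.h>

#define SAMPLE_BUF_LEN 256   /* hypothetical sizes, for illustration only */
#define TX_QUEUE_LEN   64

static short    *sample_buf;
static unsigned *tx_queue;

static int system_init(void)
{
    sample_buf = malloc(SAMPLE_BUF_LEN * sizeof *sample_buf);
    tx_queue   = malloc(TX_QUEUE_LEN   * sizeof *tx_queue);
    return sample_buf && tx_queue;   /* allocation can fail at startup, or never */
}

int main(void)
{
    if (!system_init())
        return 1;

    for (;;) {
        /* real-time loop: uses the buffers but never allocates or frees,
           so its worst-case execution time is easier to bound */
    }
}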


Taking this from a different angle than the previous commenters and answers: one possibility is that your professors have experience with systems where memory was allocated statically (i.e., when the program was compiled).

Static allocation happens when you do things like:

#define MAX_SIZE 32
int array[MAX_SIZE];

In many real-time and embedded systems (those most likely to be encountered by EEs or CEs), it is usually preferable to avoid dynamic memory allocation altogether. So uses of malloc, new, and their deletion counterparts are rare. On top of that, memory in computers has exploded in recent years.

If you have 512 MB available to you, and you statically allocate 1 MB, you have roughly 511 MB to trundle through before your software explodes (well, not exactly... but go with me here). Assuming you have 511 MB to abuse, if you malloc 4 bytes every second without freeing them, you will be able to run for more than four years before you run out of memory. Considering many machines shut off once a day, that means your program will never run out of memory!

In the above example, the leak is 4 bytes per second, or 240 bytes/min. Now imagine that you lower that byte/min ratio. The lower that ratio, the longer your program can run without problems. If your mallocs are infrequent, that is a real possibility.

Heck, if you know you're only going to malloc something once, and that malloc will never be hit again, then it's a lot like static allocation, though you don't need to know the size of what you're allocating up-front. E.g.: let's say we have 512 MB again. We need to malloc 32 arrays of integers. These are typical integers, 4 bytes each. We know the sizes of these arrays will never exceed 1024 integers. No other memory allocations occur in our program. Do we have enough memory? 32 * 1024 * 4 = 131,072 bytes, i.e. 128 KB, so yes, we have plenty of space. If we know we will never allocate any more memory, we can safely malloc those arrays without freeing them.

However, this may also mean that you have to restart the machine/device if your program crashes. If you start/stop your program 4,096 times you'll allocate all 512 MB. If you have zombie processes, it's possible that memory will never be freed, even after a crash.
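A sketch of that exact scenario, just to make the numbers concrete (assuming 4-byte ints, as in the text): 32 arrays of at most 1024 ints are allocated once and deliberately never freed.

#include <stdio.h>
#include <stdlib.h>

#define NUM_ARRAYS 32
#define MAX_INTS   1024

int main(void)
{
    int    *arrays[NUM_ARRAYS];
    size_t  total = 0;

    for (int i = 0; i < NUM_ARRAYS; i++) {
        arrays[i] = malloc(MAX_INTS * sizeof **arrays);   /* worst case from the text */
        if (!arrays[i])
            return 1;
        total += MAX_INTS * sizeof **arrays;
    }

    printf("allocated %zu bytes in total, never freed\n", total);   /* 131,072 with 4-byte ints */
    return 0;   /* reclaimed by the OS at process exit */
}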

Save yourself pain and misery, and consume this mantra as The One Truth: malloc should always be associated with a free. new should always have a delete.


I think the claim stated in the question is nonsense if taken literally from the programmer's standpoint, but it has truth (at least some) from the operating system's view.

malloc() will eventually end up calling either mmap() or sbrk() which will fetch a page from the OS.

In any non-trivial program, the chances that this page is ever given back to the OS during the process's lifetime are very small, even if you free() most of the allocated memory. So, most of the time, free()'d memory will only be available to the same process, not to others.
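A Linux/glibc-flavoured sketch of that effect (sbrk() is obsolete and used here only to peek at the program break; details vary between allocators): thousands of small blocks are allocated and then almost all of them freed, yet the break does not move back down, because one surviving block near the top of the heap pins it.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define N 10000

int main(void)
{
    static char *blocks[N];
    void *before = sbrk(0);

    for (int i = 0; i < N; i++)
        blocks[i] = malloc(64);        /* small: served from the brk heap */

    void *after_alloc = sbrk(0);

    for (int i = 0; i < N - 1; i++)    /* free all but the last block */
        free(blocks[i]);

    void *after_free = sbrk(0);
    /* the one surviving block sits near the top of the heap and pins it,
       so the break cannot shrink: the freed pages stay with this process */

    printf("break before malloc: %p\n", before);
    printf("break after  malloc: %p\n", after_alloc);
    printf("break after  free:   %p\n", after_free);
    return 0;
}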


Your professors aren't wrong, but they also are (they are at least misleading or oversimplifying). Memory fragmentation causes problems for performance and efficient use of memory, so sometimes you do have to consider it and take action to avoid it. One classic trick is, if you allocate a lot of things which are the same size, grabbing a pool of memory at startup which is some multiple of that size and managing its usage entirely internally, thus ensuring you don't have fragmentation happening at the OS level (and the holes in your internal memory mapper will be exactly the right size for the next object of that type which comes along).
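A hedged sketch of that classic trick (names like pool_get/pool_put and struct packet are invented for illustration): one slab is grabbed up front for a fixed-size object type, and slots are handed out and reclaimed through a free list, so the general-purpose heap never sees the churn. As noted further down, a tidy application still frees the slab itself on exit.

#include <stdlib.h>

struct packet { char payload[64]; };          /* the fixed-size object type   */

#define POOL_SIZE 1024

static struct packet  *pool;                  /* one slab, grabbed at startup */
static struct packet **free_slots;            /* simple stack of free slots   */
static int             free_top;

static int pool_init(void)
{
    pool       = malloc(POOL_SIZE * sizeof *pool);
    free_slots = malloc(POOL_SIZE * sizeof *free_slots);
    if (!pool || !free_slots)
        return 0;
    for (int i = 0; i < POOL_SIZE; i++)
        free_slots[i] = &pool[i];
    free_top = POOL_SIZE;
    return 1;
}

static struct packet *pool_get(void)          /* no malloc(), no fragmentation */
{
    return free_top > 0 ? free_slots[--free_top] : NULL;
}

static void pool_put(struct packet *p)        /* slot is instantly reusable    */
{
    free_slots[free_top++] = p;
}

static void pool_destroy(void)                /* the only free()s, at exit     */
{
    free(free_slots);
    free(pool);
}

int main(void)
{
    if (!pool_init())
        return 1;

    struct packet *p = pool_get();
    /* ... use p ... */
    pool_put(p);

    pool_destroy();
    return 0;
}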

There are entire third-party libraries which do nothing but handle that kind of thing for you, and sometimes it's the difference between acceptable performance and something that runs far too slowly. malloc() and free() take a noticeable amount of time to execute, which you'll start to notice if you're calling them a lot.

So by avoiding just naively using malloc() and free() you can avoid both fragmentation and performance problems - but when you get right down to it, you should always make sure you free() everything you malloc() unless you have a very good reason to do otherwise. Even when using an internal memory pool a good application will free() the pool memory before it exits. Yes, the OS will clean it up, but if the application lifecycle is later changed it'd be easy to forget that pool's still hanging around...

Long-running applications of course need to be utterly scrupulous about cleaning up or recycling everything they've allocated, or they end up running out of memory.


Your professors are raising an important point. Unfortunately the English usage is such that I'm not absolutely sure what it is they said. Let me answer the question in terms of non-toy programs that have certain memory usage characteristics, and that I have personally worked with.

Some programs behave nicely. They allocate memory in waves: lots of small or medium-sized allocations followed by lots of frees, in repeating cycles. In these programs typical memory allocators do rather well. They coalesce freed blocks and at the end of a wave most of the free memory is in large contiguous chunks. These programs are quite rare.

Most programs behave badly. They allocate and deallocate memory more or less randomly, in a variety of sizes from very small to very large, and they retain a high usage of allocated blocks. In these programs the ability to coalesce blocks is limited and over time they finish up with the memory highly fragmented and relatively non-contiguous. If the total memory usage exceeds about 1.5GB in a 32-bit memory space, and there are allocations of (say) 10MB or more, eventually one of the large allocations will fail. These programs are common.

Other programs free little or no memory, until they stop. They progressively allocate memory while running, freeing only small quantities, and then stop, at which time all memory is freed. A compiler is like this. So is a VM. For example, the .NET CLR runtime, itself written in C++, probably never frees any memory. Why should it?

And that is the final answer. In those cases where the program is sufficiently heavy in memory usage, then managing memory using malloc and free is not a sufficient answer to the problem. Unless you are lucky enough to be dealing with a well-behaved program, you will need to design one or more custom memory allocators that pre-allocate big chunks of memory and then sub-allocate according to a strategy of your choice. You may not use free at all, except when the program stops.
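A hedged sketch of such a custom allocator, in its simplest bump/arena form (function names are illustrative, not a real library): one big chunk is pre-allocated, objects are carved out of it with no per-object free(), and the only free() happens when the program shuts down.

#include <stdlib.h>

struct arena {
    unsigned char *base;
    size_t         used;
    size_t         cap;
};

static int arena_init(struct arena *a, size_t cap)
{
    a->base = malloc(cap);                    /* one big chunk, up front       */
    a->used = 0;
    a->cap  = cap;
    return a->base != NULL;
}

static void *arena_alloc(struct arena *a, size_t n)
{
    n = (n + 15) & ~(size_t)15;               /* keep 16-byte alignment        */
    if (n > a->cap - a->used)
        return NULL;                          /* arena exhausted               */
    void *p = a->base + a->used;
    a->used += n;
    return p;                                 /* note: no per-object free()    */
}

static void arena_release(struct arena *a)
{
    free(a->base);                            /* the only free(), at shutdown  */
    a->base = NULL;
}

int main(void)
{
    struct arena a;
    if (!arena_init(&a, 1 << 20))             /* 1 MB for the whole program    */
        return 1;

    double *samples = arena_alloc(&a, 1000 * sizeof *samples);
    char   *name    = arena_alloc(&a, 64);
    (void)samples; (void)name;                /* ... use them ...              */

    arena_release(&a);                        /* everything goes at once       */
    return 0;
}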

Without knowing exactly what your professors said, for truly production scale programs I would probably come out on their side.

EDIT

I'll have one go at answering some of the criticisms. Obviously SO is not a good place for posts of this kind. Just to be clear: I have around 30 years experience writing this kind of software, including a couple of compilers. I have no academic references, just my own bruises. I can't help feeling the criticisms come from people with far narrower and shorter experience.

I'll repeat my key message: balancing malloc and free is not a sufficient solution to large scale memory allocation in real programs. Block coalescing is normal, and buys time, but it's not enough. You need serious, clever memory allocators, which tend to grab memory in chunks (using malloc or whatever) and free rarely. This is probably the message OP's professors had in mind, which he misunderstood.


I'm surprised that nobody has quoted The Book yet:

This may not be true eventually, because memories may get large enough so that it would be impossible to run out of free memory in the lifetime of the computer. For example, there are about 3·10^13 microseconds in a year, so if we were to cons once per microsecond we would need about 10^15 cells of memory to build a machine that could operate for 30 years without running out of memory. That much memory seems absurdly large by today’s standards, but it is not physically impossible. On the other hand, processors are getting faster and a future computer may have large numbers of processors operating in parallel on a single memory, so it may be possible to use up memory much faster than we have postulated.

http://sarabander.github.io/sicp/html/5_002e3.xhtml#FOOT298

So, indeed, many programs can do just fine without ever bothering to free any memory.


I know about one case where explicitly freeing memory is worse than useless. That is when you need all your data until the end of the process's lifetime; in other words, when freeing it is only possible right before program termination. Since any modern OS takes care of freeing memory when a program dies, calling free() is not necessary in that case. In fact, it may slow down program termination, since it may need to touch several pages of memory.

Source: https://stackoverflow.com/questions/22481134/is-it-ever-ok-to-not-use-free-on-allocated-memory
