
Analysis of a golang process hanging after OOM in Docker

2022-10-23 18:01:22


Go version: 1.16

Background: the Go program runs in Docker and uses a lot of memory. It was frequently oom-killed even before its memory usage reached the Docker limit. To stop the frequent kills, oom-kill was disabled when starting the container, which led to a new problem.

Symptom: once the container's memory fills up, the Go process hangs and gives no response at all (with no spare memory the system cannot allocate new fds, so it cannot serve anything). Even a built-in check that restarts the process when memory reaches a limit never fires; the only option is to kill it.

pprof showed that much of the process's memory could be reclaimed at GC time, so the first suspect was the Go process itself.

Before the hang set in, log in to the container and run a small Go test program that only allocates a small chunk of memory and then sleeps, started with GODEBUG=gctrace=1 to print GC information. The STW time of the mark phase reached 31 s (in the trace, 31823+15+0.11 ms are the wall-clock times of STW sweep termination, concurrent mark and scan, and STW mark termination).

Next suspicion: maybe the allocation failed without triggering an OOM exit. Check the OOM-related logic in the Go runtime:

mgcwork.go:374

if s == nil {
   systemstack(func() {
      s = mheap_.allocManual(workbufAlloc/pageSize, spanAllocWorkBuf)
   })
   if s == nil {
      throw("out of memory")
   }
   // Record the new span in the busy list.
   lock(&work.wbufSpans.lock)
   work.wbufSpans.busy.insert(s)
   unlock(&work.wbufSpans.lock)
}

mheap gets its memory from mmap, so the next suspicion was that inside Docker mmap might be failing without returning a non-zero error code, so that neither throw is reached:

func sysMap(v unsafe.Pointer, n uintptr, sysStat *sysMemStat) {
   sysStat.add(int64(n))
   p, err := mmap(v, n, _PROT_READ| _PROT_WRITE, _MAP_ANON| _MAP_FIXED| _MAP_PRIVATE, -1, 0)
   if err == _ENOMEM {
      throw("runtime: out of memory")
   }
   if p != v || err != 0 {
      throw("runtime: cannot map pages in arena address space")
   }
}

To verify by comparison, run a small C program that calls mmap in a loop, in the same container at the same time:

#include <sys/mman.h>
#include <unistd.h>
#include <errno.h>
#include <stdio.h>

#define BUF_SIZE 393216

int main(void) {
    char *addr;
    int i;
    for (i = 0; i < 1000000; i++) {
        addr = (char *)mmap(NULL, BUF_SIZE, PROT_READ | PROT_WRITE,
                MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
        if (addr != MAP_FAILED) {
            addr[0] = 'a';
            addr[BUF_SIZE-1] = 'b';
            printf("i:%d, sz: %d, addr[0]: %c, addr[-1]: %c\n", i, BUF_SIZE, addr[0], addr[BUF_SIZE-1]);
            munmap(addr, BUF_SIZE);
        } else {
            printf("error no: %d\n", errno);
        }
        usleep(1000000);
    }
    return 0;
}

mmap never failed, and the C program hung in exactly the same way, so this is not a problem with Go's mechanisms; the process must be blocked in a system call. Reading the kernel call stack shows it is hung inside the cgroup code:

[<ffffffff81224d65>] mem_cgroup_oom_synchronize+0x275/0x340
[<ffffffff811a068f>] pagefault_out_of_memory+0x2f/0x74
[<ffffffff81066bed>] __do_page_fault+0x4bd/0x4f0
[<ffffffff81801605>] async_page_fault+0x45/0x50
[<ffffffffffffffff>] 0xffffffffffffffff

The Go process's threads show the same stacks: besides threads parked in futex waits and nanosleep, several are blocked in mem_cgroup_oom_synchronize:

[<ffffffff81103681>] futex_wait_queue_me+0xc1/0x120
[<ffffffff81104086>] futex_wait+0xf6/0x250
[<ffffffff8110647b>] do_futex+0x2fb/0xb20
[<ffffffff81106d1a>] SyS_futex+0x7a/0x170
[<ffffffff81003948>] do_syscall_64+0x68/0x100
[<ffffffff81800081>] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[<ffffffffffffffff>] 0xffffffffffffffff
[<ffffffff810f3ffe>] hrtimer_nanosleep+0xce/0x1e0
[<ffffffff810f419b>] SyS_nanosleep+0x8b/0xa0
[<ffffffff81003948>] do_syscall_64+0x68/0x100
[<ffffffff81800081>] entry_SYSCALL_64_after_hwframe+0x3d/0xa2
[<ffffffffffffffff>] 0xffffffffffffffff
[<ffffffff81224c5a>] mem_cgroup_oom_synchronize+0x16a/0x340
[<ffffffff811a068f>] pagefault_out_of_memory+0x2f/0x74
[<ffffffff81066bed>] __do_page_fault+0x4bd/0x4f0
[<ffffffff81801605>] async_page_fault+0x45/0x50
[<ffffffffffffffff>] 0xffffffffffffffff
[<ffffffff81224c5a>] mem_cgroup_oom_synchronize+0x16a/0x340
[<ffffffff811a068f>] pagefault_out_of_memory+0x2f/0x74
[<ffffffff81066bed>] __do_page_fault+0x4bd/0x4f0
[<ffffffff81801605>] async_page_fault+0x45/0x50
[<ffffffffffffffff>] 0xffffffffffffffff
[<ffffffff81224c5a>] mem_cgroup_oom_synchronize+0x16a/0x340
[<ffffffff811a068f>] pagefault_out_of_memory+0x2f/0x74
[<ffffffff81066bed>] __do_page_fault+0x4bd/0x4f0
[<ffffffff81801605>] async_page_fault+0x45/0x50
[<ffffffffffffffff>] 0xffffffffffffffff

Reading the cgroup memory-control code, the policy is: when no memory is available and the process has oom-kill disabled, the faulting task is parked on a wait queue and is only woken from the head of the queue once memory becomes available again. There is no setting or other mechanism to bypass this logic.

elixir.bootlin.com/linux/v4.14…

/**
 * mem_cgroup_oom_synchronize - complete memcg OOM handling
 * @handle: actually kill/wait or just clean up the OOM state
 *
 * This has to be called at the end of a page fault if the memcg OOM
 * handler was enabled.
 *
 * Memcg supports userspace OOM handling where failed allocations must
 * sleep on a waitqueue until the userspace task resolves the
 * situation.  Sleeping directly in the charge context with all kinds
 * of locks held is not a good idea, instead we remember an OOM state
 * in the task and mem_cgroup_oom_synchronize() has to be called at
 * the end of the page fault to complete the OOM handling.
 *
 * Returns %true if an ongoing memcg OOM situation was detected and
 * completed, %false otherwise.
 */
bool mem_cgroup_oom_synchronize(bool handle)
{
        struct mem_cgroup *memcg = current->memcg_in_oom;
        struct oom_wait_info owait;
        bool locked;
        /* OOM is global, do not handle */
        if (!memcg)
                return false;
        if (!handle)
                goto cleanup;
        owait.memcg = memcg;
        owait.wait.flags = 0;
        owait.wait.func = memcg_oom_wake_function;
        owait.wait.private = current;
        INIT_LIST_HEAD(&owait.wait.entry);
        prepare_to_wait(&memcg_oom_waitq, &owait.wait, TASK_KILLABLE);
        mem_cgroup_mark_under_oom(memcg);
        locked = mem_cgroup_oom_trylock(memcg);
        if (locked)
                mem_cgroup_oom_notify(memcg);
        if (locked && !memcg->oom_kill_disable) {
                mem_cgroup_unmark_under_oom(memcg);
                finish_wait(&memcg_oom_waitq, &owait.wait);
                mem_cgroup_out_of_memory(memcg, current->memcg_oom_gfp_mask,
                                         current->memcg_oom_order);
        } else {
                schedule();
                mem_cgroup_unmark_under_oom(memcg);
                finish_wait(&memcg_oom_waitq, &owait.wait);
        }
        if (locked) {
                mem_cgroup_oom_unlock(memcg);
                /*
                 * There is no guarantee that an OOM-lock contender
                 * sees the wakeups triggered by the OOM kill
                 * uncharges.  Wake any sleepers explicitly.
                 */
                memcg_oom_recover(memcg);
        }
cleanup:
        current->memcg_in_oom = NULL;
        css_put(&memcg->css);
        return true;
}

Conclusion:

After the container's memory is exhausted, the Go GC's mark phase needs new memory to record the marked objects and calls mmap to get it. The mmap itself succeeds, but touching the new pages faults, the cgroup charge fails for lack of memory, and the task is parked in the kernel's cgroup OOM wait queue. The GC can therefore never finish, so no memory is ever freed, and the program stays in its stop-the-world phase forever: it cannot serve requests and cannot recover even when load drops. It is best not to disable Docker's oom-kill.
