This article introduces the reactor pattern step by step: it starts by wrapping epoll, gradually turns that wrapper into a reactor model, and finishes with the four common reactor variants.
Reactor is a high-concurrency server model. It is a framework and a concept rather than a fixed piece of code, so there is no single canonical reactor implementation; many variants exist, and they are covered later in the article.
Ingredients: non-blocking IO (with blocking IO, what happens when the send buffer is full? The call simply blocks) plus IO multiplexing. Characteristics: built on an event loop, with the business logic implemented through event-driven callbacks.
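As a small illustration of the first ingredient, here is a minimal sketch of putting a socket into non-blocking mode and handling a full send buffer via EAGAIN instead of blocking the whole loop (the helper names set_nonblock and try_send are illustrative, not part of the demo that follows):

#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>

// Sketch: with a blocking fd, send() would stall the entire event loop when the
// send buffer is full; with O_NONBLOCK it returns -1/EAGAIN and we can simply
// wait for EPOLLOUT and retry later.
static int set_nonblock(int fd) {
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0) return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

static int try_send(int fd, const char *buf, int len) {
    int n = (int) send(fd, buf, len, 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        return 0; // send buffer full: register EPOLLOUT and retry when it fires
    }
    return n; // bytes sent, or -1 on a real error
}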
The IO in a reactor goes through select, poll or epoll, i.e. IO multiplexing. With IO multiplexing the system does not have to create and maintain a large number of threads; a single thread with a single selector can serve tens of thousands of connections at once, which greatly reduces system overhead.
The name "reactor" means exactly that: it reacts. The IO managed by epoll is turned into events, such as read events and write events; when a read event arrives, the reactor reacts immediately and runs the callback that was registered for it in advance.
Recall how an ordinary function call works: the program calls a function, the function runs, the program waits, the function hands the result and control back, and the program carries on. The reactor is an event-driven mechanism, and the difference from an ordinary call is that the application does not actively call some API to get the work done; quite the opposite, the reactor inverts the flow of event handling. The application provides the appropriate interfaces and registers them with the reactor, and when the corresponding event occurs, the reactor calls the registered interface on the application's behalf. These interfaces are what we call "callback functions".
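To make this inversion of control concrete, here is a minimal sketch of registering a callback that the loop, not the application, decides when to invoke (the names are illustrative and are not the demo's own types):

typedef int (*event_handler)(int fd, int events, void *arg);

// what the application registers with the reactor
struct handler_entry {
    int fd;
    event_handler cb; // the "callback function" provided by the application
    void *arg;
};

// the framework calls back into the application when the event actually occurs
static void dispatch(struct handler_entry *e, int ready_events) {
    e->cb(e->fd, ready_events, e->arg);
}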
Put plainly, a reactor is a wrapper around epoll that decouples network IO from the business logic: epoll stops managing raw IO and starts managing events, and the whole program is driven by those events. When a ready event comes back, the reactor drives the corresponding callback; with bare epoll you have to work out what to do with each ready fd yourself.
The reactor is a common pattern for handling concurrent I/O and is used with synchronous I/O. The central idea is to register every I/O event to be handled with a central I/O multiplexer (epoll) while the main thread/process blocks on that multiplexer;
as soon as an I/O event arrives or becomes ready (a file descriptor or socket becomes readable or writable), the multiplexer returns and the I/O event registered beforehand is dispatched to its handler.
A reactor model has three important components:
Multiplexer: provided by the operating system; on Linux this is usually the select, poll or epoll system call.
Event dispatcher: hands the ready events returned by the multiplexer to the corresponding handlers.
Event handler: the routine responsible for handling one specific kind of event.
The concrete flow:
A connection corresponds to a file descriptor (fd), and each connection (fd) has its own events (read, write). All fds are set to non-blocking, so two buffers are added per connection; their size depends on the business requirements.
struct ntyevent {
    int fd;                        // socket fd
    int events;                    // events of interest
    char sbuffer[BUFFER_LENGTH];   // send buffer
    int slength;
    char rbuffer[BUFFER_LENGTH];   // receive buffer
    int rlength;
    // typedef int (*NtyCallBack)(int, int, void *);
    NtyCallBack callback;          // callback function
    void *arg;
    int status;                    // 1: already in epoll (MOD), 0: not added yet
};
The socket fd is now wrapped in an ntyevent, so how many ntyevent instances are there? When this demo initializes the reactor it simply points *events at an array of 1024 ntyevent entries (in reality clients can keep connecting and there may be far more than 1024 of them; a later article deals with that, here it is kept simple). The epfd obviously has to be wrapped in as well, nothing more to say about that.
struct ntyreactor {
    int epfd;
    struct ntyevent *events; // struct ntyevent events[1024];
};
As said above, the event handlers are written as callbacks. The fd parameter tells the callback which connection it belongs to, events says what kind of event occurred, and arg carries the ntyreactor (with the later multi-thread/multi-process versions in mind, making ntyreactor a global variable did not feel like a good idea).
typedef int (*NtyCallBack)(int, int, void *);

int recv_cb(int fd, int events, void *arg);
int send_cb(int fd, int events, void *arg);
int accept_cb(int fd, int events, void *arg);
Two examples. We know the first socket is always listenfd, used for listening, so the first step is to fill in the fields of its ntyevent. And when a read event has been handled and the fd should now wait for a write event, the read callback naturally has to be swapped for the write callback.
void nty_event_set(struct ntyevent *ev, int fd, NtyCallBack callback, void *arg) {
    ev->fd = fd;
    ev->callback = callback;
    ev->events = 0;
    ev->arg = arg;
}
int nty_event_add(int epfd, int events, struct ntyevent *ntyev) {
    struct epoll_event ev = {0, {0}};
    ev.data.ptr = ntyev;
    ev.events = ntyev->events = events;

    int op;
    if (ntyev->status == 1) {
        op = EPOLL_CTL_MOD;
    } else {
        op = EPOLL_CTL_ADD;
        ntyev->status = 1;
    }

    if (epoll_ctl(epfd, op, ntyev->fd, &ev) < 0) {
        printf("event add failed [fd=%d], events[%d], err:%s, err:%d\n",
               ntyev->fd, events, strerror(errno), errno);
        return -1;
    }
    return 0;
}
int nty_event_del(int epfd, struct ntyevent *ev) {
    struct epoll_event ep_ev = {0, {0}};
    if (ev->status != 1) {
        return -1;
    }
    ep_ev.data.ptr = ev;
    ev->status = 0;
    epoll_ctl(epfd, EPOLL_CTL_DEL, ev->fd, &ep_ev);
    //epoll_ctl(epfd, EPOLL_CTL_DEL, ev->fd, NULL);
    return 0;
}
This is the callback that gets triggered for a read event. The concrete code depends on the business, so it is mostly here for illustration (this one reads once and then switches the fd to a write event).
int recv_cb(int fd, int events, void *arg) {
    struct ntyreactor *reactor = (struct ntyreactor *) arg;
    struct ntyevent *ntyev = &reactor->events[fd];

    int len = recv(fd, ntyev->buffer, BUFFER_LENGTH - 1, 0); // leave room for the terminating '\0'
    nty_event_del(reactor->epfd, ntyev);

    if (len > 0) {
        ntyev->length = len;
        ntyev->buffer[len] = '\0';
        printf("C[%d]:%s\n", fd, ntyev->buffer);

        nty_event_set(ntyev, fd, send_cb, reactor);
        nty_event_add(reactor->epfd, EPOLLOUT, ntyev);
    } else if (len == 0) {
        close(ntyev->fd);
        printf("[fd=%d] pos[%ld], closed\n", fd, ntyev - reactor->events);
    } else {
        close(ntyev->fd);
        printf("recv[fd=%d] error[%d]:%s\n", fd, errno, strerror(errno));
    }
    return len;
}
This is the callback that gets triggered for a write event; again the concrete code depends on the business (this one writes back whatever the read event received and then switches back to a read event, effectively an echo server).
int send_cb(int fd, int events, void *arg) {
    struct ntyreactor *reactor = (struct ntyreactor *) arg;
    struct ntyevent *ntyev = &reactor->events[fd];

    int len = send(fd, ntyev->buffer, ntyev->length, 0);
    if (len > 0) {
        printf("send[fd=%d], [%d]%s\n", fd, len, ntyev->buffer);

        nty_event_del(reactor->epfd, ntyev);
        nty_event_set(ntyev, fd, recv_cb, reactor);
        nty_event_add(reactor->epfd, EPOLLIN, ntyev);
    } else {
        close(ntyev->fd);
        nty_event_del(reactor->epfd, ntyev);
        printf("send[fd=%d] error %s\n", fd, strerror(errno));
    }
    return len;
}
This is essentially just accept(), followed by adding the new fd to epoll for monitoring.
int accept_cb(int fd, int events, void *arg) {
    struct ntyreactor *reactor = (struct ntyreactor *) arg;
    if (reactor == NULL) return -1;

    struct sockaddr_in client_addr;
    socklen_t len = sizeof(client_addr);

    int clientfd;
    if ((clientfd = accept(fd, (struct sockaddr *) &client_addr, &len)) == -1) {
        printf("accept: %s\n", strerror(errno));
        return -1;
    }
    printf("client fd = %d\n", clientfd);

    if ((fcntl(clientfd, F_SETFL, O_NONBLOCK)) < 0) {
        printf("%s: fcntl nonblocking failed, %d\n", __func__, MAX_EPOLL_EVENTS);
        return -1;
    }

    nty_event_set(&reactor->events[clientfd], clientfd, recv_cb, reactor);
    nty_event_add(reactor->epfd, EPOLLIN, &reactor->events[clientfd]);

    printf("new connect [%s:%d][time:%ld], pos[%d]\n",
           inet_ntoa(client_addr.sin_addr), ntohs(client_addr.sin_port),
           reactor->events[clientfd].last_active, clientfd);
    return 0;
}
This simply moves the original epoll_wait loop out of main() and into the ntyreactor_run function.
int ntyreactor_run(struct ntyreactor *reactor) {
    if (reactor == NULL) return -1;
    if (reactor->epfd < 0) return -1;
    if (reactor->events == NULL) return -1;

    struct epoll_event events[MAX_EPOLL_EVENTS];

    int i;
    while (1) {
        int nready = epoll_wait(reactor->epfd, events, MAX_EPOLL_EVENTS, 1000);
        if (nready < 0) {
            printf("epoll_wait error, exit\n");
            continue;
        }

        for (i = 0; i < nready; i++) {
            struct ntyevent *ev = (struct ntyevent *) events[i].data.ptr;
            ev->callback(ev->fd, events[i].events, ev->arg);
        }
    }
}
A follow-up article will cover testing with a million connections.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/epoll.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>
#include <time.h>

#define BUFFER_LENGTH    4096
#define MAX_EPOLL_EVENTS 1024
#define SERVER_PORT      8082

typedef int (*NtyCallBack)(int, int, void *);

struct ntyevent {
    int fd;
    int events;
    void *arg;
    NtyCallBack callback;
    int status;                 // 1: already in epoll (MOD), 0: not added yet
    char buffer[BUFFER_LENGTH];
    int length;
    long last_active;
};

struct ntyreactor {
    int epfd;
    struct ntyevent *events;
};

int recv_cb(int fd, int events, void *arg);
int send_cb(int fd, int events, void *arg);
int accept_cb(int fd, int events, void *arg);

void nty_event_set(struct ntyevent *ev, int fd, NtyCallBack callback, void *arg) {
    ev->fd = fd;
    ev->callback = callback;
    ev->events = 0;
    ev->arg = arg;
    ev->last_active = time(NULL);
}

int nty_event_add(int epfd, int events, struct ntyevent *ntyev) {
    struct epoll_event ev = {0, {0}};
    ev.data.ptr = ntyev;
    ev.events = ntyev->events = events;

    int op;
    if (ntyev->status == 1) {
        op = EPOLL_CTL_MOD;
    } else {
        op = EPOLL_CTL_ADD;
        ntyev->status = 1;
    }

    if (epoll_ctl(epfd, op, ntyev->fd, &ev) < 0) {
        printf("event add failed [fd=%d], events[%d], err:%s, err:%d\n",
               ntyev->fd, events, strerror(errno), errno);
        return -1;
    }
    return 0;
}

int nty_event_del(int epfd, struct ntyevent *ev) {
    struct epoll_event ep_ev = {0, {0}};
    if (ev->status != 1) {
        return -1;
    }
    ep_ev.data.ptr = ev;
    ev->status = 0;
    epoll_ctl(epfd, EPOLL_CTL_DEL, ev->fd, &ep_ev);
    //epoll_ctl(epfd, EPOLL_CTL_DEL, ev->fd, NULL);
    return 0;
}

int recv_cb(int fd, int events, void *arg) {
    struct ntyreactor *reactor = (struct ntyreactor *) arg;
    struct ntyevent *ntyev = &reactor->events[fd];

    int len = recv(fd, ntyev->buffer, BUFFER_LENGTH - 1, 0); // leave room for the terminating '\0'
    nty_event_del(reactor->epfd, ntyev);

    if (len > 0) {
        ntyev->length = len;
        ntyev->buffer[len] = '\0';
        printf("C[%d]:%s\n", fd, ntyev->buffer);

        nty_event_set(ntyev, fd, send_cb, reactor);
        nty_event_add(reactor->epfd, EPOLLOUT, ntyev);
    } else if (len == 0) {
        close(ntyev->fd);
        printf("[fd=%d] pos[%ld], closed\n", fd, ntyev - reactor->events);
    } else {
        close(ntyev->fd);
        printf("recv[fd=%d] error[%d]:%s\n", fd, errno, strerror(errno));
    }
    return len;
}

int send_cb(int fd, int events, void *arg) {
    struct ntyreactor *reactor = (struct ntyreactor *) arg;
    struct ntyevent *ntyev = &reactor->events[fd];

    int len = send(fd, ntyev->buffer, ntyev->length, 0);
    if (len > 0) {
        printf("send[fd=%d], [%d]%s\n", fd, len, ntyev->buffer);

        nty_event_del(reactor->epfd, ntyev);
        nty_event_set(ntyev, fd, recv_cb, reactor);
        nty_event_add(reactor->epfd, EPOLLIN, ntyev);
    } else {
        close(ntyev->fd);
        nty_event_del(reactor->epfd, ntyev);
        printf("send[fd=%d] error %s\n", fd, strerror(errno));
    }
    return len;
}

int accept_cb(int fd, int events, void *arg) {
    struct ntyreactor *reactor = (struct ntyreactor *) arg;
    if (reactor == NULL) return -1;

    struct sockaddr_in client_addr;
    socklen_t len = sizeof(client_addr);

    int clientfd;
    if ((clientfd = accept(fd, (struct sockaddr *) &client_addr, &len)) == -1) {
        printf("accept: %s\n", strerror(errno));
        return -1;
    }
    printf("client fd = %d\n", clientfd);

    if ((fcntl(clientfd, F_SETFL, O_NONBLOCK)) < 0) {
        printf("%s: fcntl nonblocking failed, %d\n", __func__, MAX_EPOLL_EVENTS);
        return -1;
    }

    nty_event_set(&reactor->events[clientfd], clientfd, recv_cb, reactor);
    nty_event_add(reactor->epfd, EPOLLIN, &reactor->events[clientfd]);

    printf("new connect [%s:%d][time:%ld], pos[%d]\n",
           inet_ntoa(client_addr.sin_addr), ntohs(client_addr.sin_port),
           reactor->events[clientfd].last_active, clientfd);
    return 0;
}

int init_sock(short port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in server_addr;
    memset(&server_addr, 0, sizeof(server_addr));
    server_addr.sin_family = AF_INET;
    server_addr.sin_addr.s_addr = htonl(INADDR_ANY);
    server_addr.sin_port = htons(port);

    bind(fd, (struct sockaddr *) &server_addr, sizeof(server_addr));
    if (listen(fd, 20) < 0) {
        printf("listen failed : %s\n", strerror(errno));
    }
    return fd;
}

int ntyreactor_init(struct ntyreactor *reactor) {
    if (reactor == NULL) return -1;
    memset(reactor, 0, sizeof(struct ntyreactor));

    reactor->epfd = epoll_create(1);
    if (reactor->epfd <= 0) {
        printf("create epfd in %s err %s\n", __func__, strerror(errno));
        return -2;
    }

    reactor->events = (struct ntyevent *) malloc((MAX_EPOLL_EVENTS) * sizeof(struct ntyevent));
    if (reactor->events == NULL) {
        printf("create epoll events in %s err %s\n", __func__, strerror(errno));
        close(reactor->epfd);
        return -3;
    }
    memset(reactor->events, 0, (MAX_EPOLL_EVENTS) * sizeof(struct ntyevent));
    return 0;
}

int ntyreactor_destory(struct ntyreactor *reactor) {
    close(reactor->epfd);
    free(reactor->events);
    return 0;
}

int ntyreactor_addlistener(struct ntyreactor *reactor, int sockfd, NtyCallBack acceptor) {
    if (reactor == NULL) return -1;
    if (reactor->events == NULL) return -1;

    nty_event_set(&reactor->events[sockfd], sockfd, acceptor, reactor);
    nty_event_add(reactor->epfd, EPOLLIN, &reactor->events[sockfd]);
    return 0;
}

int ntyreactor_run(struct ntyreactor *reactor) {
    if (reactor == NULL) return -1;
    if (reactor->epfd < 0) return -1;
    if (reactor->events == NULL) return -1;

    struct epoll_event events[MAX_EPOLL_EVENTS];

    int checkpos = 0, i;
    while (1) {
        // heartbeat: close connections that have been idle for 60 seconds or more
        long now = time(NULL);
        for (i = 0; i < 100; i++, checkpos++) {
            if (checkpos == MAX_EPOLL_EVENTS) {
                checkpos = 0;
            }
            if (reactor->events[checkpos].status != 1 || checkpos == 3) { // skip the listen fd (fd 3 in this demo)
                continue;
            }
            long duration = now - reactor->events[checkpos].last_active;
            if (duration >= 60) {
                close(reactor->events[checkpos].fd);
                printf("[fd=%d] timeout\n", reactor->events[checkpos].fd);
                nty_event_del(reactor->epfd, &reactor->events[checkpos]);
            }
        }

        int nready = epoll_wait(reactor->epfd, events, MAX_EPOLL_EVENTS, 1000);
        if (nready < 0) {
            printf("epoll_wait error, exit\n");
            continue;
        }

        for (i = 0; i < nready; i++) {
            struct ntyevent *ev = (struct ntyevent *) events[i].data.ptr;
            ev->callback(ev->fd, events[i].events, ev->arg);
        }
    }
}

int main(int argc, char *argv[]) {
    int sockfd = init_sock(SERVER_PORT);

    struct ntyreactor *reactor = (struct ntyreactor *) malloc(sizeof(struct ntyreactor));
    if (ntyreactor_init(reactor) != 0) {
        return -1;
    }

    ntyreactor_addlistener(reactor, sockfd, accept_cb);
    ntyreactor_run(reactor);

    ntyreactor_destory(reactor);
    close(sockfd);
    return 0;
}
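One possible way to try the demo (assuming gcc and nc are available): compile it with gcc reactor.c -o reactor, start it, and connect with nc 127.0.0.1 8082 (SERVER_PORT above). Each line sent is echoed back once, and a connection that stays idle for 60 seconds is closed by the heartbeat check in ntyreactor_run.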
The reactor pattern is one of the essential techniques for writing high-performance network servers; its advantages and trade-offs are as follows.
In development efficiency the reactor model beats using IO multiplexing directly. It is usually single-threaded, and its design goal is to let that single thread use the full resources of one CPU. A side benefit is that event handlers often do not have to worry about mutual exclusion on shared resources. The drawback is just as clear: hardware no longer follows Moore's law, CPU clock speed is limited by the physics of the material and no longer rises much, and capability instead comes from more cores; when a program needs to use multi-core resources, the single-threaded reactor model comes up short.
If the business logic is very simple, for example it only accesses services that already support concurrent access, you can simply start several reactors, one per CPU core, with the requests running on different reactors completely independent of one another; that makes full use of multiple cores. Nginx, serving static HTTP content, works this way.
Single reactor, single thread: all IO operations (read, write, accepting connections) are completed in the same thread.
Drawbacks: a single thread cannot exploit more than one CPU core, as noted above, and any long-running handler stalls every other connection on the same loop.
Single reactor, multiple threads: compared with the single-reactor single-thread model, once a request is received the computation is not done in the reactor thread but handed to a thread pool, which makes full use of a multi-core CPU.
With this model several threads may be computing multiple requests from the same connection at the same time, and the order in which the results come out is not deterministic, so the network framework has to include an id in the protocol so the client can tell which request a response belongs to.
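A sketch of what such an id-carrying protocol header might look like (this struct is purely illustrative and not part of the demo code); the IO thread copies the id of each request into the corresponding response, so the client can match replies even when they come back out of order:

#include <stdint.h>

// illustrative wire header for the single-reactor + thread-pool model
struct msg_header {
    uint32_t id;     // request id chosen by the client, echoed back in the response
    uint32_t length; // number of payload bytes that follow
};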
Multiple reactors: the characteristic of this model is one event loop per thread. A main reactor is responsible for accepting connections; each new connection is then attached to some sub reactor, and from then on every operation on that connection is performed in the thread that runs that sub reactor.
Connections are thus spread across several threads, making full use of the CPU. In practice the number of reactors can be fixed, for example equal to the number of CPU cores.
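A minimal sketch of the hand-off, assuming a fixed pool of sub reactors and simple round-robin assignment (all names here are illustrative; how the fd is then registered from the sub reactor's own thread is discussed in the notes further down):

#include <pthread.h>

// The main reactor only accepts; every accepted fd is assigned to one of a
// fixed number of sub reactors, and all later IO on that fd happens in the
// thread that runs the chosen sub reactor's epoll_wait loop.
#define SUB_REACTOR_COUNT 4 // e.g. one per CPU core

struct sub_reactor {
    int epfd;         // each sub reactor owns its own epoll fd
    pthread_t thread; // and runs its own event loop
};

static struct sub_reactor subs[SUB_REACTOR_COUNT];
static int next_sub = 0;

// called from the main reactor's accept handler
static struct sub_reactor *pick_sub_reactor(void) {
    struct sub_reactor *s = &subs[next_sub];
    next_sub = (next_sub + 1) % SUB_REACTOR_COUNT;
    return s;
}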
Compared with the single-reactor multi-thread model, this saves the two context switches into and out of the thread pool; small computations can be finished in the current IO thread and the result returned directly, which lowers response latency.
It also effectively prevents a single reactor from saturating when the IO load becomes too heavy.
The last model is a mix of the two above: it uses multiple reactors to handle IO and a thread pool to handle computation. It suits applications that have both bursts of IO (spread across the multiple reactors) and bursts of computation (the thread pool lets the computation for one connection be split across several threads).
Note:
When implementing any of the four reactor patterns above, the principle to follow, for simplicity's sake, is: each file descriptor is operated on by only one thread.
This effortlessly solves the ordering of sends and receives on a connection and avoids all sorts of race conditions around closing the file descriptor. One thread may operate on several file descriptors, but a thread must never touch a file descriptor owned by another thread.
That is not hard to achieve, and epoll should be used under the same principle. The Linux documentation does not say what happens when one thread is blocked in epoll_wait while another thread adds a new fd to watch to the same epoll fd.
Will events on the new fd be returned by the epoll_wait call already in progress? To stay on the safe side, all operations on a given epoll fd (add, delete, modify and so on) should be performed in the same thread.
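One common way to respect this rule is sketched below, assuming the loop has an eventfd registered in its epoll set (the names are illustrative, not part of the demo): other threads only enqueue the fd and wake the owner, and the owning thread performs the epoll_ctl itself once it comes out of epoll_wait.

#include <pthread.h>
#include <stdint.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

#define PENDING_MAX 128

struct loop_ctx {
    int epfd;
    int wakeup_fd;            // eventfd(0, EFD_NONBLOCK), registered in epfd with EPOLLIN
    int pending[PENDING_MAX]; // fds waiting to be added by the owner thread
    int pending_cnt;
    pthread_mutex_t lock;
};

// may be called from any thread
void loop_add_fd_async(struct loop_ctx *loop, int fd) {
    pthread_mutex_lock(&loop->lock);
    if (loop->pending_cnt < PENDING_MAX)
        loop->pending[loop->pending_cnt++] = fd;
    pthread_mutex_unlock(&loop->lock);

    uint64_t one = 1;
    (void) write(loop->wakeup_fd, &one, sizeof(one)); // wake the owning loop
}

// called only from the loop's own thread, after wakeup_fd becomes readable
void loop_drain_pending(struct loop_ctx *loop) {
    uint64_t cnt;
    (void) read(loop->wakeup_fd, &cnt, sizeof(cnt));

    pthread_mutex_lock(&loop->lock);
    for (int i = 0; i < loop->pending_cnt; i++) {
        struct epoll_event ev = { .events = EPOLLIN, .data.fd = loop->pending[i] };
        epoll_ctl(loop->epfd, EPOLL_CTL_ADD, loop->pending[i], &ev);
    }
    loop->pending_cnt = 0;
    pthread_mutex_unlock(&loop->lock);
}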
Since the number of fds is not known in advance, the ntyreactor here is designed to hold eventblock structures, each containing 1024 fds. A given fd is located by fd / 1024 (which eventblock it is in) and fd % 1024 (which slot inside that eventblock).
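The index arithmetic itself is tiny; as a quick sketch (EVENTS_PER_BLOCK is just an illustrative alias for the MAX_EPOLL_EVENTS constant used in the code below):

#define EVENTS_PER_BLOCK 1024 // same value as MAX_EPOLL_EVENTS below

// e.g. fd 2500 lives in block 2, slot 452
static inline int block_index(int fd)  { return fd / EVENTS_PER_BLOCK; }
static inline int block_offset(int fd) { return fd % EVENTS_PER_BLOCK; }

In the full demo this lookup is done by ntyreactor_find_event_idx, which also allocates a new eventblock on demand when the block index is beyond the current block count.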
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/epoll.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

#define BUFFER_LENGTH    4096
#define MAX_EPOLL_EVENTS 1024
#define SERVER_PORT      8081
#define PORT_COUNT       100

typedef int (*NCALLBACK)(int, int, void *);

struct ntyevent {
    int fd;
    int events;
    void *arg;
    NCALLBACK callback;
    int status;                 // 1: already in epoll (MOD), 0: not added yet
    char buffer[BUFFER_LENGTH];
    int length;
};

struct eventblock {
    struct eventblock *next;
    struct ntyevent *events;
};

struct ntyreactor {
    int epfd;
    int blkcnt;
    struct eventblock *evblk;
};

int recv_cb(int fd, int events, void *arg);
int send_cb(int fd, int events, void *arg);
struct ntyevent *ntyreactor_find_event_idx(struct ntyreactor *reactor, int sockfd);

void nty_event_set(struct ntyevent *ev, int fd, NCALLBACK callback, void *arg) {
    ev->fd = fd;
    ev->callback = callback;
    ev->events = 0;
    ev->arg = arg;
}

int nty_event_add(int epfd, int events, struct ntyevent *ev) {
    struct epoll_event ep_ev = {0, {0}};
    ep_ev.data.ptr = ev;
    ep_ev.events = ev->events = events;

    int op;
    if (ev->status == 1) {
        op = EPOLL_CTL_MOD;
    } else {
        op = EPOLL_CTL_ADD;
        ev->status = 1;
    }

    if (epoll_ctl(epfd, op, ev->fd, &ep_ev) < 0) {
        printf("event add failed [fd=%d], events[%d]\n", ev->fd, events);
        return -1;
    }
    return 0;
}

int nty_event_del(int epfd, struct ntyevent *ev) {
    struct epoll_event ep_ev = {0, {0}};
    if (ev->status != 1) {
        return -1;
    }
    ep_ev.data.ptr = ev;
    ev->status = 0;
    epoll_ctl(epfd, EPOLL_CTL_DEL, ev->fd, &ep_ev);
    return 0;
}

int recv_cb(int fd, int events, void *arg) {
    struct ntyreactor *reactor = (struct ntyreactor *) arg;
    struct ntyevent *ev = ntyreactor_find_event_idx(reactor, fd);

    int len = recv(fd, ev->buffer, BUFFER_LENGTH - 1, 0); // leave room for the terminating '\0'
    //nty_event_del(reactor->epfd, ev);

    if (len > 0) {
        ev->length = len;
        ev->buffer[len] = '\0';
        //printf("recv[%d]:%s\n", fd, ev->buffer);
        printf("recv fd=[%d]\n", fd);

        nty_event_set(ev, fd, send_cb, reactor);
        nty_event_add(reactor->epfd, EPOLLOUT, ev);
    } else if (len == 0) {
        close(ev->fd);
        //printf("[fd=%d] pos[%ld], closed\n", fd, ev - reactor->events);
    } else {
        close(ev->fd);
        //printf("recv[fd=%d] error[%d]:%s\n", fd, errno, strerror(errno));
    }
    return len;
}

int send_cb(int fd, int events, void *arg) {
    struct ntyreactor *reactor = (struct ntyreactor *) arg;
    struct ntyevent *ev = ntyreactor_find_event_idx(reactor, fd);

    int len = send(fd, ev->buffer, ev->length, 0);
    if (len > 0) {
        //printf("send[fd=%d], [%d]%s\n", fd, len, ev->buffer);
        printf("send fd=[%d]\n", fd);

        nty_event_del(reactor->epfd, ev);
        nty_event_set(ev, fd, recv_cb, reactor);
        nty_event_add(reactor->epfd, EPOLLIN, ev);
    } else {
        nty_event_del(reactor->epfd, ev);
        close(ev->fd);
        printf("send[fd=%d] error %s\n", fd, strerror(errno));
    }
    return len;
}

int accept_cb(int fd, int events, void *arg) { // listen fd is non-blocking
    struct ntyreactor *reactor = (struct ntyreactor *) arg;
    if (reactor == NULL) return -1;

    struct sockaddr_in client_addr;
    socklen_t len = sizeof(client_addr);

    int clientfd;
    if ((clientfd = accept(fd, (struct sockaddr *) &client_addr, &len)) == -1) {
        printf("accept: %s\n", strerror(errno));
        return -1;
    }

    if ((fcntl(clientfd, F_SETFL, O_NONBLOCK)) < 0) {
        printf("%s: fcntl nonblocking failed, %d\n", __func__, MAX_EPOLL_EVENTS);
        return -1;
    }

    struct ntyevent *event = ntyreactor_find_event_idx(reactor, clientfd);
    nty_event_set(event, clientfd, recv_cb, reactor);
    nty_event_add(reactor->epfd, EPOLLIN, event);

    printf("new connect [%s:%d], pos[%d]\n",
           inet_ntoa(client_addr.sin_addr), ntohs(client_addr.sin_port), clientfd);
    return 0;
}

int init_sock(short port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    fcntl(fd, F_SETFL, O_NONBLOCK);

    struct sockaddr_in server_addr;
    memset(&server_addr, 0, sizeof(server_addr));
    server_addr.sin_family = AF_INET;
    server_addr.sin_addr.s_addr = htonl(INADDR_ANY);
    server_addr.sin_port = htons(port);

    bind(fd, (struct sockaddr *) &server_addr, sizeof(server_addr));
    if (listen(fd, 20) < 0) {
        printf("listen failed : %s\n", strerror(errno));
    }
    return fd;
}

int ntyreactor_alloc(struct ntyreactor *reactor) {
    if (reactor == NULL) return -1;
    if (reactor->evblk == NULL) return -1;

    struct eventblock *blk = reactor->evblk;
    while (blk->next != NULL) {
        blk = blk->next;
    }

    struct ntyevent *evs = (struct ntyevent *) malloc((MAX_EPOLL_EVENTS) * sizeof(struct ntyevent));
    if (evs == NULL) {
        printf("ntyreactor_alloc ntyevents failed\n");
        return -2;
    }
    memset(evs, 0, (MAX_EPOLL_EVENTS) * sizeof(struct ntyevent));

    struct eventblock *block = (struct eventblock *) malloc(sizeof(struct eventblock));
    if (block == NULL) {
        printf("ntyreactor_alloc eventblock failed\n");
        return -2;
    }
    memset(block, 0, sizeof(struct eventblock));

    block->events = evs;
    block->next = NULL;

    blk->next = block;
    reactor->blkcnt++;
    return 0;
}

struct ntyevent *ntyreactor_find_event_idx(struct ntyreactor *reactor, int sockfd) {
    int blkidx = sockfd / MAX_EPOLL_EVENTS;
    while (blkidx >= reactor->blkcnt) {
        ntyreactor_alloc(reactor);
    }

    int i = 0;
    struct eventblock *blk = reactor->evblk;
    while (i++ < blkidx && blk != NULL) {
        blk = blk->next;
    }
    return &blk->events[sockfd % MAX_EPOLL_EVENTS];
}

int ntyreactor_init(struct ntyreactor *reactor) {
    if (reactor == NULL) return -1;
    memset(reactor, 0, sizeof(struct ntyreactor));

    reactor->epfd = epoll_create(1);
    if (reactor->epfd <= 0) {
        printf("create epfd in %s err %s\n", __func__, strerror(errno));
        return -2;
    }

    struct ntyevent *evs = (struct ntyevent *) malloc((MAX_EPOLL_EVENTS) * sizeof(struct ntyevent));
    if (evs == NULL) {
        printf("ntyreactor_alloc ntyevents failed\n");
        return -2;
    }
    memset(evs, 0, (MAX_EPOLL_EVENTS) * sizeof(struct ntyevent));

    struct eventblock *block = (struct eventblock *) malloc(sizeof(struct eventblock));
    if (block == NULL) {
        printf("ntyreactor_alloc eventblock failed\n");
        return -2;
    }
    memset(block, 0, sizeof(struct eventblock));

    block->events = evs;
    block->next = NULL;

    reactor->evblk = block;
    reactor->blkcnt = 1;
    return 0;
}

int ntyreactor_destory(struct ntyreactor *reactor) {
    close(reactor->epfd);

    struct eventblock *blk = reactor->evblk;
    struct eventblock *blk_next = NULL;
    while (blk != NULL) {
        blk_next = blk->next;
        free(blk->events);
        free(blk);
        blk = blk_next;
    }
    return 0;
}

int ntyreactor_addlistener(struct ntyreactor *reactor, int sockfd, NCALLBACK acceptor) {
    if (reactor == NULL) return -1;
    if (reactor->evblk == NULL) return -1;

    struct ntyevent *event = ntyreactor_find_event_idx(reactor, sockfd);
    nty_event_set(event, sockfd, acceptor, reactor);
    nty_event_add(reactor->epfd, EPOLLIN, event);
    return 0;
}

int ntyreactor_run(struct ntyreactor *reactor) {
    if (reactor == NULL) return -1;
    if (reactor->epfd < 0) return -1;
    if (reactor->evblk == NULL) return -1;

    struct epoll_event events[MAX_EPOLL_EVENTS + 1];

    int i;
    while (1) {
        int nready = epoll_wait(reactor->epfd, events, MAX_EPOLL_EVENTS, 1000);
        if (nready < 0) {
            printf("epoll_wait error, exit\n");
            continue;
        }

        for (i = 0; i < nready; i++) {
            struct ntyevent *ev = (struct ntyevent *) events[i].data.ptr;

            if ((events[i].events & EPOLLIN) && (ev->events & EPOLLIN)) {
                ev->callback(ev->fd, events[i].events, ev->arg);
            }
            if ((events[i].events & EPOLLOUT) && (ev->events & EPOLLOUT)) {
                ev->callback(ev->fd, events[i].events, ev->arg);
            }
        }
    }
}

// a connection is identified by <remote ip, remote port, local ip, local port, protocol>
int main(int argc, char *argv[]) {
    unsigned short port = SERVER_PORT; // listen on 8081 by default
    if (argc == 2) {
        port = atoi(argv[1]);
    }

    struct ntyreactor *reactor = (struct ntyreactor *) malloc(sizeof(struct ntyreactor));
    ntyreactor_init(reactor);

    int i = 0;
    int sockfds[PORT_COUNT] = {0};
    for (i = 0; i < PORT_COUNT; i++) {
        sockfds[i] = init_sock(port + i);
        ntyreactor_addlistener(reactor, sockfds[i], accept_cb);
    }

    ntyreactor_run(reactor);

    ntyreactor_destory(reactor);
    for (i = 0; i < PORT_COUNT; i++) {
        close(sockfds[i]);
    }
    free(reactor);
    return 0;
}
That concludes this detailed walkthrough of how epoll is wrapped into a reactor. For more material on the epoll reactor wrapper, follow the other related articles on it145.com!