

Android 5.0 Source Code Study — Inter-Process Communication: An Analysis of the Linux Kernel Side

Source: 程序員人生 | Published: 2016-03-30 12:41:50 | Views: 2918

What is inter-process communication?

Generally speaking, each process has its own address space; think of the series of mappings that are set up when an executable is loaded into memory. This means that if we have two processes, A and B, data declared in process A is not visible to process B, and B cannot see events that happen in A (and vice versa). If A and B are to cooperate on a task, there must be a way to pass information and events between the two processes.

Processes differ from threads: threads within the same process can share data directly, and the programmer only has to worry about synchronizing it. Processes cannot, so an inter-process communication mechanism is required.

The design of Android's IPC mechanism

Think about where IPC is typically needed in Android: a Service communicating with other components, such as an Activity.

The designers chose the binder mechanism to implement IPC ("binder" literally means glue). In Android, three kinds of components take part in inter-process communication:

1. Server

2. Client

3. Service Manager

Let's now walk through the communication between a Service A and an Activity B, assuming they live in different processes.

First the system starts a Service Manager. This component runs as a daemon in its own process, independent of the components that want to communicate. Service A registers ("mounts") itself with the Service Manager and acts as the Server, waiting for requests, while Activity B acts as the Client. When the two need to talk, the Service Manager coordinates the interaction between them.

Following the flow introduced above, the rest of this article dissects the binder mechanism step by step.

The role of Service Manager:

As briefly mentioned, the Service Manager is a daemon; think of it as a manager. It provides the remote interface that Servers use to register ("mount") themselves, and a query interface that Clients use to look up a Server's service. We can think of one inter-process communication as one act of service — one process providing a service to another — and the Service Manager as the platform on which processes offer services to one another. Here is the Service Manager's entry point:


int main(int argc, char **argv)
{
    struct binder_state *bs;

    bs = binder_open(128*1024);
    if (!bs) {
        ALOGE("failed to open binder driver\n");
        return -1;
    }

    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    // ........................... (security-mechanism checks omitted)

    binder_loop(bs, svcmgr_handler);

    return 0;
}
Let's look at the first part:



struct binder_state *bs;
bs = binder_open(128*1024);
struct binder_state {
    int fd;         // file descriptor for /dev/binder
    void *mapped;   // start address of the mapping in the process's address space
    size_t mapsize; // size of the mapping
};
This function opens the Binder device file and fills in the binder_state structure bs. Here is binder_open:



struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs;
    struct binder_version vers;

    bs = malloc(sizeof(*bs));
    if (!bs) {
        errno = ENOMEM;
        return NULL;
    }

    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0) {
        fprintf(stderr, "binder: cannot open device (%s)\n", strerror(errno));
        goto fail_open;
    }

    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION)) {
        fprintf(stderr, "binder: driver version differs from user space\n");
        goto fail_open;
    }

    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED) {
        fprintf(stderr, "binder: cannot map device (%s)\n", strerror(errno));
        goto fail_map;
    }

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}
As we can see, binder_open mainly fills in the members of bs: the file descriptor, the start address of the mapping in the process's address space, and the size of the mapping. This structure will play a crucial role later whenever we operate on that memory.


One statement in the function deserves a closer look:


bs->fd = open("/dev/binder", O_RDWR);
Opening the device causes the driver to create a struct binder_proc data structure that stores the context of the process that opened /dev/binder.


This structure has four particularly important members — four red-black trees:

threads: the threads of this process that have issued requests through the driver
nodes: the Binder entities (binder_node) owned by this process
refs_by_desc and refs_by_node: two views of the Binder references held by this process, keyed by descriptor (handle) and by target node respectively

Here we see the first use of the file we went to such trouble to open: the driver keeps important per-process state for incoming client requests, state that will matter throughout the later communication steps. A condensed sketch of binder_proc follows.
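For reference, here is a condensed sketch of that kernel structure (field names as in the Android binder driver, drivers/staging/android/binder.c; many members are omitted and this is not a complete listing):

/* Condensed sketch of the driver-side bookkeeping created by binder_open(). */
struct binder_proc {
    struct hlist_node proc_node;
    struct rb_root threads;       /* binder_thread entries for this process's threads */
    struct rb_root nodes;         /* binder_node entries (Binder entities) it owns */
    struct rb_root refs_by_desc;  /* binder_ref entries, keyed by descriptor (handle) */
    struct rb_root refs_by_node;  /* the same references, keyed by target node */
    int pid;
    void *buffer;                 /* start of the mmap'ed transaction buffer */
    size_t buffer_size;
    struct list_head todo;        /* work queued for the process */
    wait_queue_head_t wait;
    /* ... */
};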

Back in binder_open, right after the file is opened comes this statement:


bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
This call is the heart of the binder mechanism. It maps a region of the process's address space and, at the same time, a region inside the Linux kernel, so the Client's data only has to be copied once — from the Client's user space into the kernel — after which the Server shares that data with the kernel. The whole transaction needs just one memory copy, which is what makes binder efficient and is the core of this IPC mechanism.


The mapping is normally limited to at most 4 MB; the kernel function binder_mmap enforces this, as sketched below.
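The size check lives in the driver's binder_mmap(); roughly, and paraphrased rather than quoted in full:

/* Paraphrased sketch of binder_mmap(); allocation and error handling omitted. */
static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
{
    struct binder_proc *proc = filp->private_data;

    /* The driver caps the mapping at 4 MB regardless of what user space asked for. */
    if ((vma->vm_end - vma->vm_start) > SZ_4M)
        vma->vm_end = vma->vm_start + SZ_4M;

    /* ... allocate the kernel-side buffer and arrange for the same physical
     * pages to back both the kernel mapping and this user-space mapping —
     * which is why only one copy of the transaction data is ever needed ... */
    return 0;
}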

Next we move on to:


int binder_become_context_manager(struct binder_state *bs)
{
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
The purpose of this call is to make the Service Manager the context manager — the daemon, or "manager", described earlier.


A word about ioctl():

ioctl is the function a device driver exposes for managing a device's I/O channel.

In the Android Linux kernel the call actually lands in the Binder driver's binder_ioctl function; here is a brief look at it:

At this point the ultimate purpose of the call is to turn the Service Manager into the daemon/context manager. Along the way the function uses some important structures, one of which is

struct binder_thread

It ties together the threads associated with this daemon. Among its members, transaction_stack tracks the transaction the thread is currently handling, and todo is the list of work queued for the thread; both will matter later when the driver coordinates Server and Client. A condensed sketch follows.
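A condensed sketch of the structure (field names as in the kernel binder driver; several members omitted):

struct binder_thread {
    struct binder_proc *proc;                     /* the process this thread belongs to */
    struct rb_node rb_node;                       /* links it into proc->threads */
    int pid;
    int looper;                                   /* looper state flags */
    struct binder_transaction *transaction_stack; /* transaction(s) this thread is handling */
    struct list_head todo;                        /* work queued for this thread */
    wait_queue_head_t wait;
    /* ... */
};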

Once the Service Manager has become the manager, it enters a loop; after Servers register themselves, this loop keeps waiting for Client requests. Back in main, that is the final step. (As an aside: the part elided from main above is a security-check mechanism that Android 5.0 added relative to Android 2.0, which we won't go into.) The last step calls binder_loop(); here is its implementation:


void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
A note on the binder_write_read structure:



struct binder_write_read {
    signed long   write_size;     /* bytes to write */
    signed long   write_consumed; /* bytes consumed by driver */
    unsigned long write_buffer;
    signed long   read_size;      /* bytes to read */
    signed long   read_consumed;  /* bytes consumed by driver */
    unsigned long read_buffer;
};
At the top of the function we declare a binder_write_read variable, bwr; this structure carries both directions of the I/O.


Then comes for (;;), an infinite loop — effectively waiting for requests forever. The binder_write() helper used just before the loop is sketched below.
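The binder_write() helper that registers the looper is simply a write-only use of the same ioctl; it looks roughly like this (condensed from the service manager's binder.c):

/* Write-only binder_write_read: read_size is 0, so the ioctl returns
 * as soon as the commands in data have been consumed by the driver. */
int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr, "binder_write: ioctl failed (%s)\n", strerror(errno));
    }
    return res;
}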

How does a Service become a Server and get registered with the Service Manager?

The previous section looked at how the Service Manager becomes the manager of this communication. This section looks at how a Service — the serving end of the communication — registers itself with the Service Manager and becomes a Server, using MediaPlayerService as the example.

IPC is implemented in native code, so MediaPlayerService has a corresponding native parent class, BnMediaPlayerService, which handles the inter-process communication side.

The main function that starts MediaPlayerService contains these key lines:


sp<ProcessState> proc(ProcessState::self());
sp<IServiceManager> sm = defaultServiceManager();
MediaPlayerService::instantiate();
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
The first line introduces ProcessState, which deserves an explanation. In fact BnMediaPlayerService uses IPCThreadState to receive the requests sent by Clients, and IPCThreadState in turn relies on ProcessState to interact with the Binder driver (interacting with the driver here means reading and writing the shared memory, which is exactly how this IPC works).


Let's step into ProcessState::self():


sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState;
    return gProcess;
}
The call returns gProcess, a process-wide singleton. As noted earlier, this is the object through which we will later talk to the binder driver, i.e. read and write that memory. Now look at ProcessState's constructor:



ProcessState::ProcessState()
    : mDriverFD(open_driver())
    , mVMStart(MAP_FAILED)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
#if !defined(HAVE_WIN32_IPC)
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
#else
        mDriverFD = -1;
#endif
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened.  Terminating.");
}
After reading the constructor everything falls into place: with the previous section in mind we immediately recognize open_driver() and mmap — one opens the driver file, the other sets up the mapping between process memory and kernel memory.


Here is the open_driver() implementation used here:


static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR);
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        int vers = 0;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            ALOGE("Binder driver protocol does not match user space protocol!");
            close(fd);
            fd = -1;
        }
        size_t maxThreads = 15;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening /dev/binder failed: %s\n", strerror(errno));
    }
    return fd;
}
After the open there are two ioctl() calls. As explained earlier, ioctl is the function for talking to the device: the first call checks the driver's protocol version, the second writes the maximum number of binder threads. Since MediaPlayerService is acting as a Server, this caps the pool of threads that handle incoming client requests at 15 (it limits concurrency, not the number of clients). A server is not stuck with that default — see the sketch below.
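ProcessState also exposes a setter that issues the same BINDER_SET_MAX_THREADS ioctl, so a server can change the limit before starting its thread pool. A minimal, hypothetical server main() as a sketch (the value 31 is arbitrary):

#include <binder/ProcessState.h>
#include <binder/IPCThreadState.h>

int main()
{
    // Raise the binder thread-pool limit from the default of 15.
    android::ProcessState::self()->setThreadPoolMaxThreadCount(31);

    // ... register services with the Service Manager here ...

    android::ProcessState::self()->startThreadPool();
    android::IPCThreadState::self()->joinThreadPool();
    return 0;
}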


Returning to the ProcessState constructor: the next call is mmap, whose second argument is

#define BINDER_VM_SIZE ((1*1024*1024) - (4096 *2))
With this value the process reserves BINDER_VM_SIZE = (1*1024*1024) - (4096*2) bytes — 1 MB minus two pages, i.e. 1,040,384 bytes — of its address space for binder transactions. At this point the process is equipped to exchange data through that memory.


Back in MediaPlayerService's startup main, the next step is:


sp<IServiceManager> sm = defaultServiceManager();
What comes back is essentially a BpServiceManager holding a Binder reference whose handle is 0; for our purposes, it is simply a handle on the Service Manager. A rough sketch of defaultServiceManager() follows.
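For reference, defaultServiceManager() essentially wraps handle 0 in a proxy. A condensed sketch (based on frameworks/native/libs/binder/IServiceManager.cpp; details and error handling trimmed):

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        while (gDefaultServiceManager == NULL) {
            // getContextObject(NULL) returns a BpBinder whose handle is 0,
            // i.e. a reference to the Service Manager.
            gDefaultServiceManager = interface_cast<IServiceManager>(
                    ProcessState::self()->getContextObject(NULL));
            if (gDefaultServiceManager == NULL)
                sleep(1);
        }
    }
    return gDefaultServiceManager;
}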


With that instance in hand, MediaPlayerService's startup main executes:


MediaPlayerService::instantiate();
void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.player"), new MediaPlayerService());
}
This is the "mounting" of MediaPlayerService onto the Service Manager that we keep talking about, done through addService:



virtual status_t addService(const String16& name, const sp<IBinder>& service,
                            bool allowIsolated)
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}
Throughout this registration we keep seeing Parcel being used.

Step one writes an RPC header into the Parcel, consisting of a number and a string:



data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());

Then another string is written into the Parcel:



data.writeString16(name);

The next step is the important one:



data.writeStrongBinder(service);
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}
status_t flatten_binder(const sp<ProcessState>& /*proc*/,
                        const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;

    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != NULL) {
        IBinder *local = binder->localBinder();
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            if (proxy == NULL) {
                ALOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE;
            obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
            obj.handle = handle;
            obj.cookie = 0;
        } else {
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local);
        }
    } else {
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = 0;
        obj.cookie = 0;
    }

    return finish_flatten_binder(binder, obj, out);
}
First, the flat_binder_object structure — think of it simply as a binder object:



struct flat_binder_object {
    /* 8 bytes for large_flat_header. */
    unsigned long type;
    unsigned long flags;

    /* 8 bytes of data. */
    union {
        void        *binder;  /* local object */
        signed long  handle;  /* remote object */
    };

    /* extra data associated with local object */
    void *cookie;
};
In flatten_binder the flags field is initialized first: 0x7f is the minimum scheduling priority of the thread that will handle requests aimed at this Binder entity, and FLAT_BINDER_FLAG_ACCEPTS_FDS means the entity accepts file descriptors — when it receives one, the file is opened in its own process.


Tracing the const sp<IBinder>& binder argument back up the call chain, it is a MediaPlayerService instance and therefore never NULL, so the function always takes this branch:


obj.type = BINDER_TYPE_BINDER;
obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
obj.cookie = reinterpret_cast<uintptr_t>(local);
The pointer to the Binder entity, local, is stored in flat_binder_obj's cookie member. With the three members filled in:



finish_flatten_binder(binder, obj, out)
inline static status_t finish_flatten_binder(
    const sp<IBinder>& binder, const flat_binder_object& flat, Parcel* out)
{
    return out->writeObject(flat, false);
}
finally writes the flat_binder_object we prepared — the binder object — into the Parcel.


Now we can return to addService for its last step:


status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
which ultimately ends up in:



status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }

    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            ALOGI(">>>>>> CALLING transaction 4");
        } else {
            ALOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            ALOGI("<<<<<< RETURNING transaction 4");
        } else {
            ALOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif

        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}
Let's analyze this function. Earlier, in the service's startup path, we initialized a Parcel called data and wrote some things into it. To recap:



Parcel data, reply;
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
data.writeString16(name);
data.writeStrongBinder(service);
data.writeInt32(allowIsolated ? 1 : 0);
That Parcel carries plenty of information about this MediaPlayerService; now, inside transact, we call writeTransactionData.


status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}

writeTransactionData does the obvious thing: it copies the information from the data Parcel into a binder_transaction_data tr (the exact layout of the copy isn't important here), and data is precisely the information about the MediaPlayerService instance.


At the end of writeTransactionData, the binder_transaction_data tr we just filled in is written into mOut.

Next, transact enters waitForResponse:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}


At the top of its loop it calls talkWithDriver():

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)(intptr_t)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
            << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }

    return err;
}
talkWithDriver() is where we actually interact with the binder driver — reading and writing the shared memory — so that the data we have been preparing for so long gets delivered into the process where the Service Manager lives. That data describes the MediaPlayerService instance; the whole point of all this preparation is to register MediaPlayerService with the Service Manager.


But the Service Manager and MediaPlayerService are not in the same process, so the registration itself requires inter-process communication.

Now, inside talkWithDriver():

we first put the mOut data we prepared earlier into the write half:


bwr.write_size = outAvail;
bwr.write_buffer = (uintptr_t)mOut.data();
Immediately after that:



if (doReceive && needRead) {
    bwr.read_size = mIn.dataCapacity();
    bwr.read_buffer = (uintptr_t)mIn.data();
} else {
    bwr.read_size = 0;
    bwr.read_buffer = 0;
}
On this call both doReceive (the default) and needRead (mIn is still empty) are true, so the branch taken is:



bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (uintptr_t)mIn.data();
Next comes:



ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr)
We already know this function well: it is the call that talks to the driver, with BINDER_WRITE_READ as the command and bwr as the data exchanged with the driver.


Inside binder_ioctl, let's focus on the BINDER_WRITE_READ case:


case BINDER_WRITE_READ: {
    struct binder_write_read bwr;
    if (size != sizeof(struct binder_write_read)) {
        ret = -EINVAL;
        goto err;
    }
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto err;
    }
    if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
        printk(KERN_INFO "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
               proc->pid, thread->pid, bwr.write_size, bwr.write_buffer,
               bwr.read_size, bwr.read_buffer);
    if (bwr.write_size > 0) {
        ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer,
                                  bwr.write_size, &bwr.write_consumed);
        if (ret < 0) {
            bwr.read_consumed = 0;
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto err;
        }
    }
    if (bwr.read_size > 0) {
        ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer,
                                 bwr.read_size, &bwr.read_consumed,
                                 filp->f_flags & O_NONBLOCK);
        if (!list_empty(&proc->todo))
            wake_up_interruptible(&proc->wait);
        if (ret < 0) {
            if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                ret = -EFAULT;
            goto err;
        }
    }
    if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
        printk(KERN_INFO "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
               proc->pid, thread->pid, bwr.write_consumed, bwr.write_size,
               bwr.read_consumed, bwr.read_size);
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto err;
    }
    break;
}
Since we put mOut into bwr earlier, bwr.write_size > 0 here,


so the function proceeds into:


binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
A note on the thread parameter: it is the binder_thread the driver keeps for the calling thread — here, the MediaPlayerService thread issuing the ioctl — while the Service Manager is the target process that will be woken up to handle the transaction.


The key part of binder_thread_write is:


struct binder_transaction_data tr;

if (copy_from_user(&tr, ptr, sizeof(tr)))
    return -EFAULT;
ptr += sizeof(tr);
binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
break;
The data is first copied into a binder_transaction_data tr, and then binder_transaction(proc, thread, &tr, cmd == BC_REPLY) is called.


This function does the following (see the condensed sketch after the list):

1. It creates a pending transaction t and a pending-work item tcomplete and initializes them; the work to come is processing the data — the MediaPlayerService instance we keep talking about.

2. It allocates a buffer in the Service Manager's process space to hold that data.

3. Finally, it wakes the Service Manager up to process the data.
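Very roughly, the relevant part of binder_transaction() looks like this. This is a heavily condensed pseudo-kernel-code sketch, not a faithful listing: error paths, reference counting, and the resolution of target_proc from tr->target.handle are all omitted.

/* Heavily condensed sketch of binder_transaction(); illustrative only. */
static void binder_transaction(struct binder_proc *proc, struct binder_thread *thread,
                               struct binder_transaction_data *tr, int reply)
{
    struct binder_proc *target_proc = /* resolved from tr->target.handle, here: Service Manager */;
    struct list_head *target_list = &target_proc->todo;
    wait_queue_head_t *target_wait = &target_proc->wait;
    struct binder_transaction *t;
    struct binder_work *tcomplete;

    /* 1. set up the pending transaction and the sender's "write complete" work item */
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);

    /* 2. allocate a buffer inside the *target* process and copy the sender's
     *    Parcel data into it — this is the single copy */
    t->buffer = binder_alloc_buf(target_proc, tr->data_size, tr->offsets_size, !reply);
    copy_from_user(t->buffer->data, (const void __user *)tr->data.ptr.buffer, tr->data_size);

    /* 3. queue the work and wake the target up */
    list_add_tail(&t->work.entry, target_list);      /* target's todo list */
    list_add_tail(&tcomplete->entry, &thread->todo); /* completion for the sender */
    wake_up_interruptible(target_wait);
}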

Up to now we have been on the binder_thread_write side. Once the Service Manager is woken up, it is time to read: binder_thread_read is called, and — briefly — its job is to copy the data into a buffer in the Service Manager's process space.

After this step we return to binder_loop:


void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
Here binder_parse is called to parse the data in readbuf:



int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
#if TRACE
        fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
        switch(cmd) {
        case BR_NOOP:
            break;
        case BR_TRANSACTION_COMPLETE:
            break;
        case BR_INCREFS:
        case BR_ACQUIRE:
        case BR_RELEASE:
        case BR_DECREFS:
#if TRACE
            fprintf(stderr,"  %p, %p\n", (void *)ptr, (void *)(ptr + sizeof(void *)));
#endif
            ptr += sizeof(struct binder_ptr_cookie);
            break;
        case BR_TRANSACTION: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: txn too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, txn);
                res = func(bs, txn, &msg, &reply);
                binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
            }
            ptr += sizeof(*txn);
            break;
        }
        // ........
        }
    }

    return r;
}
binder_parse also initializes the reply, and once parsing is done the data is ready for the Service Manager to use. Going all the way back to the Service Manager's startup main, the function that actually consumes the parsed data is:



int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;

    //ALOGI("target=%x code=%d pid=%d uid=%d\n",
    //      txn->target.handle, txn->code, txn->sender_pid, txn->sender_euid);

    if (txn->target.handle != svcmgr_handle)
        return -1;

    if (txn->code == PING_TRANSACTION)
        return 0;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    if (s == NULL) {
        return -1;
    }

    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s, len));
        return -1;
    }

    if (sehandle && selinux_status_updated() > 0) {
        struct selabel_handle *tmp_sehandle = selinux_android_service_context_handle();
        if (tmp_sehandle) {
            selabel_close(sehandle);
            sehandle = tmp_sehandle;
        }
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid);
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
                           allow_isolated, txn->sender_pid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        uint32_t n = bio_get_uint32(msg);

        if (!svc_can_list(txn->sender_pid)) {
            ALOGE("list_service() uid=%d - PERMISSION DENIED\n", txn->sender_euid);
            return -1;
        }
        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}
The core of it is:



do_add_service(bs, s, len, handle, txn->sender_euid,allow_isolated, txn->sender_pid)
Its implementation is simple: it writes the reference to the MediaPlayerService Binder entity — essentially its name and handle — into a struct svcinfo and inserts it at the head of the svclist linked list. Later, when a Client asks the Service Manager for a service by name, the Service Manager can hand back the corresponding handle.
Returning level by level from here, we eventually get back to MediaPlayerService::instantiate, and IServiceManager::addService is finally complete. A condensed sketch of svcinfo and do_add_service follows.
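The bookkeeping that do_add_service() performs is small; here is a condensed sketch (based on service_manager.c, with permission checks, re-registration handling, and death-notification setup trimmed out):

struct svcinfo {
    struct svcinfo *next;
    uint32_t handle;            /* handle (reference) to the service's Binder */
    struct binder_death death;
    int allow_isolated;
    size_t len;
    uint16_t name[0];           /* UTF-16 service name, e.g. "media.player" */
};

/* Condensed: the real function also checks permissions and handles re-registration. */
int do_add_service(struct binder_state *bs, const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated, pid_t spid)
{
    struct svcinfo *si;

    if (!handle || (len == 0) || (len > 127))
        return -1;

    si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
    si->handle = handle;
    si->len = len;
    memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
    si->name[len] = '\0';
    si->allow_isolated = allow_isolated;
    si->next = svclist;     /* insert at the head of the service list */
    svclist = si;

    binder_acquire(bs, handle);
    return 0;
}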


Let's review the whole flow once more:

1. We use the ProcessState class as this process's interface to the driver. Its constructor opens the driver file and creates the memory mapping; through ioctl we initialize some parameters, including the maximum number of threads handling concurrent requests, and the Binder driver reserves BINDER_VM_SIZE bytes of address space for the process.

2. The MediaPlayerService instance's data is written into a Parcel, which is then flattened into a binder object.

3. A buffer is allocated in the Service Manager's process space, and the binder data is written into it.

4. The buffered data is parsed, and the MediaPlayerService entry is linked at the head of svclist so that Clients can look it up later.

5. The buffer is freed.

That completes addService; it looks complicated, but every step matters.

With addService done, the Service is registered with the Service Manager. Back in the Service's startup main, the next calls are:

ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}
This calls spawnPooledThread:



void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        sp<Thread> t = new PoolThread(isMain);
        t->run(name.string());
    }
}
This mainly creates a thread. PoolThread inherits from Thread, and its run() ends up calling the subclass's threadLoop():



virtual bool threadLoop()
{
    IPCThreadState::self()->joinThreadPool(mIsMain);
    return false;
}
Following into joinThreadPool():
void IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n",
                   (void*)pthread_self(), getpid());

    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    // This thread may have been spawned by a thread that was in the background
    // scheduling group, so first we will make sure it is in the foreground
    // one to avoid performing an initial transaction in the background.
    set_sched_policy(mMyThreadId, SP_FOREGROUND);

    status_t result;
    do {
        processPendingDerefs();
        // now get the next command to be processed, waiting if necessary
        result = getAndExecuteCommand();

        if (result < NO_ERROR && result != TIMED_OUT
                && result != -ECONNREFUSED && result != -EBADF) {
            ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
                  mProcess->mDriverFD, result);
            abort();
        }

        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if (result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);

    LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%p\n",
                   (void*)pthread_self(), getpid(), (void*)result);

    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}
This function ends up in an endless loop, calling talkWithDriver to interact with the Binder driver — in effect waiting for Client requests — and then executeCommand (via getAndExecuteCommand) to handle them. With that, the Server's registration process is complete.


How does a Client obtain the Service Manager interface?

The previous sections used MediaPlayerService as the example server; now let's look at the client side with MediaPlayer. MediaPlayer inherits from IMediaDeathNotifier, and the first function in IMediaDeathNotifier's implementation is getMediaPlayerService() — obviously the starting point for obtaining the service interface:


/*static*/ const sp<IMediaPlayerService>&
IMediaDeathNotifier::getMediaPlayerService()
{
    ALOGV("getMediaPlayerService");
    Mutex::Autolock _l(sServiceLock);
    if (sMediaPlayerService == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.player"));
            if (binder != 0) {
                break;
            }
            ALOGW("Media player service not published, waiting...");
            usleep(500000); // 0.5 s
        } while (true);

        if (sDeathNotifier == NULL) {
            sDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(sDeathNotifier);
        sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
    }
    ALOGE_IF(sMediaPlayerService == 0, "no media player service!?");
    return sMediaPlayerService;
}


The while loop keeps calling sm->getService to obtain the Service named "media.player", i.e. MediaPlayerService. Why the seemingly endless loop? Because MediaPlayerService may not have started yet; if the binder returned is NULL, the client sleeps 0.5 s and tries again. This is the standard way to obtain a Service interface.

Let's go straight into getService:


virtual sp<IBinder> getService(const String16& name) const
{
    unsigned n;
    for (n = 0; n < 5; n++){
        sp<IBinder> svc = checkService(name);
        if (svc != NULL) return svc;
        LOGI("Waiting for service %s...\n", String8(name).string());
        sleep(1);
    }
    return NULL;
}
getService in turn calls checkService():



virtual sp<IBinder> checkService(const String16& name) const
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
    return reply.readStrongBinder();
}
Just as in the previous section, two Parcels are prepared, and the RPC header plus the name "media.player" are written into data before transact() is called; the flow from there on is essentially the same as the addService walkthrough above, so it isn't repeated here. One last detail worth showing is what happens to the IBinder the client gets back — see below.
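checkService() returns the raw IBinder (a BpBinder wrapping the handle that the Service Manager sent back); turning it into a usable IMediaPlayerService is the job of interface_cast back in getMediaPlayerService(). The template itself is tiny (as defined in IInterface.h):

// interface_cast simply defers to the interface's asInterface(), which
// (via the IMPLEMENT_META_INTERFACE macro) wraps the remote IBinder in a
// Bp<INTERFACE> proxy — here, a BpMediaPlayerService.
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}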

