
Java review-basic3

2016-09-14

Tags:

Mutexes, ReadWriteLock, ArrayBlockingQueue, Thread pools, LinkedList vs ArrayList, Object Pooling, Read-Modify-Write, java.util.concurrent, java.util.concurrent.atomic, CAS, Collections, ADT, Java 5 Generics

1. Mutexes and Semaphore: 

A mutex provides mutual exclusion: either the producer or the consumer can hold the key (mutex) and proceed with its work. While the producer is filling the buffer, the consumer has to wait, and vice versa. At any point in time, only one thread can work with the entire buffer. The concept can be generalized with a semaphore.

Using a semaphore (splitting the resource into smaller parts): A semaphore is a generalized mutex. Instead of a single buffer, we can split the 4 KB buffer into four 1 KB buffers (identical resources). A semaphore can be associated with these four buffers, and the consumer and producer can then work on different buffers at the same time.

My understanding: make sure you can pronounce these two terms and recognize them immediately when you hear them.

Mutex: think of it as a single key; only the thread holding the key may access the shared resource, so at any moment only one thread can access it.

Semaphore: a more general form of mutex; the resource can be divided into smaller parts, and different threads can access their own independent parts concurrently.
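A minimal Java sketch of the two ideas (the class name, buffer sizes and thread roles are illustrative, not from the original text): a Semaphore with one permit behaves like a mutex, and one with four permits models the four 1 KB buffers.

import java.util.concurrent.Semaphore;

public class SemaphoreDemo {
    // A binary semaphore (one permit) behaves like a mutex: only one thread holds the key.
    static final Semaphore mutex = new Semaphore(1);

    // A counting semaphore generalizes the idea: four permits for the four 1 KB buffers,
    // so a producer and a consumer can work on different buffers at the same time.
    static final Semaphore buffers = new Semaphore(4);

    static void useOneBuffer(String who) throws InterruptedException {
        buffers.acquire();          // blocks only when all four buffers are taken
        try {
            System.out.println(who + " works on one of the 1 KB buffers");
        } finally {
            buffers.release();      // hand the buffer back
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            try { useOneBuffer("producer"); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try { useOneBuffer("consumer"); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}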

 

2. ReadWriteLock:

A java.util.concurrent.locks.ReadWriteLock is an advanced thread-locking mechanism. It allows multiple threads to read a certain resource at a time, but only one to write it. The rules by which a thread is allowed to lock the ReadWriteLock for reading or writing the guarded resource are as follows:

Read Lock

A thread may take the read lock if no thread has locked the ReadWriteLock for writing and no thread has requested a write lock (but not yet obtained it). Thus, multiple threads can hold the lock for reading.

Write Lock

A thread may take the write lock only if no threads are reading or writing. Thus, only one thread at a time can hold the lock for writing.

My understanding:

ReadWriteLock is a thread-locking mechanism that lets a resource be read by multiple threads at the same time but written by only one thread:

1) When no thread is writing and no thread has requested the write lock, multiple threads may read the resource.

2) When no thread is reading or writing, a thread may acquire the write lock.
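A small sketch of these rules using java.util.concurrent.locks.ReentrantReadWriteLock; the Counter class and its methods are my own illustration, not from the quoted text.

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Counter {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private int value;

    // Read lock: many threads may hold it at once, provided no writer holds the write lock.
    public int get() {
        lock.readLock().lock();
        try {
            return value;
        } finally {
            lock.readLock().unlock();
        }
    }

    // Write lock: only one thread may hold it, and only when there are no readers.
    public void increment() {
        lock.writeLock().lock();
        try {
            value++;
        } finally {
            lock.writeLock().unlock();
        }
    }
}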

 

3. ArrayBlockingQueue

ArrayBlockingQueue is a bounded, blocking queue that stores its elements internally in an array. Being bounded means it cannot store an unlimited number of elements; there is an upper bound on how many elements it can hold at the same time. You set this upper bound at instantiation time, and it cannot be changed afterwards. The ArrayBlockingQueue stores its elements in FIFO (First In, First Out) order: the head of the queue is the element that has been in the queue the longest, and the tail is the element that has been in the queue the shortest.

My understanding:

ArrayBlockingQueue:

It is a fixed-size FIFO queue: the element at the head is the oldest and the element at the tail is the newest.
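A short, illustrative example of the fixed capacity and FIFO order; the capacity of 2 and the string elements are arbitrary choices.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BoundedQueueDemo {
    public static void main(String[] args) throws InterruptedException {
        // Capacity is fixed at construction time and cannot be changed afterwards.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(2);

        queue.put("first");    // head: has been in the queue the longest
        queue.put("second");   // tail: newest element
        // queue.put("third") would block here until a consumer takes an element.

        System.out.println(queue.take());  // prints "first" (FIFO order)
        System.out.println(queue.take());  // prints "second"
    }
}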

 

4. Thread pool:

Most of the executor implementations in java.util.concurrent use thread pools, which consist of worker threads. This kind of thread exists separately from the Runnable and Callable tasks it executes and is often used to execute multiple tasks.

Using worker threads minimizes the overhead due to thread creation. Thread objects use a significant amount of memory, and in a large-scale application, allocating and de-allocating many thread objects creates a significant memory management overhead.

One common type of thread pool is the fixed thread pool. This type of pool always has a specified number of threads running; if a thread is somehow terminated while it is still in use, it is automatically replaced with a new thread. Tasks are submitted to the pool via an internal queue, which holds extra tasks whenever there are more active tasks than threads.

An important advantage of the fixed thread pool is that applications using it degrade gracefully. To understand this, consider a web server application where each HTTP request is handled by a separate thread. If the application simply creates a new thread for every new HTTP request, and the system receives more requests than it can handle immediately, the application will suddenly stop responding to all requests when the overhead of all those threads exceeds the capacity of the system. With a limit on the number of threads that can be created, the application will not be servicing HTTP requests as quickly as they come in, but it will be servicing them as quickly as the system can sustain.

My understanding:

A thread pool is made up of worker threads. To reduce the overhead of creating and destroying threads, a fixed-size pool is usually used, and tasks are submitted to the pool's internal queue.

For HTTP requests, each request is placed in the pool's request queue, so the system keeps a fixed number of threads running and will not crash under load.
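A minimal sketch of a fixed thread pool via Executors.newFixedThreadPool; the pool size of 4 and the ten simulated requests are arbitrary values chosen for illustration.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FixedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // Four worker threads; extra tasks wait in the pool's internal queue
        // instead of each request spawning its own thread.
        ExecutorService pool = Executors.newFixedThreadPool(4);

        for (int i = 0; i < 10; i++) {
            final int requestId = i;
            pool.submit(() -> System.out.println(
                    Thread.currentThread().getName() + " handles request " + requestId));
        }

        pool.shutdown();                            // no new tasks accepted
        pool.awaitTermination(1, TimeUnit.MINUTES); // wait for queued tasks to finish
    }
}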

 

5. LinkedList vs ArrayList:

1) Search: ArrayList search is much faster than LinkedList search. get(int index) on an ArrayList runs in O(1), while on a LinkedList it is O(n) because the list has to be traversed node by node.

2) Deletion: a LinkedList remove is O(1) once the position is known (at either end, or through an iterator), while ArrayList removal is variable: O(n) in the worst case (removing the first element, which shifts every remaining element) and O(1) in the best case (removing the last element).

3) Insertion: LinkedList add is likewise O(1) at a known position, while ArrayList is O(n) in the worst case, for the same reason as remove (elements must be shifted, and the backing array occasionally has to be grown and copied).

4) Memory overhead: an ArrayList stores only the element references in its backing array, while a LinkedList wraps every element in a node that also holds two references to its neighbours, so the LinkedList consumes comparatively more memory.

My understanding: this question is about the difference between a linked list and an array-backed list:

1. Searching is more efficient in ArrayList because elements can be accessed directly by index.

2. Insertion and deletion are more efficient in LinkedList because the other elements are not affected.

3. Growing is cheaper for LinkedList because the existing elements do not have to be copied to a new memory location.
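A small illustrative snippet of the two access patterns (indexed get versus iterator-based removal); the element values are arbitrary.

import java.util.ArrayList;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;

public class ListDemo {
    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>();
        List<Integer> linkedList = new LinkedList<>();
        for (int i = 0; i < 5; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }

        // Indexed access: O(1) on ArrayList, O(n) on LinkedList (it walks the nodes).
        int third = arrayList.get(3);

        // Removal through an iterator: O(1) per removed node on LinkedList,
        // whereas ArrayList would shift the remaining elements left each time.
        Iterator<Integer> it = linkedList.iterator();
        while (it.hasNext()) {
            if (it.next() % 2 == 0) {
                it.remove();
            }
        }
        System.out.println(third + " " + linkedList);  // prints "3 [1, 3]"
    }
}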

6. Object Pooling:

1) An object pool is a collection of instances of a particular class that an application creates and keeps on hand for situations where creating each instance is expensive. Good examples are database connections and worker threads. The pool checks instances in and out for its users, like books in a library.

2) Usually object pooling is handled by a Java EE application server. If you need to do it yourself, it is best to use something like Apache's object pool (Apache Commons Pool). Don't write one yourself; thread safety and other issues make it complicated.

My understanding:

An object pool is a collection of objects of a particular kind that are expensive to create, similar to a database connection pool. You usually should not build one yourself, because the thread-safety issues involved make it complicated.
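A minimal, hand-rolled pool sketch built on a BlockingQueue, only to illustrate the check-in/check-out idea; as the quote says, production code should prefer a library such as Apache Commons Pool. The SimplePool class and its method names are my own invention.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

public class SimplePool<T> {
    private final BlockingQueue<T> idle;

    public SimplePool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get());   // create the expensive instances up front
        }
    }

    // Check an instance out, like a library book; blocks if all instances are in use.
    public T borrow() throws InterruptedException {
        return idle.take();
    }

    // Check the instance back in so another caller can reuse it.
    public void giveBack(T object) {
        idle.add(object);
    }
}

A caller would pair every borrow() with a giveBack() in a finally block; everything beyond that (validation, eviction, resizing) is exactly the complexity a library handles for you.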

7. Read-Modify-Write:

read-modify-write is a class of atomic operations (such as test-and-set, fetch-and-add, and compare-and-swap) that both read a memory location and write a new value into it simultaneously, either with a completely new value or some function of the previous value. These operations prevent race conditions in multi-threaded applications. Typically they are used to implement mutexes or semaphores. These atomic operations are also heavily used in non-blocking synchronization.

My understanding:

Merging the read and the write into one atomic operation prevents race conditions.
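A small example of a read-modify-write primitive from java.util.concurrent.atomic: AtomicInteger.getAndIncrement() is a fetch-and-add, so concurrent updates are never lost. The thread and iteration counts are illustrative.

import java.util.concurrent.atomic.AtomicInteger;

public class FetchAndAddDemo {
    private static final AtomicInteger counter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                counter.getAndIncrement();  // fetch-and-add: read and write as one atomic step
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get());  // always 2000, no lost updates
    }
}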

 

8. CAS

CAS is generally much faster than locking, but it does depend on the degree of contention. Because CAS may force a retry if the value changes between reading and comparing, a thread can theoretically get stuck in a busy-wait if the variable in question is being hit hard by many other threads, or if it is expensive to compute a new value from the old value, or both. The main issue with CAS is that it is much more difficult to program with correctly than locking. Mind you, locking is, in turn, much harder to use correctly than message-passing or STM, so don't take this as a ringing endorsement for the use of locks.

CAS is the foundation of java.util.concurrent.

CAS involves three operands: the memory value V, the expected old value A, and the new value B. If and only if the expected value A equals the memory value V, V is set to B; otherwise nothing is done.
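A sketch of the typical CAS retry loop using AtomicInteger.compareAndSet; the doubling operation is an arbitrary example of computing B from A.

import java.util.concurrent.atomic.AtomicInteger;

public class CasLoopDemo {
    private static final AtomicInteger value = new AtomicInteger(3);

    // Double the value atomically: if another thread changes V between the read of A
    // and the compareAndSet, the CAS fails and we simply read again and retry.
    static int atomicDouble() {
        while (true) {
            int a = value.get();              // expected old value A
            int b = a * 2;                    // new value B computed from A
            if (value.compareAndSet(a, b)) {  // succeeds only if V still equals A
                return b;
            }
            // another thread won the race; loop and try again
        }
    }

    public static void main(String[] args) {
        System.out.println(atomicDouble());   // prints 6
    }
}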

 


Original post: http://www.cnblogs.com/whaochen205/p/5867053.html
