Presentation Transcript
Friday, September 22, 2017 1 National White Chocolate Day
Reminders: Homework #2 due 09/29; Project 2: Tasking due Thursday (10/05)
Today: Project 2, Steps 1 & 2; Chapter 4: Threading
Next Time: Chapter 5: Mutual Exclusion
BYU CS 345 Threads
Tip #5: Chunking 2
A primary method the brain uses to absorb more information is called 'chunking'. One chunk is a recognizable piece of information that can be fitted into one of the 'slots' in short-term memory. The actual amount of information in a chunk is quite variable - it need only be recognizable as a single packet. A chunk may contain other chunks, which allows us to assimilate large amounts of information at once (e.g. we can remember and recognize a person as a single chunk). Examples: newspaper articles, phone numbers, IP addresses, lists, etc.
Un-chunked list: Bread, Ice cream, Milk, Tomatoes, Eggs, Butter, Apples, English muffins, Frozen vegetables, Lettuce, Cream, Bananas.
Chunked list: Frozen Foods (Ice cream, Frozen vegetables); Bakery (Bread, Bagels); Dairy (Milk, Eggs, Butter, Cream); Fruits/Vegetables (Tomatoes, Apples, Lettuce, Bananas).
Chunking makes block-structured languages like C/C++ natural to use. The program is a single chunk, which is divided and subdivided into smaller and smaller chunks, each visibly and logically distinguishable. A knowledgeable reader can read in big chunks, while a less experienced reader must work in smaller chunks. Thus the author of a function can read a complex expression as a single chunk, while a new reader must break it down into variables and operators and understand these individual chunks before piecing them back together into a single understood chunk.
BYU CS 345 Threads
http://syque.com/cstyle/contents.htm
Tip #5: Chunking (continued) 3
Lay out code in chunks. The smallest effective chunk is a single token (e.g. identifier, operator, keyword). Larger chunks such as parenthesized items, single comparisons, etc. build into larger expressions and statements. Groups of statements which together perform an identifiable act are also chunks, as are individual functions and complete subsystems.
Items which are close together form a visual chunk and have an implied association. If they are separated, then they appear less associated. When an item is between two other items, it will appear more closely associated with the item to which it is positioned closest.
Vertical white space (form feeds and blank lines) can be used not only to separate chunks but also to show relationships between chunks.
By recognizing chunks and making them stand out, code can be made more readable, which means more maintainable.
BYU CS 345 Threads
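To make the layout advice concrete, here is a small illustrative C function (the function and variable names are invented for this example, not taken from the course code): each group of statements that performs one identifiable act is kept together, and a blank line plus a one-line comment separates the chunks.

    #include <stdio.h>
    #include <string.h>

    /* Count whitespace-separated words in a line, laid out in visible chunks. */
    static int count_words(const char *line)
    {
        /* Chunk 1: make a private, writable copy of the input. */
        char copy[256];
        strncpy(copy, line, sizeof(copy) - 1);
        copy[sizeof(copy) - 1] = '\0';

        /* Chunk 2: walk the copy, counting tokens. */
        int words = 0;
        for (char *tok = strtok(copy, " \t"); tok != NULL; tok = strtok(NULL, " \t"))
            words++;

        return words;
    }

    int main(void)
    {
        printf("%d\n", count_words("lay out code in chunks"));   /* prints 5 */
        return 0;
    }

An experienced reader can skim the two comments and take each chunk whole; a newer reader can work through each chunk token by token.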
Step 1: Priority Queue 4
Create a priority queue:
typedef int16_t TID; // task ID
typedef int16_t Priority; // task priority
typedef uint16_t* PQueue; // priority queue
Write queue functions to add/delete elements:
TID enQ(PQueue q, TID tid, Priority p);
TID deQ(PQueue q, TID tid); // tid >= 0: find and delete tid from q; tid = -1: return highest-priority tid
// returns tid (if found and deleted from q) or -1 (if q empty or task not found)
Queue layout: q -> (# of entries | pr1/tid1 | pr2/tid2 | ...), i.e. q[0] holds the number of entries and each following element packs a Priority/TID pair.
THOROUGHLY TEST BEFORE PROCEEDING!
BYU CS 345 Threads
Priority Queue 5
Two possible representations of the priority queue (# of entries followed by Priority/TID entries):
Struct version:
    typedef struct {
        int16_t size;
        struct {
            uint8_t tid;
            uint8_t priority;
        } pq[MAX_TASKS];
    } PQ;

    PQ* rq = (PQ*)malloc(sizeof(PQ));
    rq->size = 1;
    rq->pq[0].tid = tid;
    rq->pq[0].priority = priority;
    rq->pq[0] = rq->pq[1];    // element copy (struct syntax)
Packed-array version:
    typedef uint16_t* PQ;

    PQ rq = (PQ)malloc((MAX_TASKS+1) * sizeof(uint16_t));
    rq[0] = 1;
    rq[1] = (priority << 8) + tid;
    rq[0] = rq[1];            // element copy (array syntax)
BYU CS 345 Threads
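As an illustration only, not the official project solution, here is one hedged sketch of enQ/deQ for the packed-array representation above: q[0] holds the entry count, each following element stores (priority << 8) | tid, and entries are kept sorted so the highest priority sits at the end. It assumes the queue was allocated with room for MAX_TASKS+1 uint16_t elements, that priorities and TIDs each fit in one byte, and it does no overflow checking.

    #include <stdint.h>

    typedef int16_t   TID;        /* task ID */
    typedef int16_t   Priority;   /* task priority */
    typedef uint16_t* PQueue;     /* q[0] = count, q[1..count] = (priority<<8)|tid */

    /* Insert tid with priority p, keeping entries sorted (lowest priority first). */
    TID enQ(PQueue q, TID tid, Priority p)
    {
        int i = q[0]++;                             /* old count; bump the count */
        while (i > 0 && (Priority)(q[i] >> 8) >= p)
        {
            q[i + 1] = q[i];                        /* shift equal/higher entries up */
            i--;
        }
        q[i + 1] = (uint16_t)((p << 8) | (tid & 0xff));
        return tid;
    }

    /* tid >= 0: remove that task; tid == -1: remove the highest-priority task. */
    TID deQ(PQueue q, TID tid)
    {
        int count = q[0];
        int i = count;                              /* highest priority is the last entry */
        if (count == 0) return -1;
        if (tid >= 0)                               /* look for a specific task */
        {
            while (i > 0 && (TID)(q[i] & 0xff) != tid) i--;
            if (i == 0) return -1;                  /* not in the queue */
        }
        TID found = (TID)(q[i] & 0xff);
        for (; i < count; i++) q[i] = q[i + 1];     /* close the gap */
        q[0]--;
        return found;
    }

In the project, rq would first be allocated as shown above (MAX_TASKS+1 uint16_t elements with rq[0] = 0) and the functions thoroughly tested before moving on.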
P2 - Tasking
Step 2: Schedule w/Ready Queue 6
Create a ready priority queue:
typedef uint16_t* PQ; // priority queue
PQueue rq; // ready queue
rq = (PQueue)malloc((MAX_TASKS+1) * sizeof(uint16_t)); // init ready queue
rq[0] = 0;
Add the new task to the ready queue in createTask:
enQ(rq, tid, tcb[tid].priority); // NOTE: priority = tcb[tid].priority
Example queue contents (rq[0] = # of entries, then Priority/TID pairs): rq[0]=4, rq[1]=2/1, rq[2]=5/0, rq[3]=5/2, rq[4]=10/3.
Change scheduler() to deQ/enQ the next task:
if ((tid = deQ(rq, -1)) >= 0) { enQ(rq, tid, tcb[tid].priority); }
BYU CS 345 Threads
P2 - Tasking
2-State Scheduler 7
New tasks enter the Ready Queue when createTask() calls enQ(rq, tid, tcb[tid].priority); dispatch() moves a task from Ready to Running; swapTask() returns the running task to the ready queue; killTask() moves it to Exit.
In scheduler(): if ((tid = deQ(rq, -1)) >= 0) enQ(rq, tid, tcb[tid].priority);
BYU CS 345 Threads
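Putting steps 1 and 2 together, a possible shape for the modified scheduler is sketched below. The names rq, tcb, and dispatch() follow the slides; the exact signature and globals in the real project code may differ.

    /* Assumed project globals (from the slides):
     *   PQueue rq;              ready queue, rq[0] = # of entries
     *   TCB    tcb[MAX_TASKS];  task control blocks, with tcb[tid].priority
     */
    static int scheduler(void)
    {
        TID tid = deQ(rq, -1);                  /* highest-priority ready task, or -1 */
        if (tid < 0)
            return -1;                          /* nothing ready to run */

        enQ(rq, tid, tcb[tid].priority);        /* re-queue it so equal priorities rotate */
        return tid;                             /* dispatch() swaps this task in */
    }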
Processes and Threads 8 Chapter 4
CS 345 9
Projects (# of lectures): P1: Shell (4), P2: Tasking (6), P3: Jurassic Park (6), P4: Virtual Memory (6), P5: Scheduling (8), P6: FAT (6), Student Presentations (4).
Stallings chapters: 1: Computer System Overview; 2: Operating System Overview; 3: Process Description and Control; 4: Threads; 5: Concurrency: Mutual Exclusion and Synchronization; 6: Concurrency: Deadlock and Starvation; 7: Memory Management; 8: Virtual Memory; 9: Uniprocessor Scheduling; 10: Multiprocessor and Real-Time Scheduling; 11: I/O Management and Disk Scheduling; 12: File Management.
BYU CS 345 Threads
Chapter 4 Learning Outcomes 10 Understand the distinction between process and thread. Describe the basic design issues for threads. Explain the difference between user-level threads and kernel-level threads. Explain how threads are managed. Describe the thread management facility in Windows. Describe the thread management facility in Solaris. Describe the thread management facility in Linux. BYU CS 345 Threads
Processes
What is a Computer Process? 11
Traditionally, a process or task is considered an instance of a computer program that is being executed. A process contains: a unit of allocation (resources), a unit of execution (context), and a unit of external input (data).
Of the above, which are independent of the others? The unit of allocation (resources) is the process; the unit of execution (context) is the thread, or lightweight process.
Can a thread exist without a process (and vice versa)? Can a process have more than one context? How would switching processes compare to switching threads?
BYU CS 345 Threads
Processes
Processes (Heavyweight) 12
What resources might be owned by a process? Code, memory, heap; tables (files, signals, semaphores, buffers, I/O, ...); privileges.
What is owned by a unit of execution (context)? CPU registers (PC, SR, SP), stack, state.
How is information shared between processes? Messaging, sockets, IPC, shared memory, pipes.
How would you describe inter-process communication? Expensive: need to context switch. Secure: one process cannot corrupt another process.
BYU CS 345 Threads
Threads Threads (Lightweight) 13 What is a thread? A Thread is an independent program counter and stack operating within a process - sometimes called a lightweight process (LWP) Execution trace. Smallest unit of processing (context) that can be scheduled by an operating system What do all process threads have in common? Process resources Global variables Reduced overhead by sharing the resources of a process. Switching can happen more frequently and efficiently. How would you describe inter-thread communication? Cheap: can use process memory without needing a context switch. Not Secure: one thread can write to memory in use by another thread. BYU CS 345 Threads
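To make the 'cheap but not secure' comparison concrete, here is a small Pthreads sketch (the variable and function names are invented for the example): two threads share data simply by touching the same global variable, with no copying or context switch needed, and nothing stops their updates from colliding.

    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;                 /* shared by every thread in the process */

    static void *bump(void *arg)
    {
        for (int i = 0; i < 1000000; i++)
            counter++;                       /* unsynchronized read-modify-write: a data race */
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, bump, NULL);
        pthread_create(&t2, NULL, bump, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* "Cheap": no messages or context switches were needed to share counter.     */
        /* "Not secure": the total is usually less than 2000000 because updates collide. */
        printf("counter = %ld\n", counter);
        return 0;
    }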
Threads
Examples of Threads/Processes 14
One process, one thread: 1:1 (DOS).
One process, multiple threads: 1:N (Java).
Multiple processes, one thread per process: multi-tasking w/1:1 (UNIX).
Multiple processes, multiple threads per process: M:N hybrid threading (Windows, Solaris).
BYU CS 345 Threads
Chapter 4 Learning Outcomes 15 Understand the distinction between process and thread. Describe the basic design issues for threads. Explain the difference between user-level threads and kernel-level threads. Explain how threads are managed. Describe the thread management facility in Windows. Describe the thread management facility in Solaris. Describe the thread management facility in Linux. BYU CS 345 Threads
Multi-threading 16 BYU CS 345 Threads
Monday, September 25, 2017 17 National Lobster Day
Reminders: Homework #2 due Friday 09/29; Project 2: Tasking due Thursday (10/05)
Today: Project 2, Steps 1 & 2; Chapter 4b: Threading
Next Time: Chapter 5: Mutual Exclusion
BYU CS 345 Threads
http://syque.com/cstyle/contents.htm
Tip #7: Toothbrushes vs. Combs 18
When an item is followed by a related group of items, there are two layout possibilities. The 'toothbrush' layout has a handle which sticks out a long way, with the bristles at the end. This makes the handle very visible and easy to find and read:

    struct CHAR_TYPE { unsigned int FontName;
                       unsigned int Point; };

However, this causes tramping across the page towards the right margin. A list of such items may also result in uneven, less readable, indentation. A more common approach is to use the 'comb' layout, which has only a small handle, with the bristles in a standard position below it:

    struct CHAR_TYPE {
        unsigned int FontName;
        unsigned int Point;
    };

BYU CS 345 Threads
Step 1: Priority Queue 19
Create a priority queue:
typedef int16_t TID; // task ID
typedef int16_t Priority; // task priority
typedef uint16_t* PQueue; // priority queue
Write queue functions to add/delete elements:
TID enQ(PQueue q, TID tid, Priority p);
TID deQ(PQueue q, TID tid); // tid >= 0: find and delete tid from q; tid = -1: return highest-priority tid
// returns tid (if found and deleted from q) or -1 (if q empty or task not found)
Queue layout: q -> (# of entries | pr1/tid1 | pr2/tid2 | ...), i.e. q[0] holds the number of entries and each following element packs a Priority/TID pair.
THOROUGHLY TEST BEFORE PROCEEDING!
BYU CS 345 Threads
P2 - Tasking
Step 2: Schedule w/Ready Queue 20
Create a ready priority queue:
typedef uint16_t* PQ; // priority queue
PQueue rq; // ready queue
rq = (PQueue)malloc((MAX_TASKS+1) * sizeof(uint16_t)); // init ready queue
rq[0] = 0;
Add the new task to the ready queue in createTask:
enQ(rq, tid, tcb[tid].priority); // NOTE: priority = tcb[tid].priority
Example queue contents (rq[0] = # of entries, then Priority/TID pairs): rq[0]=4, rq[1]=2/1, rq[2]=5/0, rq[3]=5/2, rq[4]=10/3.
Change scheduler() to deQ/enQ the next task:
if ((tid = deQ(rq, -1)) >= 0) { enQ(rq, tid, tcb[tid].priority); }
BYU CS 345 Threads
Threads
Types of Threads 21
A thread consists of: a thread execution state (Running, Ready, etc.); a context (program counter, register set); an execution stack; some per-thread static storage for local variables; access to the memory and resources of its process (shared with all other threads in that process); OS resources (open files, signals, etc.).
Thus, all of the threads of a process share the state and resources of the parent process (memory space and code section). There are two types of threads: user-space (ULT) and kernel-space (KLT).
BYU CS 345 Threads
ULTs User-Level Threads 22 User-level threads avoid the kernel and are managed by the process. Often this is called "cooperative multitasking" where the task defines a set of routines that get "switched to" by manipulating the stack pointer. Typically each thread "gives-up" the CPU by calling an explicit switch, sending a signal or doing an operation that involves the switcher. A timer signal can force switching. User threads typically can switch faster than kernel threads [however, Linux kernel threads' switching is actually pretty close in performance]. BYU CS 345 Threads
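One way to see the "switched to by manipulating the stack pointer" idea is the POSIX ucontext facility (obsolescent but still widely available). This is only a toy illustration of cooperative, user-level switching, not how any particular thread library is implemented.

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, thread_ctx;
    static char thread_stack[16384];           /* the user thread's private stack */

    static void user_thread(void)
    {
        printf("user thread: running\n");
        swapcontext(&thread_ctx, &main_ctx);   /* cooperative yield back to main */
        printf("user thread: resumed\n");
    }                                          /* returning resumes uc_link */

    int main(void)
    {
        getcontext(&thread_ctx);
        thread_ctx.uc_stack.ss_sp   = thread_stack;
        thread_ctx.uc_stack.ss_size = sizeof(thread_stack);
        thread_ctx.uc_link          = &main_ctx;       /* where to go when it returns */
        makecontext(&thread_ctx, user_thread, 0);

        printf("main: switching to user thread\n");
        swapcontext(&main_ctx, &thread_ctx);   /* save main's context, run the thread */
        printf("main: thread yielded, switching back\n");
        swapcontext(&main_ctx, &thread_ctx);   /* resume it where it yielded */
        printf("main: done\n");
        return 0;
    }

Every switch here happens only because a thread explicitly asks for one, which is exactly the cooperative model described above.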
ULTs
User-Level Threads 23
Disadvantages: User-space threads have a problem that a single thread can monopolize the timeslice, thus starving the other threads within the task. Also, user-space threading has no way of taking advantage of SMPs (Symmetric MultiProcessor systems, e.g. dual-/quad-Pentiums). Lastly, when a thread becomes I/O blocked, all other threads within the task lose the timeslice as well.
Solutions/workarounds: Timeslice monopolization can be controlled with an external monitor that uses its own clock tick. Some SMPs can support user-space multithreading by firing up tasks on specified CPUs then starting the threads from there [this form of SMP threading seems tenuous, at best]. Some libraries solve the I/O blocking problem with special wrappers over system calls, or the task can be written for nonblocking I/O.
BYU CS 345 Threads
KLTs Kernel-Level Threads 24 KLTs often are implemented in the kernel using several tables (each task gets a table of threads). The kernel schedules each thread within the timeslice of each process. There is a little more overhead with mode switching from user to kernel mode because of loading of larger contexts, but initial performance measures indicate a negligible increase in time. Advantages. Since the clocktick will determine the switching times, a task is less likely to hog the timeslice from the other threads within the task. I/O blocking is not a problem. If properly coded, the process automatically can take advantage of SMPs and will run incrementally faster with each added CPU. BYU CS 345 Threads
Thread Management User-Level and Kernel-Level Threads 25 BYU CS 345 Threads
Threads What are the Benefits of Threads? 26 Threads of a process share the instructions (code) and process context (data). Far less time to create/terminate. Switching between threads is faster than switching between processes. No memory management issues (segmentation, paging). Can enhance communication efficiency. Simplify the structure of a program. Examples Foreground/Background editing while checking spelling / grammar. Keep CPU busy (asynchronous Processing) spreadsheet updates, read one set of data while processing another set. Organization For a word processing program, may allow one thread for each file being edited. BYU CS 345 Threads
Threads
Multi-threaded Process 27
[Figure: (a) task graph of a program; (b) thread structure of a task: Thread 1 spawns additional threads (Thread 2, Thread 3), with sync points where they rejoin.]
BYU CS 345 Threads
Threads
Thread Design 28
Thread states: Ready (ready to run), Running (currently executing), Blocked (waiting for resources).
Thread design considerations: Blocking - generally, it is desirable that a thread can block without blocking the remaining threads in the process. Multi-threading allows the process to start two operations at once, with each thread blocking on the appropriate event. Shared resources: synchronization, system calls, thread-safe subroutines, locks.
BYU CS 345 Threads
Reentrant vs Thread-safe 29
A reentrant function can be interrupted at any point during its execution and then safely (recursively) called again ("re-entered") before its previous invocations complete execution. A thread-safe function guarantees the function is atomic and can complete without fear of corrupted results.
While POSIX stdio.h operations are generally thread-safe, functions that share resources are generally not. You cannot guarantee that during a function (such as qsort) the output from one thread won't start halfway through the output from another thread also calling the function. Without some application-level locking, whatever is written by multiple threads could be interleaved. Use the pthread mutex (lock, unlock) functions to achieve larger-than-single-function-call atomic segments of code.
BYU CS 224 Threads
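A hedged sketch of that last suggestion (the log_pair/worker names are invented for the example): a pthread mutex turns a sequence of individually thread-safe calls into one atomic segment, so output from different threads cannot interleave inside it.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Each printf is thread-safe on its own, but without the lock another thread's
     * lines could appear between these two.  The mutex makes the pair atomic. */
    static void log_pair(const char *who, int value)
    {
        pthread_mutex_lock(&log_lock);
        printf("[%s] starting step\n", who);
        printf("[%s] result = %d\n", who, value);
        pthread_mutex_unlock(&log_lock);
    }

    static void *worker(void *arg)
    {
        log_pair((const char *)arg, 42);
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, "a");
        pthread_create(&b, NULL, worker, "b");
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        return 0;
    }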
Reentrant vs Thread-safe 30
Reentrant, not thread-safe (the global variable is saved and restored, but threads still share it):
    int t;
    void swap(int *x, int *y)
    {
        int s = t;
        t = *x; *x = *y; *y = t;
        t = s;
    }
Not reentrant, not thread-safe (global variable shared by all threads):
    int t;
    void swap(int *x, int *y)
    {
        t = *x; *x = *y; *y = t;
    }
Not reentrant, thread-safe (changing the global to a thread-local variable: local to each thread, but global within a thread):
    __thread int t;
    void swap(int *x, int *y)
    {
        t = *x; *x = *y; *y = t;
    }
Reentrant and thread-safe (changing the global to a local variable: no globals, everything on the stack):
    void swap(int *x, int *y)
    {
        int t;
        t = *x; *x = *y; *y = t;
    }
BYU CS 224 Threads
Chapter 4 Learning Outcomes 31 Understand the distinction between process and thread. Describe the basic design issues for threads. Explain the difference between user-level threads and kernel-level threads. Explain how threads are managed. Describe the thread management facility in Windows. Describe the thread management facility in Solaris. Describe the thread management facility in Linux. BYU CS 345 Threads
Threads
How do Threads Share Information? 32
Shared memory
Message passing: communication link, send(message)/receive(message), direct or indirect communication, synchronous or asynchronous communication, automatic or explicit buffering, synchronization
Ports, marshalling, RPC, sockets, pipes
BYU CS 345 Threads
RPCs
RPC 33
A remote procedure call (RPC) is an inter-process communication that allows a computer program to cause a subroutine or procedure to execute in another address space (commonly on another computer on a shared network) without the programmer explicitly coding the details for this remote interaction.
1. Client calls the client stub (stack).
2. Client stub packs (marshalls) parameters.
3. Client's OS sends message to server.
4. Server OS passes packets to server stub.
5. Server stub unpacks (unmarshalls) message.
6. Server stub calls the server procedure.
7. Reply traces in the reverse direction.
BYU CS 345 Threads
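As a toy illustration of step 2 only (the procedure id and wire format are invented, not any real RPC library's): the client stub flattens its arguments into a byte buffer, in network byte order, which the OS can then send to the server.

    #include <arpa/inet.h>   /* htonl */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Marshal the arguments of a hypothetical remote call add(a, b):
     * one procedure-id byte followed by the two operands in network byte order. */
    static size_t marshal_add(uint8_t *buf, int32_t a, int32_t b)
    {
        size_t off = 0;
        uint32_t na = htonl((uint32_t)a);
        uint32_t nb = htonl((uint32_t)b);

        buf[off++] = 1;                                 /* made-up procedure id for "add" */
        memcpy(buf + off, &na, sizeof na); off += sizeof na;
        memcpy(buf + off, &nb, sizeof nb); off += sizeof nb;
        return off;                                     /* bytes for the OS to send */
    }

    int main(void)
    {
        uint8_t msg[16];
        printf("marshalled %zu bytes\n", marshal_add(msg, 2, 3));   /* 9 bytes */
        return 0;
    }

The server stub would reverse the process (unmarshal), call the real add, and marshal the reply back.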
Thread Issues
How are Threads Managed? 34
How should threads be scheduled compared to processes? Equal to processes, or within the parent process's quantum?
How are threads implemented? Kernel support (system calls) or user-level threads?
What about mutual exclusion? Process resources are shared; data coherency.
BYU CS 345 Threads
Thread Management
Thread Management 35
Some implementations support both ULT and KLT threads and can take advantage of each for the running task. Since Linux's kernel-space threads perform nearly as well as user-space threads, the only advantage of using user threads would be cooperative multitasking.
OS system calls could each be written as a thread, or the OS could be single threaded. Advantages: speed and concurrency. Disadvantages: mutual exclusion and complexity.
BYU CS 345 Threads
Thread Management
Thread Creation (Java) 36
    final List list;    // some unsorted list of objects

    // A Thread class for sorting a List in the background
    class Sorter extends Thread {
        List l;
        public Sorter(List l) { this.l = l; }        // constructor
        public void run() { Collections.sort(l); }   // Thread body
    }

    // Create a Sorter Thread
    Thread sorter = new Sorter(list);

    // Start running the new thread (parent continues here)
    sorter.start();
    System.out.println("I'm the original thread");
BYU CS 345 Threads
Thread Problems 37
In many other multithreaded OSs, threads are not processes, merely parts of a parent task. Therefore, if a thread calls fork() or execve()'s some external program, the whole task could be replaced. The POSIX 1c standard defines a thread calling fork() to duplicate only the calling thread in the new process, and an execve() from a thread stops all threads of that process.
Having two different implementations and schedulers for processes is a flaw that has perpetuated from implementation to implementation. Some multitasking OSs have opted not to support threads due to these problems (not to mention the effort needed to make the kernel and libraries 100% reentrant). For example, Windows NT opts not to support POSIX-compliant threads (Windows NT does support threads, but they are not POSIX compliant).
BYU CS 345 Threads
Thread Problems 38
Most people have a hard enough time understanding tasks; "chopped up tasks", or threads, are difficult to envision. "What can be threaded in my app?" Deciding what to thread can be very laborious.
Another problem is locking. All the nightmares about sharing, locking, deadlock, race conditions, etc. come vividly alive in threads. Processes don't usually have to deal with this, since most shared data is passed through pipes. Threads can share file handles, pipes, variables, signals, etc. Testing and duplicating error conditions can cause more gray hair than a wayward child.
BYU CS 345 Threads
C Threads Thread Review 39 How does a thread differ from a process? Resource ownership Smallest unit of processing that can be scheduled by an operating system What are the implications of having an independent program counter? Each thread has its own stack. Code and global data belong to the process and are shared among threads. Threads own local data. Thread state is defined by processor registers and the stack. BYU CS 345 Threads
Chapter 4 Learning Outcomes 40 Understand the distinction between process and thread. Describe the basic design issues for threads. Explain the difference between user-level threads and kernel-level threads. Explain how threads are managed. Describe the thread management facility in Windows. Describe the thread management facility in Solaris. Describe the thread management facility in Linux. BYU CS 345 Threads
POSIX Thread Support 41
A thread library provides the programmer with APIs for creating and managing process threads. Historically, hardware vendors implemented their own proprietary versions of threads that differed substantially from each other, making it difficult for programmers to develop portable threaded applications. A standardized programming interface was required. For UNIX systems, this interface has been specified by the IEEE POSIX 1003.1c standard (1995). Implementations which adhere to this standard are referred to as POSIX threads, or Pthreads. Most hardware vendors now offer Pthreads in addition to their proprietary APIs.
BYU CS 345 Threads
POSIX Thread Support 42
The three main thread libraries in use today are POSIX Pthreads, Windows, and Java. For POSIX Pthreads, any data declared globally (outside of any function) is shared among all threads belonging to the same process. Each POSIX thread has its own execution context, that is, a program counter (PC), status register (SR), a register set (r4-r15), and a stack (SP). Thus local variables are not shared among process threads.
Pthreads are defined as a set of C language programming types and procedure calls. Vendors usually provide a Pthreads implementation in the form of a header/include file and a library, which you link with your program.
BYU CS 345 Threads
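A minimal, hedged example of those C types and calls (the square function is ours, not from the slides): pthread_create hands the new thread an argument, and pthread_join collects the value it passes to pthread_exit.

    #include <pthread.h>
    #include <stdio.h>

    /* Thread body: takes a pointer argument, returns a pointer result. */
    static void *square(void *arg)
    {
        static long result;             /* must outlive the thread's own stack */
        long n = *(long *)arg;
        result = n * n;
        pthread_exit(&result);          /* equivalent to "return &result;" */
    }

    int main(void)
    {
        pthread_t tid;
        long n = 7;
        void *ret;

        pthread_create(&tid, NULL, square, &n);   /* NULL = default attributes */
        pthread_join(tid, &ret);                  /* wait and collect the result */
        printf("square(%ld) = %ld\n", n, *(long *)ret);   /* square(7) = 49 */
        return 0;
    }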
POSIX pthread Management 43
int pthread_init(pthread_attr_t*); // init TCBs, start threading
int pthread_create(pthread_t*, pthread_attr_t*, void* (*func)(void*), void*); // malloc a new stack, add thread to ready queue (tcbs)
int pthread_join(pthread_t tid, void** return_value); // waits for thread to terminate
void pthread_exit(void* return_value); // free thread stack, delete thread from ready queue (tcbs)
int pthread_yield(void); // forces rescheduling of threads
BYU CS 345 Threads
POSIX Thread Support 44
Threaded applications offer potential performance gains:
Overlapping CPU work with I/O - a program may have sections performing a long I/O operation while other threads perform CPU-intensive work.
Priority/real-time scheduling: tasks which are more important can be scheduled to supersede or interrupt lower-priority tasks.
Asynchronous event handling: tasks which service events of indeterminate frequency and duration can be interleaved. For example, a web server can both transfer data from previous requests and manage the arrival of new requests.
Multi-threaded applications that work on a uniprocessor system can take advantage of a multiprocessor system without recompiling. In a single-processor environment, threads still offer better CPU utilization during I/O, program modularization, and better user response times.
BYU CS 345 Threads
Windows Thread Support 45
Windows NT opted not to support POSIX-compliant threads (Windows NT does support threads, but they are not POSIX compliant).
A process is an executing program. A thread is the basic unit allocated processor time. A job object is a manageable group of processes. A thread pool is a collection of worker threads. A fiber is a unit of execution manually scheduled by the application.
A better programming model than POSIX threads: simplicity of data types (Windows uses HANDLE), WaitForMultipleObjects functionality, and persistence of signals (POSIX signals are lost if there is no conditional expression).
BYU CS 345 Threads
Linux Thread Support 46
Linux threads are a partial implementation of POSIX threads, superseded by the Native POSIX Thread Library (NPTL). LinuxThreads has problems, mainly owing to the implementation, which used the clone system call to create a new process sharing the parent's address space. Distinct process identifiers cause problems for signal handling; the signals SIGUSR1 and SIGUSR2 are usurped for inter-thread coordination and cannot be used by programs.
Linux 1.3.56 introduced support for KLTs. There is an on-going effort to refine the kernel and make it more reentrant. With the introduction of 2.1.x, the memory space is being revised so that the kernel can access user memory more quickly.
BYU CS 345 Threads
Solaris Threads 47
Solaris threads and POSIX pthreads APIs do not match exactly, although the information content is effectively the same.
Solaris Threads (libthread) -> pthreads (libpthread):
thr_create() -> pthread_create()
thr_exit() -> pthread_exit()
thr_join() -> pthread_join()
thr_yield() -> sched_yield() (POSIX.4)
thr_self() -> pthread_self()
thr_kill() -> pthread_kill()
thr_sigsetmask() -> pthread_sigmask()
thr_setprio() -> pthread_setschedparam()
thr_getprio() -> pthread_getschedparam()
thr_setconcurrency() -> pthread_setconcurrency()
thr_getconcurrency() -> pthread_getconcurrency()
thr_suspend() -> (no pthreads equivalent)
thr_continue() -> (no pthreads equivalent)
BYU CS 345 Threads
Review Questions 48
1. What is the difference between a process and a computer program? A computer process is an instance of a program running in a computer. A computer program is a passive collection of instructions, while a process is the actual execution of those instructions.
2. What is a Thread? A thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system.
3. What are the different types of Threads? User-level threads (ULT) and kernel-level threads (KLT).
4. Why Multithreading? Improved performance through parallelism: 1) thread creation is much faster; 2) context switching is much faster; 3) threads can be terminated easily; 4) communication between threads is faster.
5. What are possible Thread States? Most all, but NOT process block.
6. How do Threads Share Information?
7. How are Threads managed?
8. How are ULTs created in C?
BYU CS 345 Threads