In a dynamic situation there are often too many activities to perform: each has its own timeliness requirements (of which a deadline is the simple special case) and relative importance, the activities contend for shared resources, and the work required exceeds the capacity of the available workers (e.g., computers, trucks, first responders). Decisions must be made to triage the activities, that is, to schedule, defer, or drop them, so as to satisfy the stakeholders' criteria for situational effectiveness as well as possible. In people's daily personal and professional lives, dynamically conflicting activities are analyzed and managed informally and intuitively. But in many cases, trustworthy satisfactory operation requires explicit, credible reasoning about the activities' contention and its resolution. Depending on the situation and its timeframes, various analytical approaches can be applied, drawn from fields such as operations research, artificial intelligence, and job-shop scheduling. Aids for human decision-making under uncertainty help when many hours are available for a decision (e.g., certain logistics activities). At the other extreme, algorithms for real-time scheduling of computer tasks apply primarily in the very simple niche case where available on-line decision-making time is measured in milliseconds but everything is static and known in advance. Between these two extremes, on timescales such as seconds to minutes, there is a need for formal, or at least methodological, concepts and techniques for specifying and analyzing (by human or machine) the general case of activity time constraints (of which deadlines are a special case) and relative importance, and for satisfactorily resolving their dynamic resource overloads and conflicts. Dynamic systems are inherently non-deterministic, so analyzing and managing the activities and their contention necessarily rests on reasoning about predictability, popularly using ordinary (frequentist) probability theory.
Why should you attend:
This seminar is a brief introduction to an unconventional approach to this problem, one that has been demonstrated to be superior to traditional approaches, such as priority-based ones, under certain conditions. In the most basic view, each task has a specification of its utility as a function of its completion time, and the tasks are scheduled to maximize the utility they accrue (e.g., the sum of their utilities). The examples are drawn from the approach's use in defense applications, but it is suitable for a wide range of other contexts as well. The approach can be implemented in computer software or hardware, or exploited in human processes. More generally, it offers significant value even if it is never fully implemented, because establishing the shape and values of each task's time/utility function yields insight into the application itself. Knowing that task scheduling (including predictability) will be based on satisficing according to a utility accrual algorithm forces careful thought about trade-offs, and thus an understanding of the task set that is much deeper than can usually be gained from other scheduling approaches, such as assigning priorities.
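To make the basic idea concrete, the following is a minimal illustrative sketch, not the seminar's actual algorithm: each task carries a time/utility function (TUF) mapping completion time to utility, and a simple greedy heuristic repeatedly runs the task with the highest potential utility density, dropping tasks whose utility has decayed to zero. All task names, numbers, and the heuristic itself are assumptions for illustration only.

```python
# Illustrative sketch of utility-accrual scheduling. Task names, numbers,
# and the greedy heuristic are hypothetical; real UA schedulers are richer.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    name: str
    exec_time: float                # time units needed to run to completion
    tuf: Callable[[float], float]   # utility as a function of completion time

def step_down(deadline: float, utility: float) -> Callable[[float], float]:
    """Classic hard-deadline TUF: full utility up to the deadline, zero after."""
    return lambda t: utility if t <= deadline else 0.0

def linear_decay(start: float, end: float, utility: float) -> Callable[[float], float]:
    """Soft TUF: utility decays linearly from `start`, reaching zero at `end`."""
    def f(t: float) -> float:
        if t <= start:
            return utility
        if t >= end:
            return 0.0
        return utility * (end - t) / (end - start)
    return f

def schedule(tasks: list[Task]) -> tuple[list[str], float]:
    """Greedy heuristic: repeatedly run the task whose potential utility
    density (utility at its projected completion time, divided by its
    execution time) is highest; triage away tasks that can accrue nothing."""
    now, accrued = 0.0, 0.0
    order: list[str] = []
    remaining = list(tasks)
    while remaining:
        best = max(remaining,
                   key=lambda tk: tk.tuf(now + tk.exec_time) / tk.exec_time)
        u = best.tuf(now + best.exec_time)
        remaining.remove(best)
        if u <= 0.0:
            continue            # drop: this task can no longer contribute utility
        now += best.exec_time
        accrued += u
        order.append(best.name)
    return order, accrued

tasks = [
    Task("radar-track", 2.0, step_down(deadline=5.0, utility=10.0)),
    Task("telemetry",   1.0, linear_decay(start=1.0, end=6.0, utility=4.0)),
    Task("diagnostics", 4.0, step_down(deadline=3.0, utility=8.0)),
]
order, total = schedule(tasks)
```

In this toy instance the overload forces triage: "diagnostics" cannot meet its deadline once the higher-density tasks run, so it is dropped, which is exactly the kind of trade-off a priority assignment cannot express directly.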
Who Will Benefit:
Programmers and managers at any level of system or software development organizations
Staff and management responsible for planning and scheduling time-constrained activities in the enterprise, disaster response, etc.
E. Douglas Jensen
Founder & Principal, Time-Critical Technologies
E. Douglas Jensen is internationally recognized as one of the original pioneers, leading visionaries, and accomplished engineers of real-time and distributed real-time systems. His seminal work led to what is believed to be the world's first deployed commercial product for distributed real-time computer control systems in 1975, and shortly thereafter he made important contributions to the first commercial distributed computing product for industrial process control. For eight years he was on the faculty of the Computer Science Department and the Electrical and Computer Engineering Department at Carnegie Mellon University, where he created and directed one of the world's largest academic real-time research groups and taught graduate-level courses in both software and hardware. Before and after CMU, he held senior technology leadership positions at several major computer companies. He continues to advance both the principles and the best practices of real-time systems by contributing to both research and product development. He publishes scholarly papers in prestigious professional society journals and conferences, over 150 as of 2013. He is also active in real-time standards organizations: he was a member of the team that wrote Sun's Real-Time Specification for Java (and wrote the Foreword for the book), began the Distributed Real-Time Specification for Java, and was the co-architect and co-author of the OMG Real-Time CORBA specification.
He is the founder of Time-Critical Technologies, Inc., which provides premier consulting and related services on real-time embedded systems for corporations and government agencies worldwide. His services include architecture, engineering, design, implementation, technical and business development advice to corporate executives and boards of directors, courses, meeting organization and management, technical audits, proposal and report writing, expert witnessing, and more.