Research

Optym brings decades of research experience in transportation, scheduling, operations research and computer science to its offerings for clients.

Since our founding in 2000 by a leading academician, Optym has bridged the gap between the latest research and real-world practices.

Our company’s roots lie in the world of academia, where new technologies and methods are developed every day. The Optym team includes top researchers who have made significant contributions to the fields of operations research, computer science, scheduling and transportation engineering. Optym team members are high achievers with advanced degrees and have a passion for learning. Leveraging this collective wealth of knowledge, we create products that are at the cutting edge of technology and bring the most advanced methods and theories to our software projects and professional services.

The following textbooks and papers have been written by Optym team members. These publications reflect the wide variety of technical advances they have made over the years.

TEXTBOOKS

Network Flows: Theory, Algorithms, and Applications 

Ravindra K. Ahuja, Thomas L. Magnanti, and James Orlin
Prentice Hall, 1993, 850 pages

This comprehensive text and reference book brings together the classic and contemporary aspects of the field of network optimization – providing an integrative view of theory, algorithms, and applications. The book is extensively used by researchers and practitioners worldwide and has been adopted as a textbook at hundreds of universities across the globe. It received the 1993 Lanchester Prize, given to the best publication in operations research. The prize citation stated, “The authors have been among the leaders in this area. Now they have refined and integrated the most interesting and useful of the new results into a textbook that is admirable for its clear and elegant exposition, its completeness, its wealth of examples, and its extensive collection of suggested applications. This book is a well to which both students and researchers will return for instruction and inspiration.”

Developing Spreadsheet-Based Decision Support Systems

Sandra D. Eksioglu, Michelle M. H. Seref, Ravindra K. Ahuja, and Wayne L. Winston
Dynamic Ideas, 900 pages

This book describes how to develop decision support systems using Excel and VBA for Excel and illustrates this development process through numerous case studies. The book contains three parts. The first part covers Excel essentials, and the second part describes the programming language Visual Basic for Applications (VBA) for Excel. The third part describes the applications of Excel and Excel VBA to develop case studies arising in different branches of sciences, engineering, and management. We also describe the development of spreadsheet-based case studies involving Access databases. The book contains an extensive set of review and hands-on exercises, and it provides a set of team projects that students can do to master the material. Additional details about the book are available at the website www.dssbooks.com/excel.

Developing Web-Enabled Decision Support Systems

Abhijit A. Pol and Ravindra K. Ahuja
Dynamic Ideas, 700 pages

This book explains how to build web-based decision support systems using databases. Databases are all-pervasive and critical to the efficient functioning of any organization. A web-enabled decision support system using databases requires a database management system (DBMS), a programming language, and a mechanism for web-enabling. A DBMS is used to store, manage, and process the data, while a programming language is used to build graphical user interfaces (GUIs), perform complex data processing and presentation, and incorporate external optimization engines. Several different software packages can be used to build such a system. This book explains how to build systems with Microsoft Access as the DBMS, VB .NET as the programming language, and ASP .NET scripting for web-enabling. Additional details about the book are available at the website www.dssbooks.com/web.


RESEARCH PAPERS

A Very Large-Scale Neighborhood Search Algorithm for the Combined Through-Fleet-Assignment Model

R. K. Ahuja, J. Goodstein, A. Mukherjee, J.B. Orlin, and D. Sharma. INFORMS Journal on Computing 19, 416-428, 2007.

The fleet-assignment model (FAM) for an airline assigns fleet types to the set of flight legs that satisfies a variety of constraints and minimizes the cost of the assignment. A through connection at a station is a connection between an arrival flight and a departure flight at the station, both of which have the same fleet type assigned to them, which ensures that the same plane flies both legs. Typically, passengers are willing to pay a premium for through connections. The through-assignment model (TAM) identifies a set of profitable throughs between arrival and departure flights flown by the same fleet type at each station to maximize the through benefits. TAM is usually solved after obtaining the solution from FAM. In this sequential approach, TAM cannot change the fleeting to get a better through assignment, and FAM does not take into account the through benefits. The goal of the combined through-fleet-assignment model (ctFAM) is to come up with a fleeting and through assignment that achieves the maximum combined benefit of the integrated model. We give a mixed-integer programming (MIP) formulation of ctFAM that is too large to be solved to even near optimality within allowable time for the data obtained by a major U.S. airline. We thus focus on neighborhood search algorithms for solving ctFAM, in which we start with the solution obtained by the previous sequential approach (that is, solving FAM first, followed by TAM) and improve it successively. Our approach is based on generalizing the swap-based neighborhood search approach of Talluri (1996) for FAM, which proceeds by swapping the fleet assignment of two flight paths flown by two different plane types that originate and terminate at the same stations and the same times. An important feature of our approach is that the size of our neighborhood is very large; hence the suggested algorithm is in the category of very large-scale neighborhood (VLSN) search algorithms. Another important feature of our approach is that we use integer programming to identify improved neighbors. We provide computational results that indicate that the neighborhood search approach for ctFAM provides substantial savings over the sequential approach of solving FAM and TAM.

A Neighborhood Search Algorithm for the Combined Through and Fleet Assignment Model with Time Windows

R. K. Ahuja, J. Liu, J.B. Orlin, J. Goodstein, and A. Mukherjee. Networks 44, 160-171, 2004.

The fleet assignment model (FAM) for an airline assigns fleet types to a set of flight legs that satisfies a variety of constraints and minimizes the cost of the assignment. The through assignment model matches inbound flight legs into a city with the outbound flight legs at the same city that are flown by the same plane types and creates through connections (that is, flight connections with one stopover). The combined through and fleet assignment model (ctFAM) integrates both the models into a single model and obtains optimal fleet and through assignments by varying both sets of decision variables simultaneously. In a recent article, Ahuja et al. (2001) proposed a very large-scale neighborhood (VLSN) search algorithm for the ctFAM. In this article, we generalize this approach to incorporate time windows. In the model considered in this article, each flight leg has a time window associated with its departure time. The objective is to determine fleet assignment, departure time of all flight legs, and through connections between flights to minimize the total cost of fleet assignment and through connections. We call this model ctFAM with time windows or ctFAM-TW. Allowing flexibility in the flight departure time creates greater opportunities for fleet assignment and through connections, and can reduce costs substantially. We describe the details of a VLSN search algorithm for ctFAM-TW and also present computational results of our algorithm on the data provided by a large U.S. airline.

Solving Multi-Criteria Through-Fleet Assignment Models

R. K. Ahuja, J. Liu, J. Goodstein, A. Mukherjee, J.B. Orlin, and D. Sharma. Operations Research in Space and Air, Edited by Tito A. Ciriani, Giorgio Fasano, Stefano Gliozzi, and Robert Tadei, Kluwer Academic Publishers, 233-256, 2003.

The airline industry has been a pioneer in using operations research techniques to solve complex business problems related to the schedule planning of the airline. Given a flight schedule, an airline’s schedule planning group needs to decide the itinerary of each aircraft and each crew member so that the total revenue minus the total operating costs is maximized and all the operational constraints are satisfied. The entire planning problem is too large to be solved to optimality as a single optimization problem using present-day technology; hence, these problems are solved sequentially, where the optimal solution of one problem becomes the input for the following problem. A sequential approach for solving such problems has an inherent drawback in that the solution at each stage does not take into account the considerations of subsequent stages. The ultimate goal in the optimization of schedule planning is to solve an integrated optimization problem that addresses the planning problem mentioned above as well as other downstream issues that affect the overall schedule quality. Our approach for integrating the models is to include additional objectives that take into account the downstream issues. We describe our efforts to incorporate two criteria in the Combined Through-Fleet Assignment Model in addition to the traditional cost criteria: (i) ground manpower costs, and (ii) crew costs. We use very large-scale neighborhood search techniques to determine good solutions to the multicriteria Combined Through-Fleet Assignment Model. This chapter is based on techniques developed by the authors for the single-criterion Combined Through-Fleet Assignment Model, which in turn is based on earlier work of Talluri. This chapter describes the algorithmic approaches developed and the computational results of these algorithms.

Very Large-Scale Neighborhood Search in Airline Fleet Scheduling

R. K. Ahuja and J.B. Orlin. SIAM News 35, 1-4, 2002.

VLSN search opens up many possibilities in the area of heuristic search. It can be advantageous for several reasons. Most immediately, for some problems searching a large neighborhood can be more effective than searching smaller neighborhoods. In some cases there simply is no small neighborhood that can be used for a neighborhood search. The VLSN search for airline fleet scheduling described in this article is one such case – no existing natural small neighborhood could lead to improved solutions. VLSN search adds flexibility and power to existing search techniques, especially when used in conjunction with those techniques. The VLSN search can also take advantage of subproblems that may be easier to solve. Human schedulers sometimes use computer algorithms as part of an iterative process to find improved solutions. In this case, the human is doing the scheduling, and the computer is offering suggested changes to the current solution. Neighborhood search is required to keep the solution stable from iteration to iteration, with no solution differing markedly from its predecessor. Finally, from the viewpoint of applied mathematics, VLSN search has intrinsic interest; it opens up a domain of interesting mathematical problems for which answers are needed.

Solving Linear Cost Dynamic Lot-Sizing Problems in O(n log n) Time

R. K. Ahuja and D.S. Hochbaum. Operations Research 56, 255-261, 2008.

In this paper, we study capacitated dynamic lot-sizing problems with or without backorders, under the assumption that production costs are linear; that is, there are no setup costs. These two dynamic lot-sizing problems (with or without backorders) are minimum-cost flow problems on an underlying network that possess a special structure. We show how the well-known successive shortest-path algorithm for the minimum-cost flow problem can be used to solve these problems in O(n²) time, where n is the number of periods in the dynamic lot-sizing problems, and how, with the use of dynamic trees, we can solve these problems in O(n log n) time. Our algorithm also extends to the dynamic lot-sizing problem with integer variables and convex production costs with running time O(n log n log U), where U is the largest demand value.
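
To make the network-flow view concrete, here is a minimal Python sketch of the no-backorder case under the paper's linear-cost assumption. Because the unit cost of producing in period s for demand in period t separates into a source term plus a destination term, serving demands chronologically from the cheapest remaining source is optimal; this is the simple O(n²) baseline that the paper's successive shortest-path algorithm with dynamic trees improves to O(n log n). All names and data below are illustrative.

    def lot_size(p, h, d, cap):
        """Capacitated lot-sizing, linear costs, no setups, no backorders.
        p: unit production cost, h: unit holding cost, d: demand,
        cap: production capacity; all lists of length n."""
        n = len(d)
        H = [0.0] * (n + 1)              # H[t] = cumulative holding cost up to period t
        for t in range(n):
            H[t + 1] = H[t] + h[t]
        remaining = list(cap)
        produce = [0.0] * n
        total = 0.0
        for t in range(n):               # satisfy demands chronologically
            need = d[t]
            while need > 1e-12:
                # cheapest feasible source: produce in period s <= t, hold until t
                s = min((s for s in range(t + 1) if remaining[s] > 1e-12),
                        key=lambda s: p[s] + H[t] - H[s], default=None)
                if s is None:
                    raise ValueError("infeasible: not enough capacity")
                q = min(need, remaining[s])
                remaining[s] -= q
                produce[s] += q
                total += q * (p[s] + H[t] - H[s])
                need -= q
        return produce, total

    # Tiny illustrative instance (made-up numbers).
    print(lot_size(p=[3, 5, 2], h=[1, 1, 1], d=[4, 3, 5], cap=[6, 6, 6]))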

A Heuristic Approach to the Multi-Period Single-Sourcing Problem with Production and Inventory Capacities and Perishability Constraints

R. K. Ahuja, W. Huang, H.E. Romeijn, and D.R. Morales. INFORMS Journal on Computing 19, 14-26, 2007.

The multi-period single-sourcing problem that we address in this paper can be used as a tool for evaluating logistics network designs in a dynamic environment. We consider the assignment of retailers to facilities, taking into account the timing, location, and size of production and inventories, in the presence of various types of constraints. We formulate the problem as a nonlinear assignment problem and develop efficient algorithms for solving the capacitated lot-sizing subproblems that form the objective function of this formulation. We propose a greedy heuristic and prove that this heuristic is asymptotically optimal in a probabilistic sense when retailer demands share a common seasonality pattern. In addition, we develop an efficient implementation of the very-large-scale-neighborhood-search method that can be used to improve the greedy solution. We perform extensive tests on a set of randomly generated problem instances, and conclude that our approach produces very high quality solutions in limited time.
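
The paper's heuristic prices each retailer-facility assignment by solving lot-sizing subproblems and then improves the result with VLSN search; the sketch below shows only the greedy skeleton on static, made-up costs and capacities, so every name and number is an illustrative assumption.

    def greedy_assign(demand, capacity, cost):
        """Single-source each retailer r to a facility f; demand[r],
        capacity[f], and cost[r][f] are illustrative static inputs."""
        left = list(capacity)
        assignment = {}
        # Place the largest (hardest-to-fit) demands first.
        for r in sorted(range(len(demand)), key=lambda r: -demand[r]):
            feasible = [f for f in range(len(capacity)) if left[f] >= demand[r]]
            if not feasible:
                raise ValueError("retailer %d cannot be placed" % r)
            f = min(feasible, key=lambda f: cost[r][f])
            assignment[r] = f
            left[f] -= demand[r]
        return assignment

    demand = [30, 50, 20, 40]
    capacity = [70, 80]
    cost = [[4, 6], [5, 3], [2, 7], [6, 5]]
    print(greedy_assign(demand, capacity, cost))    # {1: 1, 3: 0, 0: 0, 2: 1}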

Exact and Heuristic Algorithms for the Weapon-Target Assignment Problem

R. K. Ahuja, A. Kumar, K.C. Jha, and J.B. Orlin. Operations Research 55, 1136-1146, 2007.

The weapon-target assignment (WTA) problem is a fundamental problem arising in defense-related applications of operations research. This problem consists of optimally assigning n weapons to m targets so that the total expected survival value of the targets after all the engagements is minimal. The WTA problem can be formulated as a nonlinear integer programming problem and is known to be NP-complete. No exact methods exist for the WTA problem that can solve even small-size problems (for example, with 20 weapons and 20 targets). Although several heuristic methods have been proposed to solve the WTA problem, due to the absence of exact methods, no estimates are available on the quality of solutions produced by such heuristics. In this paper, we suggest integer programming and network flow-based lower-bounding methods that we obtain using a branch-and-bound algorithm for the WTA problem. We also propose a network flow-based construction heuristic and a very large-scale neighborhood (VLSN) search algorithm. We present computational results of our algorithms, which indicate that we can solve moderately large instances (up to 80 weapons and 80 targets) of the WTA problem optimally and obtain almost optimal solutions of fairly large instances (up to 200 weapons and 200 targets) within a few seconds.
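
The paper's branch-and-bound and VLSN methods are beyond a short sketch, but the classical maximum-marginal-return construction heuristic (in the spirit of a construction heuristic, not the paper's network-flow version) conveys the objective: each weapon goes where it most reduces the expected surviving target value. Data are made up.

    def wta_greedy(V, P):
        """V[j]: value of target j; P[i][j]: probability that weapon i
        destroys target j. Greedily minimizes the expected total
        surviving value."""
        m, n = len(P), len(V)
        surv = list(V)                       # current expected survival values
        assign = []
        for i in range(m):
            # target with the largest marginal reduction for weapon i
            j = max(range(n), key=lambda j: surv[j] * P[i][j])
            assign.append(j)
            surv[j] *= 1.0 - P[i][j]
        return assign, sum(surv)

    V = [10.0, 6.0]                          # illustrative 3-weapon, 2-target instance
    P = [[0.8, 0.5], [0.6, 0.7], [0.4, 0.9]]
    print(wta_greedy(V, P))                  # ([0, 1, 1], ~2.18)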

The Continuous-Time Single-Sourcing Problem with Capacity Expansion Opportunities

W. Huang, H.E. Romeijn, and J. Geunes. Naval Research Logistics 52, 193-211, 2005.

This paper develops a new model for allocating demand from retailers (or customers) to a set of production/storage facilities. A producer manufactures a product in multiple production facilities and faces demand from a set of retailers. The objective is to decide which of the production facilities should satisfy each retailer’s demand, in order to minimize total production, inventory holding, and assignment costs (where the latter may include, for instance, variable production costs and transportation costs). Demand occurs continuously in time at a deterministic rate at each retailer, while each production facility faces fixed-charge production costs and linear holding costs. We first consider an uncapacitated model, which we generalize to allow for production or storage capacities. We then explore situations with capacity expansion opportunities. Our solution approach employs a column generation procedure, as well as greedy and local improvement heuristic approaches. A broad class of randomly generated test problems demonstrates that these heuristics find high quality solutions for this large-scale cross-facility planning problem using a modest amount of computation time.

A Multi-Exchange Heuristic for the Single-Source Capacitated Facility Location Problem

R. K. Ahuja, J.B. Orlin, S. Pallottino, M.P. Scaparra, and M.G. Scutella. Management Science 50, 749-760, 2004.

We present a very large-scale neighborhood (VLSN) search algorithm for the capacitated facility location problem with single-source constraints. The neighborhood structures are induced by customer multi-exchanges and by facility moves. We consider both traditional single-customer multi-exchanges, detected on a suitably defined customer improvement graph, and more innovative multi-customer multi-exchanges, detected on a facility improvement graph dynamically built through the use of a greedy scheme. Computational results for some benchmark instances are reported that demonstrate the effectiveness of the approach for solving large-scale problems. A further test on real data involving an Italian factory is also presented.

A Composite Very-Large-Scale Neighborhood Search Algorithm for the Vehicle Routing Problem

R. Agarwal, R. K. Ahuja, G. Laporte, and Z.J. Shen. Handbook of Scheduling: Algorithms, Models and Performance Analysis, Edited by J. Y-T. Leung, Chapman & Hall/CRC, 49-01 to 49-23, 2003.

This chapter proposes a new heuristic, called very large scale neighborhood (VLSN) search, for the classical vehicle routing problem. The heuristic is a descent mechanism that explores the solution space by performing a search based on very large neighborhoods. The search mechanism allows moves in which several customers and several routes are involved. The best neighbor is determined through the solution of a network flow problem on an auxiliary graph. Computational tests indicate that the proposed heuristic is competitive with the best local search methods.

Incremental Network Optimization: Theory and Algorithms

O. Seref, R.K. Ahuja, and J. Orlin. Operations Research 57, 586-594, 2009.

In an incremental optimization problem, we are given a feasible solution x0 of an optimization problem P, and we want to make an incremental change in x0 that will result in the greatest improvement in the objective function. In this paper, we study the incremental optimization versions of six well-known network problems. We present a strongly polynomial algorithm for the incremental minimum spanning tree problem. We show that the incremental minimum cost flow problem and the incremental maximum flow problem can be solved in polynomial time using Lagrangian relaxation. We consider two versions of the incremental minimum shortest path problem, where increments are measured via arc inclusions and arc exclusions. We present a strongly polynomial time solution for the arc inclusion version and show that the arc exclusion version is NP-complete. We show that the incremental minimum cut problem is NP-complete and that the incremental minimum assignment problem reduces to the minimum exact matching problem, for which a randomized polynomial time algorithm is known.
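
As a flavor of the incremental setting, the sketch below performs one incremental step for the spanning-tree case: given a feasible tree, it finds the most improving single edge exchange (add one non-tree edge, drop the heaviest tree edge on the cycle it closes). It illustrates the setting, not the paper's algorithm, and all data are made up.

    from collections import defaultdict

    def best_exchange(edges, tree):
        """edges/tree: lists of (u, v, weight); tree is a feasible spanning tree."""
        w = {frozenset((u, v)): c for u, v, c in edges}
        tset = {frozenset((u, v)) for u, v, _ in tree}
        adj = defaultdict(list)
        for u, v, _ in tree:
            adj[u].append(v)
            adj[v].append(u)

        def tree_path(s, t):                 # edges of the unique s-t tree path
            stack, prev = [s], {s: None}
            while stack:
                x = stack.pop()
                if x == t:
                    break
                for y in adj[x]:
                    if y not in prev:
                        prev[y] = x
                        stack.append(y)
            path, x = [], t
            while prev[x] is not None:
                path.append((prev[x], x))
                x = prev[x]
            return path

        best = (0, None, None)               # (improvement, edge to add, edge to drop)
        for u, v, c in edges:
            if frozenset((u, v)) in tset:
                continue
            drop = max(tree_path(u, v), key=lambda e: w[frozenset(e)])
            gain = w[frozenset(drop)] - c
            if gain > best[0]:
                best = (gain, (u, v), drop)
        return best

    edges = [(0, 1, 1), (1, 2, 4), (2, 3, 1), (0, 3, 5), (0, 2, 2)]
    tree = [(0, 1, 1), (1, 2, 4), (2, 3, 1)]         # feasible, not minimal
    print(best_exchange(edges, tree))                # (2, (0, 2), (1, 2))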

OR Models in Freight Railroad Industry

A. K. Nemani, R. K. Ahuja. Wiley Encyclopedia of Operations Research and Management Science, 2011.

The transportation sector, especially the rail mode, is a very rich source of optimization problems and has been a primary focus of operations researchers. A plethora of articles dedicated to each transportation mode (rail, road, air, and water) can be found in the literature. Ironically, the industries are still using rules of thumb instead of mathematical models for most of the planning and scheduling processes. There are two main reasons for not using the operations research (OR) models: (i) the highly complex nature of the problems, and (ii) the huge gap between the academic (theoretical) and industrial (practical) approaches. With the steep growth in computational capabilities, we have started developing innovative algorithms by shifting the focus from finding optimal-but-impractical solutions to finding good-and-implementable solutions. We are developing state-of-the-art ideas combining exact approaches such as network flows and mixed integer programming, with heuristics and meta-heuristics such as the tabu search and the very large-scale neighborhood (VLSN) search. In this article, we provide an overview of the railroad planning and scheduling problems at the strategic, tactical, and operational levels. We describe the best algorithms successfully implemented at U.S. railroads, which have the potential of saving hundreds of millions of dollars annually.

Iterative Algorithms for the Curfew Planning Problem

S. Bog, A.K. Nemani, R. K. Ahuja. Journal of the Operational Research Society 62, 593-607, 2011.

The curfew planning problem is to design an annual timetable for railway track maintenance teams. Each team is capable of handling certain types of repairs and replacement jobs. The jobs are combined into a set of projects according to their locations and types. The timetable shows which project should be worked on by each team on a weekly basis throughout an entire year. Our objective is to design a schedule with minimum network disruption due to ongoing maintenance projects that require absolute curfew. Absolute curfew projects are those that cause complete closure of the rail traffic. To tackle this problem, we develop four optimization-based iterative algorithms. We also present very promising computational results obtained within a few minutes using data provided by a major North American railroad.

The Locomotive Routing Problem

B. Vaidyanathan, R. K. Ahuja, J. B. Orlin. Transportation Science 42, 492-507, 2008.

Given a schedule of trains, the locomotive planning (or scheduling) problem (LPP) is to determine the minimum cost assignment of locomotive types to trains that satisfies a number of business and operational constraints. Once this is done, the railroad has to determine the sequence of trains to which each locomotive is assigned by unit number so that it can be fueled and serviced as necessary. We refer to this problem as the locomotive routing problem (LRP). The LRP is a very large scale combinatorial optimization problem, and the general version that we consider has previously been unstudied and unsolved in the research literature. In this paper, we develop robust optimization methods to solve the LRP. There are two major sets of constraints that need to be satisfied by each locomotive route: (1) locomotive fueling constraints, which require that every unit visit a fueling station at least once for every F miles of travel, and (2) locomotive servicing constraints, which require that every unit visit a service station at least once for every S miles of travel. The output of the LPP is not directly implementable because the LPP does not consider these fueling and servicing constraints. The LRP considers these constraints, and its output is therefore implementable. We model the LRP by considering alternative fueling and servicing friendly train paths (or strings) between servicing stations on the network. We formulate the LRP as an integer programming problem on a suitably constructed space-time network and show that this problem is NP-Complete. This integer programming problem has millions of decision variables. We develop a fast aggregation-disaggregation based algorithm to solve the problem within a few minutes of computational time to near optimality. Finally, we present computational results and extensive case studies of our algorithms on real data provided by a major Class I U.S. railroad.

Real-Life Locomotive Planning: New Formulations and Computational Results

B. Vaidyanathan, R. K. Ahuja, J. Liu, and L.A. Shugart. Transportation Research 42, 147-168, 2008.

In this paper, we develop new formulations for the locomotive planning problem (LPP) which is one of the most important railroad optimization problems. The objective of the LPP is to assign a consist (a set of locomotives) to each train in a pre-planned train schedule so as to provide sufficient power to pull the trains from their origins to their respective destinations at minimal cost. This assignment plan should be repeatable every week. In an earlier paper, we developed a formulation for locomotive planning and proposed a novel two-phase solution approach using linear, integer, and network programming. However, that formulation did not incorporate all of the real-world constraints needed to generate a fully implementable solution. In this paper, we extend that approach on several dimensions by adding new constraints to the planning problem desired by locomotive directors, and by developing additional formulations necessary to transition solutions of our models to practice. We propose two formulations for this generalized LPP: consist formulation and hybrid formulation. Finally, we present detailed computational tests that demonstrate the efficacy of models and conduct case studies on a number of instances to obtain several insights.

New Approaches for Solving the Block-to-Train Assignment Problem

K.C. Jha, R. K. Ahuja, and G. Sahin. Networks 51, 48-62, 2008.

Railroad planning involves solving two optimization problems: (i) the blocking problem, which determines what blocks to make and how to route traffic over these blocks; and (ii) the train schedule design problem, which determines train origins, destinations, and routes. Once the blocking plan and train schedule have been obtained, the next step is to determine which trains should carry which blocks. This problem, known as the block-to-train assignment problem, is considered in this paper. We provide two formulations for this problem: an arc-based formulation and a path-based formulation. The latter is generally smaller than the former, and it can better handle practical constraints. We also propose exact and heuristic algorithms based on the path-based formulation. Our exact algorithm solves an integer programming formulation with CPLEX using both a priori generation and dynamic generation of paths. Our heuristic algorithms include a Lagrangian relaxation-based method as well as a greedy construction method. We present computational results of our algorithms using the data provided by a major U.S. railroad. We show that we can obtain an optimal solution of the block-to-train assignment problem within a few minutes of computational time, and can obtain heuristic solutions with 1-2% deviations from the optimal solutions within a few seconds.

Multicommodity Network Flow Approach to the Railroad Crew Scheduling Problem

B. Vaidyanathan, K.C. Jha, and R. K. Ahuja. IBM Journal of Research and Development 51, 325-344, 2007.

We present our solution to the crew-scheduling problem for North American railroads. (Crew scheduling in North America is very different from scheduling in Europe, where it has been well studied.) The crew-scheduling problem is to assign operators to scheduled trains over a time horizon at minimal cost while honoring operational and contractual requirements. Currently, decisions related to crew are made manually. We present our work developing a network-flow-based crew-optimization model that can be applied at the tactical, planning, and strategic levels of crew scheduling. Our network flow model maps the assignment of crews to trains as the flow of crews on an underlying network, where different crew types are modeled as different commodities in this network. We formulate the problem as an integer programming problem on this network, which allows it to be solved to optimality. We also develop several highly efficient algorithms using problem decomposition and relaxation techniques, in which we use the special structure of the underlying network model to obtain significant increases in speed. We present very promising computational results of our algorithms on the data provided by a major North American railroad. Our network flow model is likely to form a backbone for a decision-support system for crew scheduling.

Solving Real-Life Railroad Blocking Problems

R. K. Ahuja, K.C. Jha, and J. Liu. Interfaces 37, 404-419, 2007.

Each major U.S. railroad ships millions of cars over its network annually. To reduce the intermediate handlings of shipments as they travel over the railroad network, a set of shipments is classified (or grouped together) at a railroad yard to create a block. The railroad blocking problem is to identify this classification plan for all shipments at all yards in the network to minimize the total shipment cost, i.e., to create a blocking plan. The railroad blocking problem is a very large-scale, multicommodity, flow-network-design and routing problem with billions of decision variables. Its size and mathematical difficulty preclude solving it using any commercial software package. We developed an algorithm using an emerging technique known as very large-scale neighborhood (VLSN) search that is able to solve the problem to near optimality using one to two hours of computer time on a standard workstation computer. This algorithm can also handle a variety of practical and business constraints that are necessary for implementing a solution. When we applied this algorithm to the data that several railroads provided us, the computational results were excellent. Three Class I railroad companies (a Class I railroad, as defined by the Association of American Railroads, has an operating revenue exceeding $319.3 million) in the United States – CSX Transportation, Norfolk Southern Corporation, and Burlington Northern and Santa Fe Railway – used it in developing their operating plans. Two U.S. Class I railroads have also licensed it for regular use in developing their operating plans, and other railroads are showing considerable interest. We expect this algorithm to become an industry standard for freight railroads worldwide. In this paper, we outline our algorithm, show the computational results we received using real data, and describe areas for future research.

Optimal Network Configuration and Capacity Expansion of Railroads

J. Liu, R. K. Ahuja, and G. Sahin. Journal of the Operational Research Society 59, 911-920, 2007.

Railroads ship individual cars according to blocking plans that route the cars in groups (blocks) that share common intermediate stops. An individual shipment is regrouped (reclassified) two to three times along the way from its origin to destination. Yards are crucial facilities of the rail network where cars are reclassified according to such blocking plans. Therefore, yard locations and the blocking plan impose the detour and classification of cars over the physical network. Yards are capacitated with respect to number of blocks made and number of cars classified; rail lines between major stations are capacitated with respect to number of cars that pass through. These restrictions are accounted for in designing the blocking plans. Changing the yard locations and expanding associated capacities may result in dramatic changes in blocking plans saving tens of millions of dollars in railroad transportation costs. We develop a mathematical programming formulation and propose solution methods for the yard location problem and the capacity expansion problems. We demonstrate that the railroads can save significantly by reconfiguring their networks.

A Simulation/Optimization Framework for Locomotive Planning

A. Nahapetyan, R. Ahuja, F.Z. Sargut, A. John, and K. Somani. ATMOS Workshop, 259-276, 2007.

In this paper, we give an overview of the Locomotive Simulator/Optimizer (LSO) decision support system we developed for railroads. This software is designed to imitate locomotive movement across a rail network, and it simulates all four major components of the system (trains, locomotives, terminals, and shops) in an integrated framework. It includes about 20 charts that allow the evaluation of system performance using standard measures. LSO can be used by locomotive management to perform “what-if” analysis and evaluate system performance for different input data; it provides a safe environment for experimentation. We have tested the software on real data, and the output showed that the software closely imitates day-to-day operations. We have also performed different scenario analyses, and the reports illustrate that the software correctly reflects input data changes.

Integer Programming Based Approaches for the Train Dispatching Problem

G. Sahin, R. K. Ahuja, and C.B. Cunha. 2006.

Railroads face the challenge of competing with the trucking industry in a fast-paced environment. In this respect, they are working toward running freight trains on schedule and reducing travel times. The planned train schedules consist of departure and arrival times at main stations on the rail network. However, this plan does not consider the conflicts that occur as the trains meet along the tracks between these main stations. A detailed timetable, on the other hand, consists of the departure and arrival times of each train in each track section of its route. The train dispatching problem aims to determine detailed timetables over a rail network in order to minimize deviations from the planned schedule. In this paper, we provide a new integer programming formulation for this problem based on a space-time network; we propose heuristic algorithms to solve it and present computational results of these algorithms. Our approach includes some realistic constraints that have not been previously considered as well as all the assumptions and practical issues considered by the earlier works.

Network Models in Railroad Planning and Scheduling

R. K. Ahuja, C.B. Cunha, and G. Sahin. Tutorials in Operations Research 1, 54-101, 2005.

The past few decades have witnessed numerous applications of operations research in logistics, and these applications have resulted in substantial cost savings. However, the U.S. railroad industry has not benefited from these advances, and most of the planning and scheduling processes do not use modeling and optimization. Indeed, most of the planning and scheduling problems arising in railroads, which involve billions of dollars of resources annually, are currently being solved manually. The main reason for not using OR models and methodologies is the mathematical difficulty of these problems, which prevented the development of decision tools that railroads can use to obtain implementable solutions. However, now this situation is gradually changing. We are developing cutting-edge operations research algorithms by using state-of-the-art ideas from linear and integer programming, network flows, discrete optimization, heuristics, and very large-scale neighborhood (VLSN) search, that railroads have already started using and from which they have started deriving immense benefits. This chapter gives an overview of the railroad planning and scheduling problems, including the railroad blocking problem, train scheduling problem, yard location problem, train dispatching problem, locomotive scheduling problem, and crew scheduling problem. Some of these problems are very large-scale integer programming problems containing billions or even trillions of integer variables. We describe algorithms that can solve these problems to near-optimality within one to two hours of computational time. We present computational results of these algorithms on the data provided by several U.S. railroads, demonstrating potential benefits from tens to hundreds of millions of dollars annually.

Solving Real-Life Locomotive Scheduling Problems

R. K. Ahuja, J. Liu, J.B. Orlin, D. Sharma, and L.A. Shugart. Transportation Science 39, 503-517, 2005.

In the locomotive-scheduling problem (or the locomotive-assignment problem), we must assign a consist (a set of locomotives) to each train in a preplanned train schedule so as to provide each train with sufficient locomotive power to pull the train from its origin to its destination. Locomotive-scheduling problems are among the most important problems in railroad scheduling. In this paper, we report the results of a study on the locomotive-scheduling problem as it is faced by CSX Transportation, a major U.S. railroad company. We consider the planning version of the locomotive-scheduling model (LSM) in which multiple types of locomotives exist, and we need to decide which set of locomotives should be assigned to each train. We present an integrated model that determines the set of active and deadheaded locomotives for each train; the light-traveling locomotives from power sources to power sinks; and train-to-train connections (for which we specify which inbound trains and outbound trains can directly connect). An important feature of our model is that we explicitly consider consist bustings and consistency. A consist is said to be busted when a set of locomotives coming on an inbound train is broken into subsets to be reassigned to two or more outbound trains. A solution is consistent over a week if a train receives the same locomotive assignment each day that it runs. We will provide a mixed integer programming (MIP) formulation of the locomotive-assignment problem. However, an MIP of this size cannot be solved to optimality or near optimality in acceptable running times using commercially available software. Using problem decomposition, integer programming, and very large-scale neighborhood search, we have developed a solution technique to solve this problem within 30 minutes of computation time on a Pentium III computer. Our solution obtained a potential savings of over 400 locomotives over the solution obtained by the in-house software developed by CSX.

Very Large-Scale Neighborhood Search for the Quadratic Assignment Problem

R.K. Ahuja, K.C. Jha, J.B. Orlin, and D. Sharma. INFORMS Journal on Computing 19, 646-657, 2007.

The quadratic assignment problem (QAP) consists of assigning n facilities to n locations to minimize the total weighted cost of interactions between facilities. The QAP arises in many diverse settings, is known to be NP-hard, and can be solved to optimality only for fairly small instances (typically, n = 30). Neighborhood search algorithms are the most popular heuristic algorithms for solving larger instances of the QAP. The most extensively applied neighborhood structure for the QAP is the 2-exchange neighborhood. This neighborhood is obtained by swapping the locations of two facilities; its size is therefore O(n²). Previous efforts to explore larger neighborhoods (such as 3-exchange or 4-exchange neighborhoods) were not very successful, as it took too long to evaluate the larger set of neighbors. In this paper, we propose very large-scale neighborhood (VLSN) search algorithms where the size of the neighborhood is very large, and we propose a novel search procedure to enumerate good neighbors heuristically. Our search procedure relies on the concept of an improvement graph that allows us to evaluate neighbors much faster than existing methods. Finally, we present extensive computational results of our algorithms when applied to standard benchmark instances.
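
For contrast with the paper's very large neighborhoods, here is the plain 2-exchange local search that they generalize, as a minimal Python sketch. The objective is recomputed from scratch for clarity; in practice one evaluates a swap with an O(n) incremental formula. The instance is randomly generated for illustration.

    import itertools
    import random

    def qap_cost(F, D, p):
        """F[i][k]: flow between facilities i, k; D[a][b]: distance between
        locations a, b; p[i]: location assigned to facility i."""
        n = len(p)
        return sum(F[i][k] * D[p[i]][p[k]] for i in range(n) for k in range(n))

    def two_exchange(F, D, p):
        improved = True
        while improved:
            improved = False
            for i, k in itertools.combinations(range(len(p)), 2):
                q = list(p)
                q[i], q[k] = q[k], q[i]      # swap the locations of facilities i and k
                if qap_cost(F, D, q) < qap_cost(F, D, p):
                    p, improved = q, True    # first-improvement move
        return p

    random.seed(1)
    n = 6
    F = [[0 if i == k else random.randint(1, 9) for k in range(n)] for i in range(n)]
    D = [[abs(a - b) for b in range(n)] for a in range(n)]
    p = two_exchange(F, D, list(range(n)))
    print(p, qap_cost(F, D, p))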

A Cut-Based Algorithm for the Nonlinear Dual of the Minimum Cost Network Flow Problem

R.K. Ahuja, D.S. Hochbaum, and J.B. Orlin. Algorithmica 39, 189-208, 2004.

We consider a convex, or nonlinear, separable minimization problem with constraints that are dual to the minimum cost network flow problem. We show how to reduce this problem to a polynomial number of minimum s,t-cut problems. The solution of the reduced problem utilizes the technique for solving integer programs on monotone inequalities in three variables, and a so-called proximity-scaling technique that reduces a convex problem to its linear objective counterpart. The problem is solved in this case in a logarithmic number of calls, O(log U), to a minimum cut procedure, where U is the range of the variables. For a convex problem on n variables the minimum cut is solved on a graph with O(n²) nodes. Among the consequences of this result is a new cut-based scaling algorithm for the minimum cost network flow problem. When the objective function is an arbitrary nonlinear function, we demonstrate that this constrained problem is solved in pseudopolynomial time by applying a minimum cut procedure to a graph on O(nU) nodes.

Solving the Convex Cost Integer Dual Network Flow Problem

R.K. Ahuja, D.S. Hochbaum, and J.B. Orlin. Management Science 49, 950-964, 2003.

In this paper, we consider an integer convex optimization problem where the objective function is the sum of separable convex functions (that is, of the form Σ(i,j)∈Q Fij(wij) + Σi∈P Bi(μi)), the constraints are similar to those arising in the dual of a minimum cost flow problem (that is, of the form μi – μj ≤ wij, (i, j) ∈ Q), with lower and upper bounds on variables. Let n = |P|, m = |Q|, and U be the largest magnitude in the lower and upper bounds of variables. We call this problem the convex cost integer dual network flow problem. In this paper, we describe several applications of the convex cost integer dual network flow problem arising in a dial-a-ride transit problem, inverse spanning tree problem, project management, and regression analysis. We develop network flow-based algorithms to solve the convex cost integer dual network flow problem. We show that using the Lagrangian relaxation technique, the convex cost integer dual network flow problem can be transformed into a convex cost primal network flow problem where each cost function is a piecewise linear convex function with integer slopes. Its special structure allows the convex cost primal network flow problem to be solved in O(nm log(n²/m) log(nU)) time. This time bound is the same as the time needed to solve the minimum cost flow problem using the cost-scaling algorithm and is also the best available time bound to solve the convex cost integer dual network flow problem.

Minimum Time and Minimum Cost-Path Problems in Street Networks with Periodic Traffic Lights

R.K. Ahuja, J.B. Orlin, S. Pallottino, and M.G. Scutella. Transportation Science 36, 326-336, 2002.

This paper investigates minimum time and minimum cost path problems in street networks regulated by periodic traffic lights. We show that the minimum time path problem is polynomially solvable. On the other hand, minimum cost path problems are generally NP-hard. Special, realistic cases which are polynomially solvable are discussed.
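
A minimal sketch of why the minimum time problem is tractable: waiting for the next green phase is a FIFO delay, so a Dijkstra-style label-setting search on arrival times stays correct. The fixed green/red/offset light model and all data here are illustrative assumptions, not the paper's exact setting.

    import heapq

    def next_departure(t, green, red, offset):
        """Earliest time >= t at which a light that is green for `green`,
        then red for `red` (cycle starting at `offset`) allows departure."""
        phase = (t - offset) % (green + red)
        return t if phase < green else t + (green + red - phase)

    def min_time_path(adj, lights, source, target, t0=0):
        """adj[u]: list of (v, travel_time); lights[u]: (green, red, offset)."""
        dist = {source: t0}
        pq = [(t0, source)]
        while pq:
            t, u = heapq.heappop(pq)
            if u == target:
                return t
            if t > dist.get(u, float("inf")):
                continue
            td = next_departure(t, *lights[u])       # wait for green before leaving u
            for v, tau in adj[u]:
                if td + tau < dist.get(v, float("inf")):
                    dist[v] = td + tau
                    heapq.heappush(pq, (td + tau, v))
        return float("inf")

    adj = {0: [(1, 4), (2, 2)], 1: [(3, 3)], 2: [(1, 1), (3, 7)], 3: []}
    lights = {0: (5, 5, 0), 1: (2, 8, 0), 2: (3, 3, 1), 3: (1, 1, 0)}
    print(min_time_path(adj, lights, 0, 3))          # 9 on this toy network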

A Survey of Very Large-Scale Neighborhood Search Techniques

R.K. Ahuja, O. Ergun, J.B. Orlin, and A.P. Punnen. Discrete Applied Mathematics 123, 75-102, 2002.

Many optimization problems of practical interest are computationally intractable. Therefore, a practical approach for solving such problems is to employ heuristic (approximation) algorithms that can find nearly optimal solutions within a reasonable amount of computation time. An improvement algorithm is a heuristic algorithm that generally starts with a feasible solution and iteratively tries to obtain a better solution. Neighborhood search algorithms (alternatively called local search algorithms) are a wide class of improvement algorithms where at each iteration an improving solution is found by searching the “neighborhood” of the current solution. A critical issue in the design of a neighborhood search algorithm is the choice of the neighborhood structure, that is, the manner in which the neighborhood is defined. As a rule of thumb, the larger the neighborhood, the better is the quality of the locally optimal solutions, and the greater is the accuracy of the final solution that is obtained. At the same time, the larger the neighborhood, the longer it takes to search the neighborhood at each iteration. For this reason, a larger neighborhood does not necessarily produce a more effective heuristic unless one can search the larger neighborhood in a very efficient manner. This paper concentrates on neighborhood search algorithms where the size of the neighborhood is “very large” with respect to the size of the input data and in which the neighborhood is searched in an efficient manner. We survey three broad classes of very large-scale neighborhood search (VLSN) algorithms: (1) variable-depth methods in which large neighborhoods are searched heuristically, (2) large neighborhoods in which the neighborhoods are searched using network flow techniques or dynamic programming, and (3) large neighborhoods induced by restrictions of the original problem that are solvable in polynomial time.

Turing Completeness of Functional Programming Languages with One Set of Built-In Functions

G. Martirosyan. Proceedings of the Conference on Computer Sciences and Information Technologies (CSIT-2011), 370-373, 2011.

Many functional programming languages operate on S-expressions. The sets of built-in functions of those languages contain the car, cdr, cons, atom, eq, and if_then_else functions. The aim of the paper is to show that Turing computable functions defined on S-expressions can be expressed in functional programming languages that use the car, cdr, cons, atom, eq, and if_then_else built-in functions. In other words, if the set of built-in constants of a functional programming language contains all these functions, then that language is Turing complete.
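
A minimal Python sketch of the six built-ins over dotted pairs, with one list function written using only those primitives (thunks keep if_then_else lazy, as the built-in form would be). This illustrates the primitive set the paper studies, not its proof construction.

    def cons(a, d): return (a, d)
    def car(x): return x[0]
    def cdr(x): return x[1]
    def atom(x): return not isinstance(x, tuple)
    def eq(a, b): return atom(a) and atom(b) and a == b
    def if_then_else(c, t, f): return t() if c else f()

    NIL = "nil"

    def append(xs, ys):
        # Written with the six primitives only (plus recursion).
        return if_then_else(eq(xs, NIL),
                            lambda: ys,
                            lambda: cons(car(xs), append(cdr(xs), ys)))

    print(append(cons(1, cons(2, NIL)), cons(3, NIL)))   # (1, (2, (3, 'nil')))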

Speeding up Network Layout and Centrality Measures with NodeXL and the Nvidia CUDA Technology

P. Sharma, U. Khurana, B. Shneiderman, M. Scharrenbroich, and J. Locke. 2011 International Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction (SBP11), 2011.

In this paper we discuss speeding up the calculation of graph metrics and layout with NodeXL by exploiting the parallel architecture of modern-day graphics processing units (GPUs), specifically the Compute Unified Device Architecture (CUDA) by Nvidia. Graph centrality metrics like eigenvector, betweenness, and PageRank, and layout algorithms like Fruchterman-Reingold, are essential components of social network analysis (SNA). With the growth in adoption of SNA in different domains and the increasing availability of huge networked datasets for analysis, social network analysts are looking for tools that are faster and more scalable. Our results show up to 802 times speedup for a Fruchterman-Reingold graph layout and up to 17,972 times speedup for eigenvector centrality metric calculations.
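
As a CPU-side baseline, eigenvector centrality reduces to power iteration, and its matrix-vector product inner loop is exactly the data-parallel kernel that maps well onto CUDA. A plain NumPy sketch on a made-up graph (the paper's reported speedups come from the GPU implementation):

    import numpy as np

    def eigenvector_centrality(A, tol=1e-9, max_iter=1000):
        """Power iteration on the adjacency matrix A."""
        x = np.ones(A.shape[0]) / A.shape[0]
        for _ in range(max_iter):
            y = A @ x                        # the GPU-friendly step
            y /= np.linalg.norm(y)
            if np.linalg.norm(y - x) < tol:
                return y
            x = y
        return x

    A = np.array([[0, 1, 1, 0],              # small undirected example graph
                  [1, 0, 1, 1],
                  [1, 1, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    print(eigenvector_centrality(A).round(4))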

TreeCovery: Coordinated Dual Treemap Visualization for Exploring the Recovery Act

M. Rios-Berrios, P. Sharma, T. Lee, R. Schwartz, B. Shneiderman. Government Information Quarterly, 2011.

The American Recovery and Reinvestment Act dedicated $787 billion to stimulate the U.S. economy and mandated the release of the data describing the exact distribution of that money. The dataset is a large and complex one; one of its distinguishing features is its bi-hierarchical structure, arising from the distribution of money through agencies to specific projects and the natural aggregation of awards based on location. To offer a comprehensive overview of the data, a visualization must incorporate both these hierarchies. We present TreeCovery, a tool that accomplishes this through the use of two coordinated treemaps. The tool includes a number of innovative features, including coordinated zooming and filtering and a proportional highlighting technique across the two trees. TreeCovery was designed to facilitate data exploration, and initial user studies suggest that it will be helpful in insight generation. The Recovery Accountability and Transparency Board (RATB) has tested TreeCovery and is considering incorporating the concept into its visual analytics.

Innovation Trajectories for Information Visualization: Comparing Treemaps, Cone Trees, and Hyperbolic Trees

B. Shneiderman, C. Dunne, P. Sharma, and P. Wang. Information Visualization, 2010.

This paper reviews the trajectory of three information visualization innovations: treemaps, cone trees, and hyperbolic trees. These three ideas were first published in the early 1990s, so we are able to track academic publications, patents, and trade press articles over almost two decades. We describe the early history of each approach, problems with data collection from differing sources, appropriate metrics, and strategies for visualizing these longitudinal data sets. This paper makes two contributions: (1) it offers the information visualization community a history of how certain ideas evolved, influenced others, and were adopted for widespread use, and (2) it provides an example of how such trajectories of historical trends can be gathered and visualized. Guidance for innovators and future analysts is offered.

Encounter-based worms: Analysis and Defense

S. Tanachaiwiwat and A. Helmy. Ad Hoc Networks, Elsevier Journal, 2009.

An encounter-based network is a frequently-disconnected wireless ad-hoc network requiring immediate neighbors to store and forward aggregated data for information dissemination. Using traditional approaches such as gateways or firewalls for deterring worm propagation in encounter-based networks is inappropriate. We propose the worm interaction approach, which relies upon automated beneficial worm generation to alleviate the problem of worm propagation in such networks. To understand the dynamics of worm interactions and their performance, we mathematically model worm interactions based on major worm interaction factors, including worm interaction types, network characteristics, and node characteristics, using ordinary differential equations, and analyze their effects on our proposed metrics. We validate our proposed model using extensive synthetic and trace-driven simulations. We find that all worm interaction factors significantly affect the pattern of worm propagation. For example, immunization linearly decreases the infection of susceptible nodes, while on/off behavior only impacts the duration of infection. Using realistic mobile network measurements, we find that encounters are “bursty,” multi-group, and non-uniform. The trends from the trace-driven simulations are, in general, consistent with the model. Immunization and timely deployment seem to be the most effective ways to counter worm attacks in such scenarios, while cooperation may help in a specific case. These findings provide insight that we hope will aid the development of counter-worm protocols in future encounter-based networks.
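
As a hedged illustration of the modeling style only (the rates, populations, and interaction terms below are invented, not the paper's equations), here is a forward-Euler integration of a toy two-worm system: a malicious worm spreads SI-style while a beneficial worm spreads and patches infected nodes.

    def simulate(beta_m, beta_b, patch, N, I0, B0, T, dt=0.01):
        """S/I/B: susceptible, infected (malicious), benign-patched node counts."""
        S, I, B = N - I0 - B0, I0, B0
        for _ in range(int(T / dt)):
            new_I = beta_m * S * I / N       # malicious infections per unit time
            new_B = beta_b * S * B / N       # beneficial worm spreading
            cured = patch * I * B / N        # benign worm patching infected nodes
            S += dt * (-new_I - new_B)
            I += dt * (new_I - cured)
            B += dt * (new_B + cured)
        return S, I, B

    S, I, B = simulate(beta_m=0.5, beta_b=0.4, patch=0.6,
                       N=1000, I0=10, B0=10, T=50)
    print(round(S, 1), round(I, 1), round(B, 1))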

Enhancing Efficiency of Dynamic Threat Analysis for Combating and Competing Systems

A. Javadyan, E. Pogossian, and E. Ivanyan. NATO Security through Science Series 8, 85-91, 2007.

In this paper, we study the class of problems where Solutions Spaces are specified by combinatorial Game Trees. A version of Botvinnik’s Intermediate Goals At First (IGAF) algorithm is developed for strategy formation based on common knowledge planning and dynamic plan testing in the corresponding game tree. The algorithm (1) includes a range of knowledge types in the form of goals and rules, (2) demonstrates a strong tendency to increase strategy formation efficiency, and (3) increases the amount of knowledge available to the system.

Modeling and Analysis of Worm Interactions (War of the Worms)

S. Tanachaiwiwat and A. Helmy. IEEE Broadnets 2007 Fourth International Conference on Broadband Communications, Networks, and Systems, 2007.

“War of the worms” is a war between opposing computer worms, creating complex worm interactions as well as a detrimental impact on infrastructure. For example, in September 2003 the Welchia worms were launched to terminate the Blaster worms and patch the vulnerable hosts. In this paper, we try to answer the following questions: How can we explain the dynamics of such phenomena with a simple mathematical model? How can one worm win this war? How do other factors such as locality preference, bandwidth, worm replication size, and reaction time affect the number of infected hosts? We propose a new Worm Interaction Model (based upon and extending beyond the epidemic model) focusing on random-scan worm interactions. We also propose a new set of metrics to quantify the effectiveness of one worm in terminating another worm. We validate our worm interaction model using extensive ns-2 simulations. This study provides the first work to characterize and investigate worm interactions of random-scan worms in multihop networks.

On the Performance Evaluation of Encounter-based Worm Interactions Based on Node Characteristics

S. Tanachaiwiwat and A. Helmy. ACM Mobicom 2007 Workshop on Challenged Networks, 2007.

An encounter-based network is a frequently-disconnected wireless ad-hoc network requiring nearby neighbors to store and forward data utilizing mobility and encounters over time. Using traditional approaches such as gateways or firewalls for deterring worm propagation in encounter-based networks is inappropriate. We propose models for the worm interaction approach, which relies upon automated beneficial worm generation to alleviate the problem of worm propagation in such networks. We study and analyze the impact of key mobile node characteristics, including node cooperation, immunization, and on-off behavior, on worm propagation and interactions. We validate our proposed model using extensive simulations. We also find that, in addition to immunization, cooperation can reduce the level of worm infection. Furthermore, on-off behavior linearly impacts only the timing aspect, not the overall infection. Using realistic mobile network measurements, we find that encounters are non-uniform; the trends are consistent with the model, but the magnitudes are drastically different. Immunization seems to be the most effective countermeasure in such scenarios. These findings provide insight that we hope will aid the development of counter-worm protocols in future encounter-based networks.

Effective Discovery of Intrusion Protection Strategies

A. Javadyan, E. Pogossian, and E. Ivanyan. Springer, 263-276, 2005.

In this paper, the effectiveness of discovering strategy knowledge is studied for problems where the space of solution hypotheses is specified by game trees and target solutions are discovered by methods capable of systematic acquisition of expert knowledge about them. A version of Botvinnik’s Intermediate Goals At First algorithm is developed for strategy formation based on common knowledge planning and dynamic testing of the plans in the corresponding game tree. Applied to the intrusion protection problem with a range of types of knowledge in the form of goals and rules, the algorithm demonstrates a strong tendency toward increasing the efficiency of strategy formation as the amount of knowledge available to the system increases.

Correlation Analysis for Alleviating Effects of Inserted Data in Wireless Sensor Networks

S. Tanachaiwiwat and A. Helmy. MobiQuitous ’05, 2005.

This paper introduces a new approach that addresses data contamination problems from attacks in unattended wireless sensor networks. We propose a sliding-window-based spatio-temporal correlation analysis called the “Abnormal Relationships Test (ART)” to effectively detect, respond to, and become immune to spoofed data inserted by both various-ID impersonators and compromised nodes. We also give a systematic approach to identify the appropriate sliding window size and correlation coefficient threshold. Our study shows that the correlation property of an observed phenomenon is not always transitive: different phenomena from the same set of nodes at the same or different periods of time can have different correlation coefficients. Our simulation results reveal interesting relationships between outlier percentage and correlation coefficient. With proper parameter settings, ART achieves a high attack detection rate (90% for correlated attacks and 94% for random attacks, even with 100% data insertion).
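
A minimal sketch of the sliding-window idea: flag windows in which one sensor's readings stop correlating with a trusted neighbor's. The window size and threshold below stand in for the parameters the paper selects systematically; the data are synthetic.

    import numpy as np

    def abnormal_windows(a, b, window=20, threshold=0.8):
        """Indices of windows where the Pearson correlation of the two
        reading streams falls below the threshold."""
        flags = []
        for start in range(len(a) - window + 1):
            r = np.corrcoef(a[start:start + window], b[start:start + window])[0, 1]
            if r < threshold:
                flags.append(start)
        return flags

    rng = np.random.default_rng(0)
    base = np.cumsum(rng.normal(size=200))           # shared physical phenomenon
    honest = base + rng.normal(scale=0.1, size=200)
    spoofed = honest.copy()
    spoofed[120:] = rng.normal(scale=5.0, size=80)   # injected readings after t=120
    print(abnormal_windows(honest, spoofed)[:5])     # windows overlapping t >= 120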

Location-centric Isolation of Misbehavior and Trust Routing in Energy-constrained Sensor Networks

S. Tanachaiwiwat, D. Pinalkumar, R. Bhindwale, and A. Helmy. Workshop on Energy-Efficient Wireless Communications and Networks (EWCN 04) 463-469, 2004.

In sensor networks, a large number of distributed sensors collaborate to deliver information to the sinks. Such a scenario assumes trust between sensor nodes. However, sensors may fail or be compromised (in military operations) in a way that renders them misbehaving. In this work we target a misbehavior model in which a misbehaving node participates in routing signaling while consistently dropping queries and data packets. We target static sensor networks in which geographic routing is used. We identify and study the route infection effect in which one misbehaving node may block the path to many nodes in a sensor network.

Neighborhood Search Approaches to Beam Orientation Optimization in Intensity Modulated Radiation Therapy Treatment Planning

D.M. Aleman, A. Kumar, R.K. Ahuja, H.E. Romeijn, and J.F. Dempsey. Journal of Global Optimization 42, 769-784, 2008.

The intensity modulated radiation therapy (IMRT) treatment planning problem consists of several subproblems, which are typically solved sequentially. We seek to combine two of the subproblems: the beam orientation optimization (BOO) problem and the fluence map optimization (FMO) problem. The BOO problem is the problem of selecting the beam orientations to deliver radiation to the patient. The FMO problem is the problem of determining the amount of radiation intensity, or fluence, of each beamlet in each beam. The solution to the FMO problem measures the quality of a beam set, but the majority of previous BOO studies rely on heuristics and approximations to gauge the quality of the beam set. In contrast with these studies, we use an exact measure of the treatment plan quality attainable using a given beam set, which ensures convergence to a global optimum in the case of our simulated annealing algorithm and a local optimum in the case of our local search algorithm. We have also developed a new neighborhood structure that allows for faster convergence using our simulated annealing and local search algorithms, thus reducing the amount of time required to obtain a good solution. Finally, we show empirically that we can generate clinically acceptable treatment plans.
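
As a rough illustration of the search strategy described above, here is a generic simulated-annealing sketch over beam sets. The exact FMO-based plan score is abstracted into a black-box fmo_score function, and the neighborhood (replace one beam angle) and all names and parameters are illustrative assumptions rather than the authors' implementation.

    import math
    import random

    def anneal_beams(angles, k, fmo_score, iters=1000, t0=1.0, alpha=0.995, seed=0):
        """Select k beam orientations from candidate `angles`, minimizing
        fmo_score(beam_set), a stand-in for an exact FMO solve."""
        rng = random.Random(seed)
        current = rng.sample(angles, k)
        best, best_val = list(current), fmo_score(current)
        cur_val, t = best_val, t0
        for _ in range(iters):
            # Neighbor: swap one selected beam for an unused candidate angle.
            cand = list(current)
            cand[rng.randrange(k)] = rng.choice([a for a in angles if a not in current])
            val = fmo_score(cand)
            if val < cur_val or rng.random() < math.exp((cur_val - val) / t):
                current, cur_val = cand, val
                if cur_val < best_val:
                    best, best_val = list(current), cur_val
            t *= alpha  # geometric cooling schedule
        return best, best_val

    # Toy usage with a synthetic score (a real use would call an FMO solver):
    score = lambda beams: sum((a - 90) ** 2 for a in beams)
    print(anneal_beams(list(range(0, 360, 10)), 5, score))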

A New Linear Programming Approach to Radiation Therapy Treatment Planning Problems

H.E. Romeijn, R.K. Ahuja, J.F. Dempsey, and A. Kumar. Operations Research 54, 201-216, 2006.

We consider the problem of radiation therapy treatment planning for cancer patients. During radiation therapy, beams of radiation pass through a patient, killing both cancerous and normal cells. Thus, the radiation therapy must be carefully planned so that a clinically prescribed dose is delivered to targets containing cancerous cells, while nearby organs and tissues are spared. Currently, a technique called intensity-modulated radiation therapy (IMRT) is considered to be the most effective radiation therapy for many forms of cancer. In IMRT, the patient is irradiated from several beams, each of which is decomposed into hundreds of small beamlets, the intensities of which can be controlled individually. In this paper, we consider the problem of designing a treatment plan for IMRT when the orientations of the beams are given. We propose a new model that has the potential to achieve most of the goals with respect to the quality of a treatment plan that have been considered to date. However, in contrast with established mixed-integer and nonlinear programming formulations, we do so while retaining linearity of the optimization problem, which substantially improves the tractability of the optimization problem. Furthermore, we discuss how several additional quality and practical aspects of the problem that have been ignored to date can be incorporated into our linear model. We demonstrate the effectiveness of our approach on clinical data.

A Network Flow Algorithm to Minimize Beam-On Time for Unconstrained Multileaf Collimator Problems in Cancer Radiation Therapy

R.K. Ahuja and H.W. Hamacher. Networks 45, 36-41, 2004.

In this article, we study the modulation of intensity matrices arising in cancer radiation therapy using multileaf collimators. This problem can be formulated by decomposing a given m x n integer matrix into a positive linear combination of (0, 1) matrices with the strict consecutive 1’s property in rows. We consider a special case in which no technical constraints have to be taken into account. In this situation, the rows of the intensity matrix are independent of each other and the problem is equivalent to decomposing m intensity rows – independent of each other – into positive linear combinations of (0, 1) rows with the consecutive 1’s property. We demonstrate that this problem can be transformed into a minimum cost flow problem in a directed network that has the following special structures: (1) the network is acyclic; (2) it is a complete graph (that is, there is an arc (i, j) whenever i < j); (3) each arc cost is 1; and, (4) each arc is uncapacitated (that is, it has infinite capacity). We show that using this special structure, the minimum cost flow problem can be solved in O(n) time. Because we need to solve m such problems, the total running time of our algorithm is O(nm), which is an optimal algorithm to decompose a given m x n integer matrix into a positive linear combination of (0, 1) matrices.
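
The key quantity here has a well-known closed form in the unconstrained case: the minimum beam-on time of a single row equals the sum of its positive increments, and, under the usual convention that all rows are exposed simultaneously by independent leaf pairs, the matrix requires the maximum of these row values. A small sketch of that computation (the value the article's O(nm) network-flow decomposition attains; this is not the flow formulation itself):

    def row_beam_on_time(row):
        """Minimum coefficient sum to decompose one intensity row into
        (0,1) rows with consecutive ones: the sum of positive increments."""
        prev, total = 0, 0
        for a in row:
            if a > prev:
                total += a - prev
            prev = a
        return total

    def matrix_beam_on_time(matrix):
        # Rows are independent in the unconstrained case, so the matrix
        # can be delivered in the maximum over its rows.
        return max(row_beam_on_time(r) for r in matrix)

    print(matrix_beam_on_time([[0, 2, 3, 1], [1, 1, 4, 2]]))  # -> 4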

A Column Generation Approach to Radiation Therapy Treatment Planning Using Aperture Modulation

H.E. Romeijn, R.K. Ahuja, J.F. Dempsey, and A. Kumar. SIAM Journal on Optimization 15, 838-862, 2005.

This paper considers the problem of radiation therapy treatment planning for cancer patients. During radiation therapy, beams of radiation pass through a patient. This radiation kills both cancerous and normal cells, so the radiation therapy must be carefully planned to deliver a clinically prescribed dose to certain targets while sparing nearby organs and tissues. Currently, a technique called intensity modulated radiation therapy (IMRT) is considered to be the most effective radiation therapy for many forms of cancer. In IMRT, the patient is irradiated from several different directions. From each direction, one or more irregularly shaped radiation beams of uniform intensity are used to deliver the treatment. This paper deals with the problem of designing a treatment plan for IMRT that determines an optimal set of such shapes (called apertures) and their corresponding intensities. This is in contrast with established two-stage approaches where, in the first phase, each radiation beam is viewed as consisting of a set of individual beamlets, each with its own intensity. A second phase is then needed to approximate and decompose the optimal intensity profile into a set of apertures with corresponding intensities. The problem is formulated as a large-scale convex programming problem, and a column generation approach to deal with its dimensionality is developed. The associated pricing problem determines, in each iteration, one or more apertures to be added to our problem. Several variants of this pricing problem are discussed, each corresponding to a particular set of constraints that the apertures must satisfy in one or more of the currently available types of commercial IMRT equipment. Polynomial-time algorithms are presented for solving each of these variants of the pricing problem to optimality. Finally, the effectiveness of our approach is demonstrated on clinical data.

A Novel Linear Programming Approach to Fluence Map Optimization for Intensity Modulated Radiation Therapy Treatment Planning

H.E. Romeijn, R.K. Ahuja, J.F. Dempsey, A. Kumar, and J.G. Li. Physics in Medicine and Biology 48, 3521-3542, 2003.

We present a novel linear programming (LP) based approach for efficiently solving the intensity modulated radiation therapy (IMRT) fluence-map optimization (FMO) problem to global optimality. Our model overcomes the apparent limitations of a linear-programming approach by approximating any convex objective function by a piecewise linear convex function. This approach allows us to retain the flexibility offered by general convex objective functions while allowing us to formulate the FMO problem as an LP problem. In addition, a novel type of partial-volume constraint that bounds the tail averages of the differential dose-volume histograms of structures is imposed while retaining linearity, as an alternative approach to improve dose homogeneity in the target volumes and to attempt to spare as many critical structures as possible. The goal of this work is to develop a very rapid global optimization approach that finds high-quality dose distributions. Implementation of this model has demonstrated excellent results. We found globally optimal solutions for eight 7-beam head-and-neck cases in less than 3 minutes of computational time on a single-processor personal computer without the use of partial-volume constraints. Adding such constraints increased the running times by a factor of 2-3, but improved the sparing of critical structures. All cases demonstrated excellent target coverage (>95%), target homogeneity (<10% overdosing and <7% underdosing) and organ sparing using at least one of the two models.
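
The piecewise-linear device can be seen in miniature in the sketch below (illustrative only; the dose matrix, penalty slopes, and targets are toy values, not the paper's model). A convex per-voxel dose penalty is replaced by one epigraph variable per voxel and one linear constraint per piece, so the whole problem stays an LP:

    import numpy as np
    from scipy.optimize import linprog

    D = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]])  # toy dose matrix: dose = D @ x
    target = np.array([10.0, 10.0, 10.0])
    nvox, nbeam = D.shape

    # Convex penalty per voxel: f(d) = max(target - d, 2*(d - target), 0),
    # encoded as epigraph pieces t >= s*d + c*target with (s, c) pairs:
    pieces = [(-1.0, 1.0), (2.0, -2.0), (0.0, 0.0)]

    A_ub, b_ub = [], []
    for v in range(nvox):
        for s, c in pieces:                 # s*(D x)_v - t_v <= -c*target_v
            row = np.zeros(nbeam + nvox)
            row[:nbeam] = s * D[v]
            row[nbeam + v] = -1.0
            A_ub.append(row)
            b_ub.append(-c * target[v])

    cost = np.concatenate([np.zeros(nbeam), np.ones(nvox)])  # minimize sum of t_v
    res = linprog(cost, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (nbeam + nvox))
    print(res.x[:nbeam], res.fun)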

Strassen’s Matrix Multiplication on GPUs

J. Li, S. Ranka and S. Sahni. IEEE 17th International Conference on Parallel and Distributed Systems (ICPADS). 2011

We provide efficient single-precision and integer GPU implementations of Strassen’s algorithm as well as of Winograd’s variant. On an NVIDIA C1060 GPU, a speedup of 32% (35%) is obtained for Strassen’s 4-level implementation and 33% (36%) for Winograd’s variant relative to the sgemm (integer version of sgemm) code in CUBLAS 3.0 when multiplying 16384 x 16384 matrices. The maximum numerical error of the single-precision implementations is about 2 orders of magnitude higher than that of sgemm when n = 16384, and is zero for the integer implementations.
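
For reference, the seven-multiplication recursion underlying such implementations, in a plain CPU sketch (NumPy, power-of-two sizes, with a cutoff to the library product; the GPU versions in the paper apply the recursion for a fixed number of levels):

    import numpy as np

    def strassen(A, B, cutoff=64):
        n = A.shape[0]  # assumes square matrices, n a power of two
        if n <= cutoff:
            return A @ B  # fall back to the library product
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
        M1 = strassen(A11 + A22, B11 + B22, cutoff)
        M2 = strassen(A21 + A22, B11, cutoff)
        M3 = strassen(A11, B12 - B22, cutoff)
        M4 = strassen(A22, B21 - B11, cutoff)
        M5 = strassen(A11 + A12, B22, cutoff)
        M6 = strassen(A21 - A11, B11 + B12, cutoff)
        M7 = strassen(A12 - A22, B21 + B22, cutoff)
        C = np.empty_like(A)
        C[:h, :h] = M1 + M4 - M5 + M7
        C[:h, h:] = M3 + M5
        C[h:, :h] = M2 + M4
        C[h:, h:] = M1 - M2 + M3 + M6
        return C

    A = np.random.rand(128, 128); B = np.random.rand(128, 128)
    print(np.allclose(strassen(A, B), A @ B))  # True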

Pairwise Sequence Alignment for Very Long Sequences on GPUs

J. Li, S. Ranka and S. Sahni. IEEE 2nd International Conference on Computational Advances in Bio and Medical Sciences (ICCABS). 2012. [Extended version in International Journal of Bioinformatics Research and Applications 10.4-5 (2014): 345-368]

We develop novel single-GPU parallelizations of the Smith-Waterman algorithm for pairwise sequence alignment. Our algorithms, which are suitable for the alignment of a single pair of very long sequences, can be used to determine the alignment score as well as the actual alignment. Experimental results demonstrate an order of magnitude reduction in run time relative to competing GPU algorithms.
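
The underlying recurrence, in a minimal quadratic-time, quadratic-space scoring sketch (the paper's contribution is parallelizing this computation on a GPU and handling very long sequences in limited memory, which this sketch does not attempt):

    def smith_waterman(s, t, match=2, mismatch=-1, gap=-2):
        """Local alignment score via the Smith-Waterman recurrence."""
        m, n = len(s), len(t)
        H = [[0] * (n + 1) for _ in range(m + 1)]
        best = 0
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                sub = match if s[i - 1] == t[j - 1] else mismatch
                H[i][j] = max(0,                      # local alignment resets at 0
                              H[i - 1][j - 1] + sub,  # match/mismatch
                              H[i - 1][j] + gap,      # gap in t
                              H[i][j - 1] + gap)      # gap in s
                best = max(best, H[i][j])
        return best

    print(smith_waterman("ACACACTA", "AGCACACA"))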

Parallel Syntenic Alignment on GPUs

J. Li, S. Ranka, and S. Sahni. ACM Conference on Bioinformatics, Computational Biology and Biomedicine (ACM-BCB). 2012. [Extended version in BMC Bioinformatics 15.Suppl 8 (2014): S1.]

We develop novel single-GPU parallel algorithms for syntenic alignment using CUDA. Our algorithms can be used to determine the optimal alignment score as well as the actual optimal alignment for a single pair of sequences. Experimental results show that our best GPU alignment algorithm achieves a speedup of approximately 28 over the sequential algorithm.

Multicore and GPU Algorithms for Nussinov RNA Folding

J. Li, S. Ranka, and S. Sahni. IEEE 3rd International Conference on Computational Advances in Bio and Medical Sciences (ICCABS). 2013.

We develop cache-efficient, multicore, and GPU algorithms for RNA folding using Nussinov’s equations. Our cache-efficient algorithm provides a speedup between 1.6 and 3.0 relative to a naive single-core code. The multicore version of the cache-efficient single-core algorithm provides a speedup, relative to the naive single-core algorithm, between 7.5 and 14.0 on a 6-core hyperthreaded CPU. Our GPU algorithm for the NVIDIA C2050 is up to 1582 times as fast as the naive single-core algorithm and between 5.1 and 11.2 times as fast as the fastest previously known GPU algorithm for Nussinov RNA folding.
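
Nussinov's equations compute, for every subsequence, the maximum number of nested base pairs. A straightforward O(n^3) single-core reference, of the kind the abstract's speedups are measured against, looks roughly like this (here with no minimum loop length, for simplicity):

    def nussinov(seq):
        """Maximum number of nested base pairs via Nussinov's recurrence."""
        pairs = {('A','U'), ('U','A'), ('G','C'), ('C','G'), ('G','U'), ('U','G')}
        n = len(seq)
        N = [[0] * n for _ in range(n)]
        for span in range(1, n):
            for i in range(n - span):
                j = i + span
                best = max(N[i + 1][j], N[i][j - 1])   # i or j unpaired
                if (seq[i], seq[j]) in pairs:          # pair i with j
                    inner = N[i + 1][j - 1] if i + 1 <= j - 1 else 0
                    best = max(best, inner + 1)
                for k in range(i + 1, j):              # bifurcation
                    best = max(best, N[i][k] + N[k + 1][j])
                N[i][j] = best
        return N[0][n - 1]

    print(nussinov("GGGAAAUCC"))  # 3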

Optimal Alignment of Three Sequences On A GPU

J. Li, S. Ranka, and S. Sahni. 6th International Conference on Bioinformatics and Computational Biology (BICoB). 2014.

We develop two algorithms – layered and sloped – to align three sequences on a GPU. Our algorithms can be used to determine the alignment score as well as the actual alignment. Experiments conducted using an NVIDIA C2050 GPU show that our sloped algorithm is three times as fast as the layered one. Further, the sloped algorithm delivers a speedup of up to 90 relative to the single-core algorithm running on our host CPU when determining the score of the best alignment, and a speedup between 21 and 56 when computing the best alignment as well as its score.

GPU Matrix Multiplication

J. Li, S. Ranka, and S. Sahni. Book chapter in Multicore Computing: Algorithms, Architectures, and Applications (2013): 345. Publisher: Chapman and Hall/CRC. Editors: Sanguthevar Rajasekaran, Lance Fiondella, Mohamed Ahmed, Reda A. Ammar.

In this chapter, we explore the intricacies of programming a GPU to obtain high performance for the multiplication of two single-precision square matrices. We focus our development on NVIDIA’s Tesla series of GPUs, of which the C1060 is an example. Our example programs are developed using CUDA.

GPU Alignment of Two and Three Sequences

J. Li, S. Ranka, and S. Sahni. Book chapter in Advances in GPU Research and Practice (to appear in late 2016). Publisher: Elsevier Press. Editor: Hamid Sarbazi-Azad.

In this chapter, we consider the optimal alignment of two and three sequences using Graphics Processing Units (GPUs). The problem of aligning two sequences is commonly referred to as pairwise alignment. Experimental results on the NVIDIA Tesla C2050 are presented.

Cache and Energy Efficient Alignment of Very Long Sequences

C. Zhao and S. Sahni. IEEE ICCABS. 2015.

Data volumes are growing rapidly; some biological datasets already reach petabytes. We propose a new single-core algorithm framework that focuses on explicitly minimizing the number of data accesses to memory in order to improve performance and reduce energy consumption. For the sequence alignment problem, we develop cache- and energy-efficient algorithms to align very long sequences. These algorithms were evaluated experimentally on a single node of the IBM Blue Gene/Q. We were able to reduce the run time of the classical Myers and Miller linear-space alignment algorithm by up to 43%; energy consumption was reduced by up to 45% on our test data.

Cellular-based Population to Enhance Genetic Algorithm for Assignment Problems

H. Rajabalipour Cheshmehgaz, H. Haron, and M. R. Maybodi. American Journal of Intelligent Systems 1(1). 32-36. 2011.

In this paper, we describe a new cellular selection mechanism as an improvement to genetic algorithms for optimization problems, such as cellular channel assignment, that have multiple feasible/optimal solutions per instance. Considering the nature of relationships among individuals in a population, we use a two-dimensional cellular automaton, placing individuals onto its cells so that locality and neighborhood are defined on a Hamming-distance basis. This idea, the 2D Cellular Automata Hamming GA, introduces locality into genetic algorithms and global knowledge into the selection process operating on the cells of the 2D cellular automaton. Selection based on 2D cellular automata can maintain population diversity while ensuring fast convergence in the genetic search. The cellular selection of individuals is controlled by the structure of the cellular automaton to prevent rapid loss of population diversity and to improve convergence performance during the genetic search.

A Cellular-rearranging of Population in Genetic Algorithms to Solve Assembly-Line Balancing Problem

H. Rajabalipour Cheshmehgaz, M. I. Desa, and F. Kazemipour. Journal of Mechanical Engineering and Automation 2(2). 25-35. 2012.

The assembly line balancing problem (ALBP) is the allocation of assembly tasks to workstations in consideration of criteria such as time and the number of workstations. Due to the complexity of ALB, finding optimal solutions in terms of the number of workstations in the assembly line requires suitable meta-heuristic techniques, and genetic algorithms have been used to a large extent. Because most genetic algorithms tend to converge to locally optimal solutions, the balance between exploring new areas of the search space and exploiting good solutions can be sharpened with meta-heuristic enhancements. In this paper, a modified cellular (grid) population-rearranging structure is developed. The individuals of the population are located on cells according to the Hamming distance to their neighbors before each regeneration, and a family of cellular genetic algorithms (CGAs) is defined. By using the cellular structure and the rearrangements, some of the family members find better solutions than others within the same number of iterations, and they behave much more reasonably in acquiring solutions in terms of the number of workstations and a smoothly balanced task assignment under the criteria conditions.

Efficient Missing Tag Detection in RFID Systems

W. Luo, S. Chen, T. Li, and S. Chen. Proc. of IEEE INFOCOM, mini-conference. 356-360, 2011.

RFID tags have many important applications in automated warehouse management. One example is to monitor a set of tags and detect whether some tags are missing. The objects to which the missing tags are attached are likely to be missing, too, due to theft or administrative error. Prior research on this problem has primarily focused on efficient protocols that reduce the execution time in order to avoid disruption of normal inventory operations. This paper makes several new advances. First, we observe that the existing protocol is far from being optimal in terms of execution time. We are able to cut the execution time to a fraction of what is currently needed. Second, we study the missing-tag detection problem from a new energy perspective, which is very important when battery-powered active tags are used. The new insight provides flexibility for the practitioners to meet their energy and time requirements.
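
A rough sketch of the slot-bitmap idea behind this line of protocols (illustrative, not the paper's exact scheme): the reader knows the IDs it expects, hashes each into a slot of a short frame, and compares the expected bitmap against the observed replies; an expected-busy slot that stays silent betrays a missing tag. Collisions can mask a missing tag in a single frame, which is why practical protocols run several rounds with fresh hash seeds.

    import hashlib

    def slot(tag_id, seed, frame_size):
        h = hashlib.sha1(f"{tag_id}:{seed}".encode()).digest()
        return int.from_bytes(h[:4], "big") % frame_size

    def detect_missing(expected_ids, present_ids, frame_size=64, seed=1):
        """Return True if the observed slot bitmap proves some tag is missing."""
        observed = {slot(t, seed, frame_size) for t in present_ids}
        for t in expected_ids:
            if slot(t, seed, frame_size) not in observed:
                return True  # a slot that should be busy was silent
        return False

    tags = [f"tag{i}" for i in range(50)]
    print(detect_missing(tags, tags))        # False: all tags present
    print(detect_missing(tags, tags[:-3]))   # likely True (collisions may mask it)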

Probabilistic Missing-tag Detection and Energy-Time Tradeoff in Large-scale RFID Systems

W. Luo, S. Chen, T. Li, and Y. Qiao. Proc. of ACM Mobihoc. 2012.

RFID (radio frequency identification) technologies are poised to revolutionize retail, warehouse, and supply chain management. One of their interesting applications is to automatically detect missing tags (and the associated objects) in a large storage space. In order to catch any missing event (such as theft) in a timely manner, the detection operation may have to be performed frequently. Because RFID systems typically work under low-rate channels, past research has focused on reducing the execution time of a detection protocol in order to prevent excessively long protocol execution from interfering with normal inventory operations. However, when active tags are used to provide a large spatial coverage, energy efficiency becomes critical in prolonging the lifetime of these battery-powered tags. The existing literature lacks a thorough study on how to conserve energy in the process of missing-tag detection and how to jointly optimize energy efficiency and time efficiency. This paper makes two important contributions. First, we propose a novel protocol design that takes both energy efficiency and time efficiency into consideration. It achieves multi-fold reduction in both energy cost and execution time compared with the best existing work. In some cases, the reduction is more than an order of magnitude. Second, we reveal a fundamental energy-time tradeoff in missing-tag detection. Through our analytical framework, we are able to flexibly control the tradeoff through a couple of system parameters in order to achieve desirable performance.

An Efficient Protocol for RFID Multigroup Threshold-based Classification

W. Luo, Y. Qiao, and S. Chen. Proc. of IEEE INFOCOM. 2013.

RFID technology has many applications such as object tracking, automatic inventory control, and supply chain management. It can be used to identify individual objects or count the population of each type of object in a deployment area, whether the objects are passports, retail products, books, or even humans. Most existing work adopts a “flat” RFID system model and performs functions of collecting tag IDs, estimating the number of tags, or detecting missing tags. However, in practice, tags are often attached to objects of different groups, which may represent different product types in a warehouse, different book categories in a library, etc. An interesting problem, called multigroup threshold-based classification, is to determine whether the number of objects in each group is above or below a prescribed threshold value. Solving this problem is important for inventory tracking applications. If the number of groups is very large, it will be inefficient to measure the groups one at a time. The best existing solution for multigroup threshold-based classification is based on generic group testing, the design of which is however geared toward detecting a small number of populous groups. Its performance degrades quickly when the number of groups above the threshold becomes large. In this paper, we propose a new classification protocol based on logical bitmaps. It achieves high efficiency by measuring all groups in a mixed fashion. In the meantime, we show that the new method is able to perform threshold-based classification with an accuracy that can be pre-set to any desirable level, allowing a tradeoff between time efficiency and accuracy.

Missing-Tag Detection and Energy-Time Tradeoff in Large-Scale RFID Systems with Unreliable Channels

W. Luo, Y. Qiao, and S. Chen. IEEE/ACM Transactions on Networking, vol. 22, issue 4, pp. 1079 – 1091. 2014.

Radio frequency identification (RFID) technologies are poised to revolutionize retail, warehouse, and supply chain management. One of their interesting applications is to automatically detect missing tags in a large storage space, which may have to be performed frequently to catch any missing event such as theft in time. Because RFID systems typically work under low-rate channels, past research has focused on reducing execution time of a detection protocol to prevent excessively long protocol execution from interfering with normal inventory operations. However, when active tags are used for a large spatial coverage, energy efficiency becomes critical in prolonging the lifetime of these battery-powered tags. Furthermore, much of the existing literature assumes that the channel between a reader and tags is reliable, which is not always true in reality because of noise/interference in the environment. Given these concerns, this paper makes three contributions. First, we propose a novel protocol design that considers both energy efficiency and time efficiency. It achieves multifold reduction in both energy cost and execution time when compared to the best existing work. Second, we reveal a fundamental energy-time tradeoff in missing-tag detection, which can be flexibly controlled through a couple of system parameters in order to achieve desirable performance. Third, we extend our protocol design to consider channel error under two different models. We find that energy/time cost will be higher in unreliable channel conditions, but the energy-time tradeoff relation persists.

An Efficient Protocol for RFID Multigroup Threshold-based Classification Based on Sampling and Logical Bitmap

W. Luo, Y. Qiao, S. Chen, and M. Chen. IEEE/ACM Transactions on Networking, Vol. PP, issue 99, pp. 1. 2014.

Most existing research adopts a “flat” view of radio frequency identification (RFID) systems to perform various functions of collecting tag IDs, estimating the number of tags, detecting the missing tags, etc. However, in practice, tags are often attached to objects of different groups, which may represent different product types in a warehouse, different book categories in a library, etc. As we move from a flat view to an organized group view, many interesting problems arise. One of them, called multigroup threshold-based classification, is the focus of this paper. It is to determine whether the number of objects in each group is above or below a prescribed threshold value. Solving this problem is important for inventory tracking applications. If the number of groups is very large, it will be inefficient to measure the groups one at a time. The best existing solution for multigroup threshold-based classification is based on generic group testing, whose design is however geared toward detecting a small number of populous groups. Its performance degrades quickly when the number of groups above the threshold becomes large. In this paper, we propose a new classification protocol based on tag sampling and logical bitmaps. It achieves high efficiency by measuring all groups in a mixed fashion. In the meantime, we show that the new method is able to perform threshold-based classification with an accuracy that can be preset to any desirable level, allowing tradeoff between time efficiency and accuracy.

Privacy-preserving RFID Authentication based on Cryptographical Encoding

T. Li, W. Luo, Z. Mo, and S. Chen. Proc. of IEEE INFOCOM, pp. 2174-2182. 2012.

Radio Frequency Identification (RFID) technology has been adopted in many applications, such as inventory control, object tracking, theft prevention, and supply chain management. Privacy-preserving authentication in RFID systems is a very important problem. Existing protocols employ tree structures to achieve fast authentication. We observe that these protocols require a tag to transmit a large amount of data in each authentication, which costs significant bandwidth and energy overhead. Current protocols also impose heavy computational demand on the RFID reader. To address these issues, we design two privacy-preserving protocols based on a new technique called cryptographical encoding, which significantly reduces both the authentication data transmitted by each tag and the computation overhead incurred at the reader. Our analysis shows that the new protocols are able to reduce authentication data by more than an order of magnitude and computational demand by about an order of magnitude when compared with the best existing protocol.

End-to-End Maxmin Fairness in Multihop Wireless Networks: Theory and Protocol

L. Zhang, W. Luo, S. Chen, and Y. Jian. Journal of Parallel and Distributed Computing, vol. 72, issue 3, pp. 462-474, 2012.

To promote commercial deployment of multihop wireless networks, the research and industry communities must develop new theories and protocols for flexible traffic engineering in these networks in order to support diverse user applications. This paper studies an important traffic engineering problem – how to support fair bandwidth allocation among all end-to-end flows in a multihop wireless network – which, in more precise terms, is to achieve the global maxmin fairness objective in bandwidth allocation. There exists no distributed algorithm for this problem in multihop wireless networks using IEEE 802.11 DCF. We make two major contributions. The first is to develop a novel theory that maps the global maxmin objective to four local conditions and to prove their equivalence. The second is to design a distributed rate adjustment protocol based on those local conditions that achieves the global maxmin objective through fully distributed operations. Compared with the prior art, our protocol has a number of advantages: it is designed for the popular IEEE 802.11 DCF, it replaces per-flow queueing with per-destination queueing, and it achieves far better fairness (or weighted fairness) among end-to-end flows.
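
For intuition about the objective itself, the classical centralized way to compute max-min fair rates is progressive filling: raise all active flows' rates equally until some link saturates, freeze the flows crossing it, and repeat. A small sketch of that definition (which the paper's distributed protocol approximates; this is not the protocol itself):

    def maxmin_rates(routes, capacity):
        """Progressive filling. routes: flow -> set of links it crosses;
        capacity: link -> capacity. Returns the max-min fair rates."""
        rate = {f: 0.0 for f in routes}
        active = set(routes)
        residual = dict(capacity)
        while active:
            loads = {l: sum(1 for f in active if l in routes[f]) for l in residual}
            used = [l for l in residual if loads[l] > 0]
            inc = min(residual[l] / loads[l] for l in used)  # until a link fills
            for l in used:
                residual[l] -= inc * loads[l]
            saturated = {l for l in used if residual[l] < 1e-12}
            for f in list(active):
                rate[f] += inc
                if routes[f] & saturated:   # flows on a saturated link freeze
                    active.discard(f)
        return rate

    routes = {"f1": {"L1"}, "f2": {"L1", "L2"}, "f3": {"L2"}}
    print(maxmin_rates(routes, {"L1": 10.0, "L2": 4.0}))
    # {'f1': 8.0, 'f2': 2.0, 'f3': 2.0}: L2 is the bottleneck for f2 and f3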

An Efficient Tag Search Protocol in Large-Scale RFID Systems

M. Chen, W. Luo, Z. Mo, S. Chen, and Y. Fang. Proc. of IEEE INFOCOM. 2013.

Radio frequency identification (RFID) technology has many applications in inventory management, supply chain, product tracking, transportation and logistics. One research issue of practical importance is to search for a particular group of tags in a large-scale RFID system. Time efficiency is a core factor that must be taken into consideration when designing a tag search protocol to ensure scalability. In this paper, we design a new technique called filtering vector, which can significantly reduce transmission overhead during search process, thereby shortening search time. Based on this technique, we propose an iterative tag search protocol. In each round, we filter out some tags and eventually terminate the search process when the search result meets the accuracy requirement. The simulation results demonstrate that our protocol performs much better than the best existing ones.
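
The filtering vector is in the spirit of a Bloom filter: a compact bit vector broadcast by the reader so that tags outside the wanted set can rule themselves out, with a few false positives removed in later rounds. A generic Bloom-filter sketch of that filtering step, with illustrative sizes and hashing (not the paper's exact construction):

    import hashlib

    def bits(item, m, k):
        """k bit positions in an m-bit vector for `item`."""
        out = []
        for i in range(k):
            h = hashlib.sha1(f"{i}:{item}".encode()).digest()
            out.append(int.from_bytes(h[:4], "big") % m)
        return out

    def make_filter(wanted, m=256, k=3):
        v = [0] * m
        for t in wanted:
            for b in bits(t, m, k):
                v[b] = 1
        return v

    def maybe_wanted(tag, v, k=3):
        """A tag stays active only if all its bits are set (may false-positive)."""
        return all(v[b] for b in bits(tag, len(v), k))

    wanted = [f"tag{i}" for i in range(20)]
    v = make_filter(wanted)
    print(all(maybe_wanted(t, v) for t in wanted))            # True: no false negatives
    print(sum(maybe_wanted(f"other{i}", v) for i in range(1000)))  # few false positives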

An Efficient Tag Search Protocol in Large-Scale RFID Systems with Noisy Channel

M. Chen, W. Luo, Z. Mo, S. Chen, and Y. Fang. IEEE/ACM Transactions on Networking, vol. PP, issue 99, pp. 1, 2015.

Radio frequency identification (RFID) technology has many applications in inventory management, supply chain, product tracking, transportation, and logistics. One research issue of practical importance is to search for a particular group of tags in a large-scale RFID system. Time efficiency is a crucial factor that must be considered when designing a tag search protocol to ensure its execution will not interfere with other normal inventory operations. In this paper, we design a new technique called the filtering vector, which can significantly reduce transmission overhead during the search process, thereby shortening search time. Based on this technique, we propose an iterative tag search protocol. In each round, we filter out some tags and eventually terminate the search process when the search result meets the accuracy requirement. Furthermore, we extend our protocol to work under noisy channels. The simulation results demonstrate that our protocol performs much better than the best existing work.

Scan Detection in High-Speed Networks Based on Optimal Dynamic Bit Sharing

T. Li, S. Chen, W. Luo, and M. Zhang. IEEE INFOCOM, pp. 3200-3208, 2011.

Scan detection is one of the most important functions in intrusion detection systems. In order to keep up with ever-higher line speeds, a recent research trend is to implement scan detection in fast but small SRAM. This leads to a difficult technical challenge, because the amount of traffic to be monitored is huge while the on-die memory space for performing such a monitoring task is very limited. We propose an efficient scan detection scheme based on dynamic bit sharing, which incorporates probabilistic sampling and bit sharing for compact information storage. We design a maximum likelihood estimation method to extract per-source information from the shared bits in order to determine the scanners. Our new scheme ensures that the false positive/false negative ratios are bounded with high probability. Moreover, given an arbitrary set of bounds, we develop a systematic approach to determine the optimal system parameters that minimize the amount of memory needed to meet the bounds. Experiments based on a real Internet traffic trace demonstrate that the proposed scan detection scheme reduces memory consumption by three to twenty times compared with the best existing work.
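
The estimation step can be illustrated with the classical linear-counting maximum-likelihood estimate: hash each distinct contact into an m-bit array and estimate the cardinality as -m*ln(V), where V is the fraction of bits still zero. A sketch of that building block (the paper's contribution, sharing one bit pool among all sources with optimally chosen parameters, is not reproduced here):

    import hashlib
    import math

    def estimate_distinct(items, m=1024):
        """Linear counting: hash items into m bits, then apply the
        maximum-likelihood estimate based on the zero-bit fraction."""
        bitmap = [0] * m
        for x in items:
            h = hashlib.sha1(str(x).encode()).digest()
            bitmap[int.from_bytes(h[:4], "big") % m] = 1
        zero_frac = bitmap.count(0) / m
        return -m * math.log(zero_frac) if zero_frac > 0 else float("inf")

    print(round(estimate_distinct(f"dst{i}" for i in range(500))))  # approx. 500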

Spreader Classification Based on Optimal Dynamic Bit Sharing

T. Li, S. Chen, W. Luo, M. Zhang, and Y. Qiao. IEEE/ACM Transactions on Networking, vol. 21, issue 3, pp. 817-830, 2013

Spreader classification is an online traffic measurement function that has many important applications. In order to keep up with ever-higher line speeds, the recent research trend is to implement such functions in fast but small on-die SRAM. However, the mismatch between the huge amount of Internet traffic to be monitored and the limited on-die memory space presents a significant technical challenge. In this paper, we propose an Efficient Spreader Classification (ESC) scheme based on dynamic bit sharing, a compact information storage method. We design a maximum likelihood estimation method to extract per-source information from the compact storage and determine the heavy spreaders. Our new scheme ensures that the false positive/negative ratios are bounded. Moreover, given an arbitrary set of bounds, we develop a systematic approach to determine the optimal system parameters that minimize the amount of memory needed to meet the bounds. Experiments based on a real Internet traffic trace demonstrate that the proposed spreader classification scheme reduces memory consumption by 3 to 20 times when compared to the best existing work. We also investigate a new multi-objective spreader classification problem and extend our classification scheme to solve it.

How can network coding help P2P content distribution?

G. Ma, Y. Xu, K. Ou, and W. Luo. IEEE International Conference on Communications (ICC’09), 2009.

It is well known that network coding can enhance the performance of peer-to-peer (P2P) content distribution systems, since it benefits block scheduling. In this paper, we introduce SmartCode, our P2P content distribution system with sparse network coding, deployed on PlanetLab. SmartCode uses pre-checking to avoid transmitting linearly dependent blocks.

Design and Representation of Complex Objects in Database Systems

Lin Qi, Huiyuan Zhang, and Markus Schneider. 23rd ACM SIGSPATIAL Int. Conf. on Advances in Geographic Information Systems (ACM SIGSPATIAL GIS). 2015.

In recent decades, application domains such as geospatial, genomic, and multimedia computing have made use of very large and diverse application objects such as spatial networks and protein structures. These objects are complex in the sense that they are highly structured and of variable size. Storing, accessing, and manipulating them in a standard and efficient manner is very challenging. State-of-the-art solutions handle such objects by using file-system formats like HDF and XML, serialization techniques like Protocol Buffers, and the BLOB data type in databases. However, specialized file-format solutions lack well-established database system features, and neither a uniform concept nor mechanisms exist for supporting complex objects stored as BLOBs. In this article, a novel and database-friendly framework for specifying and interpreting complex objects is proposed. Empirical studies show that our approach outperforms prevailing methods in both processing time and storage consumption.

Adaptive scheduling for TCP-fairness in wireless broadband networks

Balakrishnan K, Ritesh Kumar Kalle, and Debabrata Das. Wireless Multi-Access Environments and Quality of Service Provisioning: Solutions and Application, IGI Global, 161-186. 2011.

The exponential growth of multimedia traffic, predominantly over UDP transport, poses a threat to TCP’s best-effort throughput. This problem is more acute in last-mile broadband wireless access networks. Most scheduling algorithms aim at improving the combined TCP and UDP throughput, or at improving TCP throughput without studying the effects of inelastic traffic such as UDP. This chapter argues for the necessity of TCP throughput protection and proposes a novel, dynamically adapting weighted fair queue (WFQ)-based scheduling mechanism that provides a higher degree of TCP protection. This is accomplished by differentiating between TCP and UDP flows, provisioning buffers for each flow, and prioritizing TCP ACK packets. Simulation results show that the proposed mechanism yields a relative improvement of up to 29% in TCP goodput and 7.5% in aggregate MAC throughput over the mechanism without the proposed improvements.

Evaluation and optimization of Short TCP completion time over wireless broadband access networks

Balakrishnan K and Debabrata Das. Wireless Networks, Springer. 1549-1562, December 2014.

TCP has always been the most prominent protocol used on the Internet. More than 90% of HTTP sessions on the Internet have less than 20 KB of data to send and finish the whole transaction within the slow-start phase of TCP. These short TCP transactions face high latency and network underutilization problems over bottleneck last-mile wireless broadband connections, due to the higher bit error rate of wireless links compared with wired connections. In this paper, we evaluate short TCP by simulating it over wireless broadband networks to bring out the open challenges in optimizing short TCP completion time. We also derive a mathematical model for estimating and optimizing the completion time of short TCP connections over broadband wireless networks. We extensively examine our model for short TCP and propose a generic link-level scheduler mechanism with O(1) complexity that uses cross-layer information to optimize connection completion time. Our results show that the mathematical model matches the simulation results, and that the proposed novel scheduler speeds up a short TCP connection by up to 13%, which in turn increases the quality of service for the end user without affecting other connections.
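
The latency issue is easy to see with a back-of-envelope slow-start model (a simplification, not the authors' model): with an initial congestion window of c segments doubling each round trip, roughly ceil(log2(S/c + 1)) round trips are needed to deliver S segments without loss, so small transfers are dominated by round-trip times rather than bandwidth. All parameter values below are illustrative.

    import math

    def slow_start_rounds(total_kb, mss_kb=1.46, init_cwnd=3):
        segments = math.ceil(total_kb / mss_kb)
        # Cumulative segments after r round trips: init_cwnd * (2**r - 1).
        return math.ceil(math.log2(segments / init_cwnd + 1))

    def completion_time_ms(total_kb, rtt_ms, handshake=True):
        rtts = slow_start_rounds(total_kb) + (1 if handshake else 0)
        return rtts * rtt_ms

    # A 20 KB transfer, the typical short HTTP transaction cited above:
    print(completion_time_ms(20, rtt_ms=100))  # 400 ms at a 100 ms RTT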

WEBS: WiMAX Emulation Testbed to Benchmark Streaming Multimedia QoS

Ritesh Kumar K, Mayank Raj, Balakrishnan K, and Debabrata Das. IEEE IMSAA. 2009.

New-generation multimedia applications over broadband wireless networks like WiMAX consume a relatively large amount of battery power. Most studies in this field have focused either on power conservation or on the video quality evaluation aspect of WiMAX, based on early-stage simulations. However, network emulation studies are preferred during the design phase to guide product designs. To the best of our knowledge, no emulator has been reported that evaluates a power saving algorithm’s impact on the perceptual quality of video over WiMAX. In this paper we describe WEBS, a novel WiMAX network emulation testbed based on the well-accepted QualNet simulator, to benchmark the quality of streaming video along with its impact on the energy conservation mechanism in WiMAX. In particular, we observe in our emulation that, irrespective of the motion content in the video, the WiMAX subscriber device spends around 45 to 55% of its time in sleep mode during the video streaming period for our simulation parameters. The perceptual distortion for videos with a greater motion component is around three times that of videos with lower motion components at the same coding bit rate. We also find that for moderate-motion videos, the perceptual distortion with power saving mode enabled is almost equal to that without it. Thus our results show that for low- and moderate-motion videos, enabling power saving on the WiMAX device can extend battery life by approximately 50% without reducing video quality.

Implementation of a Simulator for finding Optimum Number of Connections and Window Size for Efficient End-to-End TCP File Transfer

Rakesh Kashyap H. P, K. Balakrishnan, and Debabrata Das. ICECIT. 2013.

The number of Internet users has increased manyfold and is still growing rapidly. In such a scenario, minimizing queuing delay, reducing packet loss, and maximizing throughput are the prime goals of most researchers. Various research works have shown that setting up multiple TCP connections between two hosts instead of one provides a fair solution to the problem. Finding the optimum congestion window size (the window size for which packet dropping is minimal in each of these TCP connections) is also the primary goal of many research studies. We have developed a network simulator and tested it for providing answers to various questions posed by setting up multiple TCP connections for a file upload or download. One of the important issues addressed by this simulator is finding the optimum number of connections and the optimum congestion window size that need to be established for a quick file transfer using the TCP protocol under a particular network condition, as revealed in some of the results in this paper.

A flexible three-level logistic network design considering cost and time criteria with a multi-objective evolutionary algorithm

H. Rajabalipour Cheshmehgaz, M. I. Desa, and Antoni Wibowo. Journal of Intelligent Manufacturing 24(2), 277-293. 2013.

Time and cost are familiar criteria for every logistics provider, and both have long been treated as objectives to be minimized simultaneously. However, the criteria naturally conflict, even with the flexibilities and constraints that appear in logistics networks. This paper is concerned with three-level logistics networks with potential suppliers, distribution centers (DCs), and deterministic demands from available consumers. The networks also benefit from potential direct shipments from suppliers to consumers, while suppliers’ and DCs’ facilities may face limited capacity in their seasonal supplying and warehousing processes. The goal is to (re)configure the networks so as to minimize response time to consumers, transportation cost, and facility cost. The networks are therefore formulated as multiple-criteria decision-making problems that admit more than one configuration when time and cost are optimized at the same time. Due to the flexibility and the constraints, the decision-maker needs a set of compromise solutions that represent optimal configurations based on the objectives, without requiring prior knowledge. To this end, the problems are formulated as four individual logistics network models that vary in the flexibility option and/or the capacitated facilities. To find the compromise solutions, a Pareto-based multi-objective evolutionary algorithm, NSGA-II, is customized and then applied to an illustrative case study. The results are analyzed through two performance measures: hypervolume and the number of optimal solutions obtained.
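
At the core of Pareto-based MOEAs such as NSGA-II is the dominance relation used to rank candidate configurations. A minimal sketch of the dominance test and non-dominated filtering for a minimization problem (a generic building block, not the paper's customized algorithm):

    def dominates(a, b):
        """a dominates b (minimization): no worse everywhere, better somewhere."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        return [p for p in points
                if not any(dominates(q, p) for q in points if q is not p)]

    # Toy bi-objective points (e.g., response time vs. total cost):
    pts = [(1, 9), (2, 4), (3, 3), (4, 5), (6, 1), (7, 2)]
    print(pareto_front(pts))  # [(1, 9), (2, 4), (3, 3), (6, 1)]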

A polar-based guided multi-objective evolutionary algorithm to search for optimal solutions interested by decision-makers in a logistics network design problem

H. Rajabalipour Cheshmehgaz, Md. Nazrul Islam, and M. I. Desa. Journal of Intelligent Manufacturing 25(4), 699-726. 2014.

In practical multi-objective optimization problems, decision-makers may be interested in optimal solutions whose objective values are close to values they specify. Guided multi-objective evolutionary algorithms (guided MOEAs) have been widely used to steer the evolutionary search toward such solutions. However, most guided MOEAs need to be iteratively and interactively evaluated and then guided by decision-makers through re-formulating or re-weighting objectives, which can negatively affect the algorithm’s performance. In this paper, a novel guided MOEA is proposed that uses a dynamic polar-based region around a particular point in objective space. Based on this region, new selection operations are designed so that the algorithm can guide the evolutionary search toward optimal solutions close to the particular point in objective space without iterative and interactive effort. The proposed guided MOEA is tested on the multi-criteria decision-making problem of flexible logistics network design with different desired points. Experimental results show that the proposed guided MOEA outperforms two of the most effective guided and non-guided MOEAs, R-NSGA-II and NSGA-II.

Reliability analysis of evacuation routes under capacity uncertainty of road links

Gino Lim, Mukesh Rungta, and MohammadReza Baharnemati. IIE Transactions 47(1). January 2015.

This article presents a reliability-based evacuation route planning model that seeks to find the relationship between the clearance time, number of evacuation paths, and congestion probability during an evacuation. Most of the existing models for network evacuation assume deterministic capacity estimates for road links without taking into account the uncertainty in capacities induced by myriad external conditions. Only a handful of models exist in the literature that account for capacity uncertainty of road links. A dynamic network–based evacuation model is extended by incorporating probabilistic arc capacity constraints, and a minimum-cost network flow problem is formulated that finds a lower bound on the clearance time within the framework of a chance-constrained programming technique. Network breakdown minimization principles for traffic flow in evacuation planning problem are applied, and a path-based evacuation routing and scheduling model is formulated. Given the horizon time for evacuation, the model selects the evacuation paths and finds flows on the selected paths that result in the minimum congestion in the network along with the reliability of the evacuation plan. Numerical examples are presented and the effectiveness of the stochastic models in evacuation planning is discussed. It is shown that the reliability-based evacuation plan is conservative compared with plans made using a deterministic model. Stochastic models guarantee that congestion can be avoided with a higher confidence level at the cost of an increased clearance time.

Designing Production-Inventory-Transportation Systems with Capacitated Cross-Docks

Hossein Abouee-Mehrizi, Oded Berman, and MohammadReza Baharnemati. Transportation Science 48(1):121-135. January 2014.

We consider a two-echelon supply chain problem in which the demand of a set of retailers is satisfied from a set of suppliers and shipped through a set of capacitated cross-docks that are to be established. The objective is to determine the number and location of cross-docks and the assignment of retailers to suppliers via cross-docking so that the total cost of pipeline and retailers’ inventory, transportation, and facility location is minimized. We formulate the problem as a nonlinear mixed integer program. We first derive several structural results for special cases of the problem. We also demonstrate that the Capacitated Plant Fixed-Charge Transport Location Problem is a special case of our problem. To solve the general problem, we show that it can be written as a cutting stock problem and develop a column generation algorithm to solve it. We investigate the efficiency of the proposed algorithm numerically. We then extend the problem by allowing different truck capacities as decision variables.

Optimal egress time calculation and path generation for large evacuation networks

Mukesh Rungta, Gino Lim, and MohammadReza Baharnemati. Annals of Operations Research 201(1). November 2012.

Finding the optimal clearance time and deciding the path and schedule of evacuation for large networks have traditionally been computationally intensive. In this paper, we propose a new method for solving this dynamic network flow problem with considerably lower computation time. Using a three-phase solution method, we provide solutions for the required clearance time for complete evacuation, the minimum number of evacuation paths required to evacuate in the least possible time, and the starting schedules on those paths. First, a lower bound on the clearance time is calculated using a minimum-cost dynamic network flow model on a modified network graph representing the transportation network. Next, a solution pool of feasible paths between all O-D pairs is generated. Using the input from the first two models, a flow assignment model is developed to select the best paths from the pool, assign flow, and decide the schedule for evacuation with the lowest possible clearance time. All of the proposed models are mixed integer linear programming models, and the formulation addresses the System Optimum (SO) scenario, where the emphasis is on complete network evacuation in the minimum possible clearance time without any preset priority. We demonstrate that the model can handle large networks with low computation time. A numerical example illustrates the applicability and effectiveness of the proposed approach for evacuation.

A Capacitated Network Flow Optimization Approach for Short Notice Evacuation Planning

Gino Lim, Shabnam Zangeneh, MohammadReza Baharnemati, and Tiravat Assavapokee. European Journal of Operational Research 223(1). November 2012.

We present a capacity-constrained network flow optimization approach for finding evacuation paths, flows, and schedules so as to maximize the total number of evacuees for short notice evacuation planning (SNEP). Due to the enormous size of the resulting evacuation network and the computational complexity, we have developed the Evacuation Scheduling Algorithm (ESA) to expedite the solution process. ESA utilizes Dijkstra’s algorithm to find the evacuation paths and a greedy algorithm to find the maximum flow of each path and the schedule for executing the flow in each time interval. The computational complexity of ESA is discussed. Numerical experiments show a tremendous advantage of ESA over an exact algorithm in computation time.
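
ESA's path-finding stage builds on Dijkstra's shortest-path algorithm; for reference, a standard binary-heap implementation (generic, not the ESA-specific variant):

    import heapq

    def dijkstra(graph, source):
        """Shortest path lengths from `source`; graph: node -> [(nbr, weight)]."""
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue  # stale heap entry; a shorter path was already found
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    g = {"A": [("B", 2), ("C", 5)], "B": [("C", 1), ("D", 4)], "C": [("D", 1)]}
    print(dijkstra(g, "A"))  # {'A': 0.0, 'B': 2.0, 'C': 3.0, 'D': 4.0}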

Tabu search and lower bounds for a combined production–transportation problem

A. Condotta, S. Knust, D. Meier, and N. V. Shakhlevich. Computers & Operations Research 886-900. 2013.

In this paper we consider a combined production–transportation problem, in which n jobs have to be processed on a single machine at a production site before they are delivered to a customer. At the production stage, a release date is given for each job; at the transportation stage, job delivery should be completed no later than a given due date. The transportation is done by m identical vehicles with limited capacity. It takes a constant time to deliver a batch of jobs to the customer. The objective is to find a feasible schedule minimizing the maximum lateness. After formulating the considered problem as a mixed integer linear program, we propose different methods to calculate lower bounds. We then describe a tabu search algorithm that enumerates promising partial solutions for the production stage. Each partial solution is complemented with an optimal transportation schedule (calculated in polynomial time), yielding a coordinated solution to the combined production–transportation problem. Finally, we present the results of computational experiments on randomly generated data.
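
As an illustration of the metaheuristic framework, here is a generic tabu-search skeleton over job sequences with an adjacent-swap neighborhood, a fixed-tenure tabu list, and an aspiration rule. The paper's production-stage details and polynomial-time transportation completion are abstracted into a black-box evaluate() function, and every name and parameter here is illustrative.

    import random
    from collections import deque

    def tabu_search(n, evaluate, iters=500, tenure=10, seed=0):
        """Generic tabu search over job sequences; evaluate(seq) returns the
        objective to minimize (e.g., maximum lateness of the completed
        production-transportation schedule)."""
        rng = random.Random(seed)
        seq = list(range(n))
        rng.shuffle(seq)
        best, best_val = list(seq), evaluate(seq)
        tabu = deque(maxlen=tenure)  # recently used swap positions
        for _ in range(iters):
            moves = []
            for i in range(n - 1):   # adjacent-swap neighborhood
                cand = list(seq)
                cand[i], cand[i + 1] = cand[i + 1], cand[i]
                val = evaluate(cand)
                if i not in tabu or val < best_val:  # aspiration criterion
                    moves.append((val, i, cand))
            if not moves:
                continue
            val, i, seq = min(moves)  # best admissible neighbor
            tabu.append(i)
            if val < best_val:
                best, best_val = list(seq), val
        return best, best_val

    # Toy usage: unit-time jobs with due dates; objective = maximum lateness.
    due = [5, 2, 8, 1, 6]
    print(tabu_search(5, lambda s: max(p - due[j] for p, j in enumerate(s, 1))))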

Support vector machines for seizure detection in an animal model of chronic epilepsy

M. Nandan, S. S. Talathi, S. Myers, W. L. Ditto, P. P. Khargonekar, and P. R. Carney. Journal of Neural Engineering. 2010.

We compare the performance of three support vector machine (SVM) types: weighted SVM, one-class SVM and support vector data description (SVDD) for the application of seizure detection in an animal model of chronic epilepsy. Large EEG datasets (273 h and 91 h respectively, with a sampling rate of 1 kHz) from two groups of rats with chronic epilepsy were used in this study. For each of these EEG datasets, we extracted three energy-based seizure detection features: mean energy, mean curve length, and wavelet energy. Using these features we performed twofold cross-validation to obtain the performance statistics: sensitivity, specificity, and detection latency as a function of control parameters for the given SVM. Optimal control parameters for each SVM type that produced the best seizure detection statistics were then identified using two independent strategies. Performance of each SVM type is ranked based on the overall seizure detection performance through an optimality index. We found that SVDD not only performed better than the other SVM types in terms of the highest value of the mean optimality index metric (Ō) but also gave a more reliable performance across the two EEG datasets.
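
Two of the three features named above have simple time-domain definitions; a sketch in NumPy of mean energy and mean curve length over non-overlapping windows (wavelet energy would additionally require a wavelet decomposition, e.g. via PyWavelets, omitted here; window length and sampling rate are illustrative):

    import numpy as np

    def mean_energy(x):
        """Mean energy of a window: average squared amplitude."""
        x = np.asarray(x, dtype=float)
        return float(np.mean(x ** 2))

    def mean_curve_length(x):
        """Mean curve length: average absolute sample-to-sample difference."""
        x = np.asarray(x, dtype=float)
        return float(np.mean(np.abs(np.diff(x))))

    def windowed_features(signal, fs=1000, win_s=1.0):
        """One (energy, curve length) pair per non-overlapping window."""
        w = int(fs * win_s)
        return [(mean_energy(signal[i:i + w]), mean_curve_length(signal[i:i + w]))
                for i in range(0, len(signal) - w + 1, w)]

    rng = np.random.default_rng(0)
    print(windowed_features(rng.standard_normal(3000))[:2])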

Accumulated risk of body postures in assembly line balancing problem and modeling through a multi-criteria fuzzy-genetic algorithm

H. Rajabalipour Cheshmehgaz, H. Haron, F. Kazemipour, and M. I. Desa. Computers & Industrial Engineering 63 (2), 503–512. 2012.

Monotonous body postures during repetitive jobs contribute to the development of work-related musculoskeletal disorders (WMSDs) in assembly-line workers. Ergonomics specialists have proposed auxiliary posture diversity to deal with this lack of variety, especially for high-risk postures. Meanwhile, the Assembly Line Balancing (ALB) problem has long been recognized as the starting point for (re)configuring assembly lines by balancing their tasks among workstations. Conventional criteria such as cycle time and overall workload are often considered during the balancing. This paper presents a novel model of the ALB problem that incorporates assembly worker postures into the balancing. In addition to the conventional ALB criteria, a new criterion of posture diversity is defined and incorporated to enhance the model. The proposed model suggests configurations of assembly lines, via the balancing, in which the assigned workers have regular opportunities to change their body postures. To address uncertainties in the conventional criteria, fuzzy goal programming is used, and an appropriate genetic algorithm is developed to solve the model. Various computational tests are performed on the different models built from combinations of the three criteria mentioned above. Comparing the payoffs among the combinations, the results show that a well-balanced task allocation can be obtained through the proposed model.

Real time epileptic seizure onset detection using approximate extreme points SVM

M. Nandan, P. P. Khargonekar, and S. S. Talathi. ICML 2013 Workshop on Role of Machine Learning in Transforming Healthcare. 2013.

We present a novel algorithm for real-time epileptic seizure onset detection based on an approximate extreme points support vector machine. We demonstrate the utility of our method for this application through theoretical and experimental results. We evaluated our approach using the large CHB-MIT database (846 hours) of scalp EEG recordings of pediatric patients. Our method detected 93% of the seizures with a mean detection latency of 7.5 seconds and a false positive rate of 0.9 per 24 hours. We compared our method with an SVM-based seizure detector and found our method to be, on average, five times faster in training and two and a half times faster in classification. Our results indicate the suitability of our approach for use in closed-loop seizure controllers.

Scheduling patient appointments via multilevel template: A case study in chemotherapy

A. Condotta and N.V. Shakhlevich. Operations Research for Health Care 129-144. 2014.

This paper studies a multi-criteria optimization problem that appears in the context of booking chemotherapy appointments. The main feature of the model under study is the requirement to book for each patient multiple appointments that should follow a pre-specified multi-day pattern. Each appointment involves several nurse activities that should also follow a pre-specified intra-day pattern. The main objectives are to minimize patients’ waiting times and peaks in nurses’ workload for an outpatient clinic. Our solution approach is based on the concept of a multi-level template schedule that is generated for a set of artificial patients with typical treatment patterns. There are two stages in template generation: the multi-day stage, which fixes appointment dates for all artificial patients; and the intra-day stage, which fixes, for each day, appointment starting times and patient allocation to nurses. The running schedule is created by considering actual patients one by one as they arrive at the clinic. Booking appointments for each new patient is performed by assigning appropriate dates and times of the template schedule following the prescribed multi-day and intra-day patterns. An additional rescheduling procedure is used to re-optimize intra-day schedules on a treatment day or shortly beforehand. The key stages of the scheduling process are modeled as integer linear programs and solved using the CPLEX solver. We demonstrate the effectiveness of our approach through case-based scenarios derived from a real clinic and discuss the advantages that the multi-level template can bring.

Multi-objective Nurse Scheduling Models with patient workload and nurse preferences

G.J. Lim, A. Mobasher, and M. Cote. Management. November 2012.

The purpose of this paper is to develop a nurse scheduling model that simultaneously addresses a set of multiple (and oftentimes conflicting) objectives in determining an optimal nurse schedule. The objectives we consider are minimizing nurse labor costs, minimizing patient dissatisfaction, minimizing nurse idle time, and maximizing job satisfaction. We formulate a series of multi-objective binary integer programming models for the nurse scheduling problem in which both nurse shift preferences as a proxy for job satisfaction and patient workload as a proxy for patient dissatisfaction are considered in our models. A two-stage non-weighted goal programming solution method is provided to find an efficient solution that addresses the multiple objectives. Numerical results show that considering patient workload in the optimization models can make positive impacts in nurse scheduling by (1) improving nurse utilization while keeping higher nurse job satisfaction, and (2) minimizing unsatisfied patient workloads.

Daily scheduling of nurses in operating suites

A. Mobasher, G. Lim, J.F. Bard, and V. Jordan. IIE Transactions on Healthcare Systems Engineering Vol. 1, Issue 4, pp 232-246. December 2011.

This paper provides a new multi-objective integer programming model for the daily scheduling of nurses in operating suites. The model is designed to assign nurses to different surgery cases based on their specialties and competency levels, subject to a series of hard and soft constraints related to nurse satisfaction, idle time, overtime, and job changes during a shift. To find solutions, two methodologies were developed. The first is based on the idea of a solution pool, and the second is a variant of modified goal programming. The model and the solution procedures were validated using real data provided by the University of Texas MD Anderson Cancer Center in Houston, Texas. The results show that the two methodologies can produce schedules that satisfy all demand with 50% less overtime and 50% less idle time when benchmarked against current practice.

Fast SVM Training Using Approximate Extreme Points

M. Nandan, P. P. Khargonekar, and S. S. Talathi. Journal of Machine Learning Research. 2014.

Applications of non-linear kernel support vector machines (SVMs) to large data sets are seriously hampered by their excessive training time. We propose a modification, called the approximate extreme points support vector machine (AESVM), which is aimed at overcoming this burden. Our approach relies on conducting the SVM optimization over a carefully selected subset, called the representative set, of the training data set. We present analytical results that indicate the similarity of AESVM and SVM solutions. A linear time algorithm based on convex hulls and extreme points is used to compute the representative set in kernel space. Extensive computational experiments on nine data sets compared AESVM to other state-of-the-art SVM implementations. Our AESVM implementation was found to train much faster than the other methods, while its classification accuracy was similar to that of LIBSVM in all cases. In particular, for a seizure detection data set, AESVM training was almost 500 times faster than LIBSVM and LASVM and 20 times faster than CVM and BVM. AESVM also gave competitively fast classification times.
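The sketch below illustrates the core idea of training on a small representative set. As a stand-in for AESVM’s linear-time selection of approximate extreme points in kernel space, it uses the exact convex hull vertices of each class in a two-dimensional input space; the synthetic data and parameters are assumptions for illustration only.

```python
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.svm import SVC

# Two synthetic Gaussian classes; the hull vertices of each class serve
# as a crude stand-in for AESVM's representative set.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal((-2, 0), 1, (1000, 2)),
               rng.normal((+2, 0), 1, (1000, 2))])
y = np.repeat([0, 1], 1000)

rep_idx = np.concatenate(
    [np.flatnonzero(y == c)[ConvexHull(X[y == c]).vertices] for c in (0, 1)]
)

svm_full = SVC(kernel="rbf").fit(X, y)             # baseline on all points
svm_rep = SVC(kernel="rbf").fit(X[rep_idx], y[rep_idx])
print(len(rep_idx), svm_full.score(X, y), svm_rep.score(X, y))
```

On well-separated data like this, the representative set is only a few dozen points, yet the resulting classifier tends to score close to the full-data SVM; AESVM achieves an analogous reduction in kernel space with provable guarantees.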

A comparison of local search algorithms with population-based algorithms in hybrid flow shop scheduling problems with realistic characteristics

M.A. Bozorgirad and R. Logendran. The International Journal of Advanced Manufacturing Technology 1-17. 2015.

In this research, a bi-criteria group scheduling problem is investigated in a hybrid flow shop (HFS) environment in which the parallel machines are unrelated. The objective is to minimize a linear combination of the total weighted completion time, reflecting the interest of the producer, and the total weighted tardiness, reflecting the interests of the customers. The underlying assumption is that jobs are released into the system dynamically over time, and machine ready times are dynamic as well. Sequence-dependent setup times are required for changing the process between groups of jobs, and the run times of jobs are assumed to decrease as workers learn how to process similar jobs. A mixed-integer linear programming model is developed for the problem. However, since the problem is non-deterministic polynomial-time hard (NP-hard), it may not be solved to optimality within a reasonable time. This research comprehensively addresses the question of what type of meta-heuristic algorithm is more appropriate for solving these problems. In particular, local search algorithms are compared to population-based algorithms with respect to the permutation or non-permutation properties of the optimal solution. Three algorithms based on tabu search, as well as three algorithms based on simulated annealing, are developed to represent the local search algorithms. Two other algorithms are developed based on the genetic algorithm (GA) to exemplify the population-based algorithms. The results of a comprehensive experimental design reveal that the GA-based algorithms have greater potential for identifying better quality solutions in HFS scheduling problems than the local search algorithms.

Developing Tight Lower Bounds for Hybrid Flow Shop Scheduling Problems

M.A. Bozorgirad and R. Logendran. IIE Annual Conference Proceedings 341. 2014.

In this research, a lower bounding mechanism is proposed for a group scheduling problem in a hybrid flow shop environment. The objective function of the scheduling problem is to minimize a linear combination of the total weighted completion time and the total weighted tardiness of all jobs. The underlying assumptions include sequence-dependent setup times for changing the process between different groups of jobs, non-zero release times for jobs, stage-skipping allowance for jobs, and unrelated parallel machines in different stages. Since this problem is strongly NP-hard, non-exact algorithms are the main approaches for dealing with large-scale instances, but a tight lower bound is required to precisely evaluate the performance of such algorithms. We propose a lower bounding mechanism, based on the column generation algorithm, that is able to find lower bounds remarkably tighter than the bounds from CPLEX. To demonstrate this superiority, seven sample problems are tested with the proposed mechanism as well as CPLEX. The average gap between the lower bounds and upper bounds obtained from CPLEX for these sample problems is 216.5%, while the gap between the proposed lower bounds and the CPLEX upper bounds is reduced to a remarkably low 13.5%.

Bi-criteria group scheduling in hybrid flowshops

M.A. Bozorgirad and R. Logendran. International Journal of Production Economics 145 (2), 599-612. 2013.

We consider a group scheduling problem in a hybrid flowshop in which the parallel machines in one or more stages are unrelated and have different run times for the same job. The objective is to simultaneously decrease the producer’s cost by minimizing the work-in-process (WIP) inventory, or equivalently the total weighted completion time, and increase the customers’ satisfaction by minimizing the total weighted tardiness. Not all jobs and machines are necessarily ready at time zero; they can be released at different times during the scheduling period. The setup time required to switch between processing jobs from different families is considered to be sequence-dependent. A mixed-integer linear programming model is developed to mathematically represent this problem and obtain optimal solutions for small-size problems. Since the problem is strongly NP-hard, four efficient algorithms based on tabu search are proposed to find optimal/near-optimal solutions. The efficacies of these algorithms are compared by means of a comprehensive statistical analysis, and the best algorithm is identified. Furthermore, the efficiency and effectiveness of the proposed search algorithms are verified by comparing their results with optimal solutions obtained from CPLEX for small-size problems.
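As a worked example of this bi-criteria objective, the snippet below evaluates a convex combination of total weighted completion time and total weighted tardiness for one job sequence on a single machine. The three-job instance and the weight lam are invented, and the actual problem additionally involves groups, sequence-dependent setups, and unrelated parallel machines.

```python
# Evaluate lam * (total weighted completion time)
#        + (1 - lam) * (total weighted tardiness) for one sequence.
jobs = [  # (processing time p_j, due date d_j, weight w_j) -- invented
    (4, 6, 2),
    (2, 4, 1),
    (5, 12, 3),
]

def objective(seq, lam=0.5):
    t = wct = wtt = 0
    for j in seq:
        p, d, w = jobs[j]
        t += p                      # completion time C_j of job j
        wct += w * t                # accumulate weighted completion time
        wtt += w * max(0, t - d)    # tardiness T_j = max(0, C_j - d_j)
    return lam * wct + (1 - lam) * wtt

print(objective([1, 0, 2]))         # evaluate one candidate sequence
```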

Robust Design of North American Power Grid to Mitigate Cascading Failures

J.R. Piacenza, I.Y. Tumer, J.J. Fields, M.A. Bozorgirad, and C. Hoyle. ASME 2013 International Mechanical Engineering Congress and Exposition. 2013.

Catastrophic cascading system failures such as the Northeast blackout of August 14, 2003 highlight the vulnerability of the North American power grid and emphasize the need for research to mitigate failure events. The incorporation of robust design, that is, insensitivity of system performance to noise (or uncertainty) from both internal and external sources, into existing and future power grid design strategies can increase system reliability. This paper presents a high-level topological network approach to power grid robust optimization as a solution for designing against cascading system failure. A mathematical model was created representing a standard power grid network consisting of generation and demand nodes, as well as node connections based on actual topological transmission line relationships. Each node possesses unique power generation or demand attributes, and various network connection configurations are examined based on system demand requirements. In this model, failure events are initiated by the removal of a single network connection, and remaining loads are redistributed throughout the system. Cascading failure effects are captured when the existing network configuration cannot support the resulting demand load, and transmission line failures continue to propagate until the system again reaches a steady state based on the remaining nodes and connections. The primary goal of this research is to facilitate an understanding of design trade-offs between system robustness and performance objectives. In this research, robustness is defined as resilience to initiating faults, where a robust network has the ability to meet system generation requirements despite propagating network failures. The primary performance objectives are total system cost and the ability to satisfy network demand after a failure, while robustness is represented as the lack of variability in the amount of demand satisfied after a failure. By understanding network reactions to cascading failures, as well as the performance trade-offs required to mitigate these failures, reliability in power grid systems can be increased.

Hybrid flowshop scheduling problem with a bi-criteria objective and group technology assumption

M.A. Bozorgirad and R. Logendran. IIE Annual Conference Proceedings. 2013.

In this research, a group scheduling problem with a bi-criteria objective is addressed to minimize a linear combination of the total weighted completion time and the total weighted tardiness of the jobs. The motivation is that the former implicitly minimizes the work-in-process inventory, thus respecting the interest of the producer, while the latter clearly respects the interests of the customers. The underlying assumption is that the jobs are processed in a hybrid flowshop environment with unrelated parallel machines. These machines have different capabilities, meaning that not all of them are capable of processing all the jobs. In addition, the problem follows the group technology assumption, meaning that jobs in a group must be processed consecutively on a machine, with a sequence-dependent setup time when a changeover from one group to another is required. Since the problem is strongly NP-hard, two tabu search-based algorithms are developed to find optimal/near-optimal solutions for this problem. Algorithm 1 looks for non-permutation solutions, while algorithm 2 finds solutions similar to permutation schedules. Testing both algorithms on a sample of two thousand randomly generated problems and comparing the results with a paired t-test confirms that algorithm 2 outperforms algorithm 1.

Sequence-dependent group scheduling problem on unrelated-parallel machines

M.A. Bozorgirad and R. Logendran. Expert Systems with Applications 39 (10), 9021-9030. 2012.

In this research we address a sequence-dependent group scheduling problem on a set of unrelated parallel machines, where the run time of each job differs on different machines. To benefit both the producer and the customers, we minimize a linear combination of total weighted completion time and total weighted tardiness. Since the problem is shown to be NP-hard, meta-heuristic algorithms based on tabu search are developed to find optimal/near-optimal solutions. For some small yet complex problems, the results from these algorithms are compared to the optimal solutions found by CPLEX. In all of these problems, the tabu search algorithms found solutions at least as good as those of CPLEX in drastically shorter computational time, signifying their high degree of efficiency and efficacy.
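For readers who want the mechanics, here is a bare-bones tabu search skeleton in the spirit of these algorithms: a pairwise-swap neighborhood over a job permutation, a tenure-based tabu list, and an aspiration criterion. The toy single-machine weighted-tardiness cost function is an assumption for illustration; the published algorithms handle groups, unrelated parallel machines, and sequence-dependent setups.

```python
import itertools, random

JOBS = [(4, 6, 2), (2, 4, 1), (5, 12, 3), (3, 7, 2)]  # (p_j, d_j, w_j)

def cost(seq):
    t = total = 0
    for j in seq:
        p, d, w = JOBS[j]
        t += p                            # completion time of job j
        total += w * max(0, t - d)        # add weighted tardiness
    return total

def tabu_search(n, iters=200, tenure=3, seed=0):
    rng = random.Random(seed)
    cur = list(range(n))
    rng.shuffle(cur)
    best, best_cost = cur[:], cost(cur)
    tabu = {}                             # swap move -> iteration it expires
    for it in range(iters):
        moves = []
        for i, j in itertools.combinations(range(n), 2):
            nb = cur[:]
            nb[i], nb[j] = nb[j], nb[i]
            c = cost(nb)
            # admissible if not tabu, or if it beats the best (aspiration)
            if tabu.get((i, j), -1) <= it or c < best_cost:
                moves.append((c, (i, j), nb))
        c, move, cur = min(moves)         # take the best admissible neighbor
        tabu[move] = it + tenure          # forbid reversing the swap
        if c < best_cost:
            best, best_cost = cur[:], c
    return best, best_cost

# Tenure is kept below the neighborhood size so a move is always admissible.
print(tabu_search(len(JOBS)))
```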

An effective model of multiple multi-objective evolutionary algorithms with the assistance of regional multi-objective evolutionary algorithms: VIPMOEAs

H. Rajabalipour Cheshmehgaz, M. I. Desa, and Antoni Wibowo. Applied Soft Computing 13(5), 2863–2895. 2013.

Dividing the evolutionary search among multiple multi-objective evolutionary algorithms (MOEAs) is a recent advance in MOEA design, particularly for effective parallel and distributed MOEAs. However, most of these algorithms rely on a central (re)division step that limits their efficiency. This paper first proposes a local MOEA that searches a particular region of objective space using novel evolutionary selections. It effectively searches for Pareto fronts (PFs) inside a given polar-based region while also intelligently exploring nearby areas, and it is deliberately designed to redirect its search outside, but near, the region when the region contains no Pareto front. With this contribution, a novel island model is proposed that runs multiple instances of the local MOEA to improve a conventional MOEA (e.g., NSGA-II or MOEA/D) running on another island. To divide the search, a new division technique is designed to assign particular regions of objective space to the local MOEAs frequently and effectively. Meanwhile, the islands benefit from a sophisticated immigration strategy without any central (re)collection, (re)division, or (re)distribution. Results from three experiments confirm that the proposed island model mostly outperforms clustering MOEAs with a similar division technique and similar island models on the DTLZ benchmarks. The model is also evaluated on a real-world combinatorial problem, the flexible logistic network design problem, where it clearly outperforms a similar island model that uses a conventional MOEA (NSGA-II) on each island.

Effective local evolutionary searches distributed on an island model solving bi-objective optimization problems

H. Rajabalipour Cheshmehgaz, M. I. Desa, and Antoni Wibowo. Applied Intelligence 38(3), 331-356. 2013.

Using multiple local evolutionary searches, instead of a single overall search, has been an effective technique for solving multi-objective optimization problems (MOPs). With this technique, many parallel and distributed multi-objective evolutionary algorithms (dMOEAs) on different island models have been proposed to search for optimal solutions efficiently and effectively. These algorithms often use local MOEAs on their islands, where each local search is intended to find a part of the optimal solutions. The islands (and the local MOEAs), however, need to communicate with each other to preclude convergence to locally optimal solutions. Existing dMOEAs rely on a central, iterative process of subdividing a large-scale population into multiple subpopulations, which negatively affects their performance. In this paper, a new dMOEA with new local MOEAs and a new migration strategy is proposed. The objective space is first subdivided into a predefined number of polar-based regions, each assigned to a local MOEA to be explored and exploited. In addition, the central, iterative process is eliminated by the proposed migration strategy. The algorithms are tested on the standard ZDT bi-objective optimization test cases, and the results show that the new dMOEAs outperform existing distributed and parallel MOEAs in most cases.
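The polar-based division can be pictured as binning normalized objective vectors into angular sectors, each of which would be handed to its own local MOEA. The sketch below shows one plausible such binning for the bi-objective case; the min-max normalization and the uniform sector edges are simplifying assumptions, not the paper’s exact scheme.

```python
import numpy as np

def polar_regions(F, k):
    """F: (n, 2) array of objective vectors; k: number of polar regions."""
    lo, hi = F.min(axis=0), F.max(axis=0)
    G = (F - lo) / (hi - lo + 1e-12)            # scale into [0, 1]^2
    theta = np.arctan2(G[:, 1], G[:, 0])        # angle from the f1 axis
    edges = np.linspace(0.0, np.pi / 2, k + 1)  # k uniform angular sectors
    return np.clip(np.digitize(theta, edges) - 1, 0, k - 1)

F = np.random.default_rng(1).random((12, 2))    # made-up objective vectors
print(polar_regions(F, 4))                      # sector index per solution
```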

The review of multiple evolutionary searches and multi-objective evolutionary algorithms

H. Rajabalipour Cheshmehgaz, H. Haron, and A. Sharifi. Artificial Intelligence Review 43(3), 311-343. 2015.

Over the past decade, subdividing an evolutionary search into multiple local evolutionary searches has been identified as an effective method for finding optimal solutions of multi-objective optimization problems (MOPs). Existing multi-objective evolutionary algorithms that benefit from multiple local searches (multiple-MOEAs, or MMOEAs) use different dividing methods and/or collaboration (information-sharing) strategies between the created divisions, and their local evolutionary searches are implicitly or explicitly guided toward a part of the globally optimal solutions instead of converging to locally optimal ones in some divisions. This review surveys the dividing methods and collaboration strategies and discusses their advantages and disadvantages.

A Novel Job Rotation Schedule Model Regarding Posture Variety Solving by a Cellular Genetic Algorithm

H. Rajabalipour Cheshmehgaz and H. Haron. Journal of Computing 3(6), 67-75. 2011.

Job rotation is a well-known method often used to reduce monotonous workloads on workers with repetitive workstation-based jobs. Changes in a worker’s body posture can help reduce the monotony, particularly when there is little or no external force exertion. The purpose of this research is to develop a method that incorporates posture variety, individually for each particular body area, into the rotation. This method can increase the possibility of overall posture variety during work hours or from shift to shift. To this end, fuzzy dissimilarity magnitudes between two jobs are defined based on linguistic variables and then used to propose new criteria. According to the criteria, an integer programming model for the rotation is developed. Owing to the large search space in which a very good (approximately optimal) solution must be found, a conventional genetic algorithm and a customized cellular genetic algorithm are employed and compared. The algorithms are examined on a simplified test case with six different assembly jobs (performing assigned tasks repetitively), and the results indicate that the cellular genetic algorithm can efficiently find better job rotation schedules that satisfy the criteria.

Minimum Vertex Cover Problem for Interdependent Networks with Cascading Failures

A. Veremyev, A. Sorokin, V. Boginski, and E.L. Pasiliao. European Journal of Operational Research 232, 499-511. 2014.

This paper defines and analyzes a generalization of the classical minimum vertex cover problem to the case of two-layer interdependent networks with cascading node failures that can be caused by two common types of interdependence. Previous studies on interdependent networks mainly addressed the issues of cascading failures from a numerical simulations perspective, whereas this paper proposes an exact optimization-based approach for identifying a minimum-cardinality set of nodes whose deletion would effectively disable both network layers through cascading failure mechanisms. We analyze the computational complexity and linear 0–1 formulations of the defined problems, as well as prove an LP approximation ratio result that generalizes the well-known 2-approximation for the classical minimum vertex cover problem. In addition, we introduce the concept of a “depth of cascade” (i.e., the maximum possible length of a sequence of cascading failures for a given interdependent network) and show that, for any problem instance, this parameter can be explicitly derived via a polynomial-time procedure.
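For context, the well-known 2-approximation for the classical (single-layer) minimum vertex cover that the paper generalizes can be sketched in a few lines: take both endpoints of every edge in a greedily built maximal matching. The edge list below is an invented example graph.

```python
def vertex_cover_2approx(edges):
    cover, matched = set(), set()
    for u, v in edges:              # greedily build a maximal matching
        if u not in matched and v not in matched:
            matched |= {u, v}       # edge (u, v) joins the matching
            cover |= {u, v}         # both endpoints enter the cover
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)]
print(vertex_cover_2approx(edges))  # {1, 2, 3, 4}; optimum here is {2, 4}
```

Since any cover must contain at least one endpoint of each matched edge, the returned cover is at most twice the optimum; the paper extends this style of guarantee to the two-layer, cascading-failure setting via an LP argument.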

Topology Design for On-Demand Dual-Path Routing in Wireless Networks

M. Carvalho, A. Sorokin, V. Boginski, and B. Balasundaram. Optimization Letters 7, 695-707. 2013.

One way to achieve reliability with low latency is through multi-path routing and transport protocols that build redundant delivery channels (or data paths) to reduce end-to-end packet losses and retransmissions. However, the applicability and effectiveness of such protocols are limited by the topological constraints of the underlying communication infrastructure: multiple data delivery paths can only be constructed over networks capable of supporting multiple paths. In mission-critical wireless networks, the underlying network topology is directly affected by terrain, location, and environmental interference. However, the settings of the wireless radios at each node can be configured to compensate for these effects and support multiple paths. In this work we investigate optimization models for topology designs that enable end-to-end dual-path support in a distributed wireless sensor network. We consider the case of a fixed sensor network with isotropic antennas, in which the control variable for topology management is the transmission power of the network nodes. For optimization modeling, the network metrics of relevance are coverage, robustness, and power utilization. The optimization models proposed in this work eliminate some of the typical assumptions made in the pertinent network design literature that are too strong in this application context.

Trafforithm: A Traffic-aware Shortest Path Algorithm in Real Road Networks with Traffic Influence Factors

Lin Qi and Markus Schneider. 1st Int. Conference on Geographical Information Systems Theory, Applications and Management (GISTAM) 105-112. 2015.

The shortest path computation between two given locations in a road network is an important problem that finds applications in a wide range of fields. There has been a lot of research effort targeting the preciseness and performance of finding shortest paths in road networks. However, few approaches have really taken into account the influence of traffic factors such as traffic lights, road conditions, traffic jams, and turning costs. In other words, existing approaches are based purely on the topology of the network and ignore the multiple factors in a real road network that impact the accuracy of the algorithm. The contribution of our paper is twofold. First, we present a generic two-layered framework for moving objects in road network environments and demonstrate the important role of traffic factors in path finding and route planning. Second, we develop an efficient parallel shortest path algorithm for road networks that takes traffic influence factors into consideration. Detailed analysis shows that our parallel TRAFFic-aware shortest path algORITHM (Trafforithm) is accurate and practical.
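A minimal way to picture “traffic-aware” costs is to fold the traffic factors into the edge weights of an otherwise standard shortest-path search. The sketch below runs plain Dijkstra with each edge cost modeled as base travel time times a congestion multiplier plus a fixed traffic-light delay; these cost terms and the tiny graph are illustrative assumptions, and Trafforithm itself is a parallel algorithm over a richer two-layered model.

```python
import heapq

def traffic_dijkstra(adj, src, dst):
    """adj: node -> list of (neighbor, base_time, congestion, light_delay)."""
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry
        for v, base, congestion, light in adj.get(u, []):
            nd = d + base * congestion + light   # traffic-adjusted edge cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

adj = {"A": [("B", 4, 1.5, 0.5), ("C", 7, 1.0, 0.0)],
       "B": [("C", 2, 2.0, 0.3)]}
print(traffic_dijkstra(adj, "A", "C"))    # 7.0: the congested detour loses
```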

Realtime Response of Shortest Path Computation

Lin Qi and Markus Schneider. 7th ACM SIGSPATIAL IWCTS 41-46. 2014.

Computing the shortest path between two locations in a network is an important and fundamental problem that finds applications in a wide range of fields. This problem has attracted considerable research interest and led to a plethora of algorithms. However, existing approaches have two main drawbacks: the complete path must be computed before movement begins, and re-processing is required when a node failure occurs. In this paper, two novel algorithms, RSP (Realtime Shortest Path) and RSP+ (Realtime Shortest Path Plus), are proposed to handle both shortcomings. We perform network pre-processing to ensure a constant-time response when retrieving the shortest route from an arbitrary node to an important set of destinations. RSP+ further divides the complete path into smaller partial paths, which can then be computed in parallel. To handle continuous changes in the network, such as traffic jams and road construction that block certain paths, a fast recovery method that efficiently finds the best alternative route is integrated into RSP+. Empirical studies have shown that RSP+ can achieve an average query processing time of 0.8 microseconds, and the effectiveness of the recovery mechanism demonstrates that alternative routes can be obtained to avoid unavailable areas.
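The pre-processing idea behind a constant-time response can be sketched as one backward search per important destination, producing a next-hop table from which any node’s route is read off hop by hop with no path computation at query time. The unweighted graph and the names below are illustrative assumptions; RSP+’s parallel partial paths and failure recovery are not modeled here.

```python
from collections import deque

def next_hop_table(adj, dest):
    """adj: node -> list of neighbors (assumed symmetric); BFS from dest."""
    hop, q = {dest: None}, deque([dest])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in hop:
                hop[v] = u                # first hop from v toward dest
                q.append(v)
    return hop

adj = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B", "D"], "D": ["B", "C"]}
table = next_hop_table(adj, "D")          # one-time pre-processing

node, route = "A", ["A"]
while table[node] is not None:            # constant work per hop at query time
    node = table[node]
    route.append(node)
print(route)                              # ['A', 'B', 'D']
```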

MONET: Modeling and Querying Moving Objects in Spatial Networks

Lin Qi and Markus Schneider. 3rd ACM SIGSPATIAL IWGS 48-57. 2012.

Data about moving objects is being collected in many application domains with the help of sensor networks and GPS-enabled devices. In most cases, the moving objects are not free to move; they are usually restricted by spatial constraints such as spatial networks. Spatial networks are ubiquitous and have been widely used in transportation, traffic planning, and navigation, as well as in geographical information system (GIS) applications. In most scenarios, moving objects such as vehicles move along predefined spatial networks like transportation networks. Unfortunately, the concepts for modeling and querying objects in unconstrained spaces, like an outdoor space, cannot be transferred to constrained spaces, like a road network, due to the different features of the environments in which the spatial objects move. Further, modern positioning devices as well as mobile and sensor technology have led to large volumes of moving objects in spatial networks. Therefore, we need a database-friendly data model to explicitly model spatial networks and, more importantly, describe relative movements in these networks. In this paper, we propose a new two-layered data model called the MONET (Moving Objects in NETworks) model. The lower layer is a data model for spatial networks; it is the prerequisite for the upper layer, which represents moving objects in these networks. This layered model is well suited to formulating relationships between moving objects and a network in queries. A query language, called MONET QL (MONET Query Language), provides high-level operations on moving objects and allows a clear description of, and access to, moving objects in spatial networks.

Optimal relay node placement in delay constrained wireless sensor network design

Ashutosh Nigam and Yogesh K. Agarwal. European Journal of Operational Research 233(1), 220-233. 2014.

The Delay Constrained Relay Node Placement Problem (DCRNPP) frequently arises in wireless sensor network (WSN) design. In a WSN, sensor nodes are placed across a target geographical region to detect relevant signals, which are communicated to a central location, known as the base station, for further processing. The DCRNPP aims to place the minimum number of additional relay nodes at a subset of candidate relay node locations such that signals from the various sensor nodes can be communicated to the base station within a pre-specified delay bound. In this paper, we study the structure of the projection polyhedron of the problem and develop valid inequalities in the form of node-cut inequalities. We also derive conditions under which these inequalities are facet-defining for the projection polyhedron. We formulate a branch-and-cut algorithm, based upon the projection formulation, to solve the DCRNPP optimally. A Lagrangian relaxation based heuristic is used to generate a good initial solution that serves as the initial incumbent in the branch-and-cut approach. Computational results are reported on several randomly generated instances to demonstrate the efficacy of the proposed algorithm.

Ant colony optimization for index fund problem

Ashutosh Nigam and Yogesh K. Agarwal. Journal of Applied Operational Research 5(3), 96-104. 2013.

The Index Fund Problem (IFP) frequently arises in financial optimization, particularly in the field of portfolio management. The index fund strategy tries to replicate the movement of benchmark indices such as the S&P 500, NIFTY 50, and NASDAQ. There is empirical evidence to suggest that index funds outperform most actively managed stock portfolios. The IFP is defined as follows: given a set of n stocks in the market index, the objective is to select m

Parallel batch scheduling of equal-length jobs with release and due dates

A. Condotta, S. Knust, and N. V. Shakhlevich. Journal of Scheduling 463-477. 2010.

In this paper we study parallel batch scheduling problems with bounded batch capacity and equal-length jobs in single and parallel machine environments. It is shown that the feasibility problem 1|p-batch,b

Scheduling coupled-operation jobs with exact time-lags

A. Condotta and N.V. Shakhlevich. Discrete Applied Mathematics, 2370-2388. 2012.

Scheduling coupled-operation jobs with exact time-lags on a single machine has a wide range of applications. In this problem, each job consists of two operations with given processing times, which should be scheduled on a single machine observing a given time-lag. The general case of the problem, with arbitrary processing times of operations and arbitrary time-lags, is known to be NP-hard in the strong sense, and the problem remains NP-hard for many special cases. In order to develop a local search algorithm for the problem, we first explore two possible approaches for representing feasible solutions and their neighborhoods: maintaining a permutation of the first operations of the jobs, or maintaining a full permutation of all operations. The first representation appears to be unpromising since, as we prove, the problem of finding an optimal sequence of second operations for a fixed sequence of first operations is NP-hard in the strong sense even in the case of unit processing times. We elaborate the second approach by developing a tabu search heuristic based on efficient job re-insertion. Empirical evaluation demonstrates the superiority of the developed algorithm over earlier published algorithms.

Data Collection Maximization in Renewable Sensor Networks via Time-Slot Scheduling

Xiaojiang Ren, Weifa Liang, and Wenzheng Xu. IEEE Trans. Computers 64(7): 1870-1883, 2015.

In this paper we study data collection in an energy renewable sensor network for scenarios such as traffic monitoring on busy highways, where sensors are deployed along a predefined path (the highway) and a mobile sink travels along the path to collect data from one-hop sensors periodically. As sensors are powered by renewable energy sources, the time-varying characteristics of ambient energy sources pose great challenges in the design of efficient routing protocols for data collection in such networks. We first formulate a novel data collection maximization problem by adopting multi-rate data transmissions and performing transmission time slot scheduling, and show that the problem is NP-hard. We then devise an offline algorithm with a provable approximation ratio for the problem by exploiting its combinatorial properties, assuming that the harvested energy at each node is given and link communications in the network are reliable. We also extend the proposed algorithm, with minor modifications, to a general case of the problem where the harvested energy at each sensor is not known in advance and link communications are not reliable. Third, we develop a fast, scalable online distributed algorithm for the problem in realistic sensor networks in which neither the global knowledge of the network topology nor sensor profiles, such as sensor locations and their harvested energy profiles, are given. Furthermore, we consider a special case of the problem in which each node has only a fixed transmission power, for which we propose an exact solution. We finally conduct extensive simulation experiments to evaluate the performance of the proposed algorithms. Experimental results demonstrate that the proposed algorithms are efficient and the solutions obtained are within a constant factor of the optimum.

Quality-Aware Target Coverage in Energy Harvesting Sensor Networks

Xiaojiang Ren, Weifa Liang, and Wenzheng Xu. IEEE Trans. Emerging Topics Comput. 3(1): 8-21. 2015.

Sensing coverage is a fundamental problem in wireless sensor networks for event detection, environment monitoring, and surveillance purposes. In this paper, we study the sensing coverage problem in an energy harvesting sensor network deployed to monitor a set of targets over a given monitoring period, where sensors are powered by renewable energy sources and operate in duty-cycle mode. We first introduce a new coverage quality metric that measures coverage quality at two different time scales. We then formulate a novel coverage quality maximization problem that considers both sensing coverage quality and the connectivity of the network consisting of the active sensors and the base station. Due to the NP-hardness of the problem, we devise efficient centralized and distributed algorithms, assuming that the harvested-energy prediction at each sensor is accurate during the entire monitoring period. Otherwise, we propose an adaptive framework to deal with energy prediction fluctuations, under which we show that the proposed centralized and distributed algorithms are still applicable. We finally evaluate the performance of the proposed algorithms through simulation experiments. Experimental results demonstrate that the proposed solutions are promising.

Monitoring Quality Maximization through Fair Rate Allocation in Harvesting Sensor Networks

Weifa Liang, Xiaojiang Ren, Xiaohua Jia, and Xu Xu. IEEE Trans. Parallel Distrib. Syst. 24(9): 1827-1840. 2013.

In this paper, we consider an energy harvesting sensor network in which sensors are powered by renewable energy from their surroundings, such as solar or wind energy. We first formulate a novel monitoring quality maximization problem that aims to maximize the quality (rather than the quantity) of collected data by incorporating spatial data correlation among sensors. An optimization framework consisting of dynamic rate weight assignment, fair data rate allocation, and flow routing is proposed for the problem. To fairly allocate optimal data rates to sensors and efficiently route sensing data to the sink, we then introduce a weighted, fair data rate allocation and flow routing problem, subject to the energy budgets of sensors. Unlike most existing work, which formulated similar problems as linear programs (LPs) and solved the LPs directly, we develop fast approximation algorithms with provable approximation ratios by exploiting the combinatorial properties of the problem. A distributed implementation of the proposed algorithm is also developed. The key ingredients in the design of the algorithms are a dynamic rate weight assignment and a reduction technique that reduces the problem to a special maximum weighted concurrent flow problem in which all source nodes share a common destination. We finally conduct extensive simulation experiments to evaluate the performance of the proposed algorithm. The experimental results demonstrate that the proposed algorithm is very promising, and the solution to the weighted, fair data rate allocation and flow routing problem is within a constant factor of the optimum.

Maximizing charging throughput in rechargeable sensor networks

Xiaojiang Ren, Weifa Liang, and Wenzheng Xu. 23rd International Conference on Computer Communication and Networks (ICCCN 2014). 1-8.

Energy is one of the most critical optimization objectives in wireless sensor networks. Compared with renewable energy harvesting technology, wireless energy transfer based on magnetic resonant coupling is able to provide more reliable energy supplies for sensors in wireless rechargeable sensor networks. The adoption of wireless mobile chargers (mobile vehicles) to replenish sensors’ energy has recently attracted much attention from the research community. Most existing studies assume that the energy consumption rates of sensors over the entire network lifetime are fixed or given in advance, and no constraint is imposed on the mobile charger (e.g., its travel distance per tour). In this paper, we consider the dynamic sensing and transmission behaviors of sensors by providing a novel charging paradigm and proposing efficient sensor charging algorithms. Specifically, we first formulate a charging throughput maximization problem. Since the problem is NP-hard, we then devise an offline approximation algorithm and online heuristics for it. We finally conduct extensive simulation experiments to evaluate the performance of the proposed algorithms. Experimental results demonstrate that the proposed algorithms are efficient.

Exploiting mobility for quality-maximized data collection in energy harvesting sensor networks

Xiaojiang Ren and Weifa Liang. IEEE Annual International Symposium on Personal, Indoor, and Mobile Radio Communication (PIMRC 2014). 1274-1278.

With the advance of energy harvesting technology, more and more sensors are now powered by ambient energy. Energy harvesting sensor networks are a key step in paving the way for truly green systems that can operate perpetually and do not adversely impact the environment. In this paper we consider quality data collection in an energy harvesting sensor network by exploiting sink mobility. That is, we consider a mobile sink traveling along a to-be-found trajectory for data collection, subject to a specified tolerable delay constraint. We first formulate this optimization problem as a data quality maximization problem. Since the problem is NP-hard, we then devise a scalable heuristic solution, along with a distributed implementation of the proposed algorithm. We finally conduct extensive simulation experiments to evaluate the performance of the proposed algorithms. Experimental results demonstrate that the proposed algorithms are promising and very efficient.

Use of a Mobile Sink for Maximizing Data Collection in Energy Harvesting Sensor Networks

Xiaojiang Ren, Weifa Liang, and Wenzheng Xu. 42nd International Conference on Parallel Processing (ICPP 2013). 439-448.

In this paper we study data collection in an energy harvesting sensor network for traffic monitoring and surveillance purposes on busy highways, where sensors are densely deployed along a pre-defined path and a mobile sink travels along the path to collect data from one-hop sensors periodically. As the sensors are powered by renewable energy sources, the time-varying characteristics of energy harvesting pose great challenges in the design of efficient routing protocols for data collection in such networks. We first formulate a novel data collection maximization problem that deals with a multi-rate transmission mechanism and transmission time slot scheduling among the sensors. We then show the NP-hardness of the problem and devise an offline algorithm with a provable approximation ratio by exploiting the combinatorial properties of the problem, assuming that the global knowledge of the network topology and the profile of each sensor are given. We also develop a fast, scalable online distributed solution without the global knowledge assumption, which is more suitable for real distributed sensor networks. In addition, we consider a special case of the problem for which an optimal polynomial-time solution is given. We finally conduct extensive simulation experiments to evaluate the performance of the proposed algorithms. Experimental results demonstrate that the proposed algorithms are very efficient, and the solutions are within a constant factor of the optimum.

The use of a mobile sink for quality data collection in energy harvesting sensor networks

Xiaojiang Ren and Weifa Liang. 2013 IEEE Wireless Communications and Networking Conference (WCNC) 1145-1150.

In this paper we study data collection in an energy harvesting sensor network where sensors are deployed along a given path and a mobile sink travels along the path periodically for data collection. A typical application scenario is employing a mobile vehicle for traffic surveillance of a given highway. As the sensors in this network are powered by renewable energy sources, the time-varying characteristics of energy harvesting pose great challenges in the design of efficient routing protocols for data collection in such networks. We first formulate a novel optimization problem as a network utility maximization problem by incorporating a multi-rate communication mechanism between the sensors and the mobile sink, and show the NP-hardness of the problem. We then devise a novel centralized algorithm for it, assuming that global knowledge of the entire network is available, and also develop a distributed solution without the global knowledge assumption. We finally conduct extensive simulation experiments to evaluate the performance of the proposed algorithms. The experimental results demonstrate that the proposed algorithms are promising and very efficient.

Delay-tolerant data gathering in energy harvesting sensor networks with a mobile sink

Xiaojiang Ren and Weifa Liang. 2012 IEEE Global Communications Conference (GLOBECOM). 93-99.

In this paper we consider data collection in an energy harvesting sensor network with a mobile sink, in which the mobile sink travels along a trajectory for data collection subject to a specified tolerable delay constraint T. The problem is to find an optimal closed trajectory for the mobile sink, consisting of sojourn locations and the sojourn time at each location, such that the network throughput is maximized, assuming that the mobile sink can only collect data from one-hop sensors. We first show that the problem is NP-hard and then devise novel heuristic algorithms. We finally conduct extensive experiments to evaluate the performance of the proposed algorithms and investigate the impact of different parameters on performance. The experimental results demonstrate that the proposed algorithms are efficient. To the best of our knowledge, this is the first work of its kind on data collection for energy harvesting sensor networks with mobile sinks.