Today, many small aerodromes struggle with financial difficulties, and a large cost is air traffic control. Remote tower centers, which remotely provide air traffic services to aerodromes, can help reduce this cost. Each center may contain a number of remote tower modules, where each module is manned by a controller that can handle one or more aerodromes. In this paper we present the remote tower center concept and develop a model that optimizes the assignment of airports to the remote tower modules. Computational results for a possible scenario based on real data for Swedish airports are presented.
We present an application of Integer Programming to the design of arrival routes for aircraft in a Terminal Maneuvering Area (TMA). We generate operationally feasible merge trees of curvature-constrained routes, using two optimization criteria: (1) total length of the tree, and (2) distance flown along the tree paths. The output routes guarantee that the overall traffic pattern in the TMA can be monitored by air traffic controllers; in particular, we keep merge points for arriving aircraft well separated, and we exclude conflicts between arriving and departing aircraft. We demonstrate the feasibility of our method by experimenting with arrival routes for a runway at Arlanda airport in the Stockholm TMA. Our approach can easily be extended in several ways, e.g., to ensure that the routes avoid no-fly zones.
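The two optimization criteria can be evaluated directly for any candidate merge tree. The sketch below (a hypothetical three-node tree, not a route computed in the paper) computes both the total length of the tree and the total distance flown from each entry point to the runway:

```python
import math

def merge_tree_costs(parent, coords, leaves, root):
    """Two criteria for an arrival merge tree:
    (1) total length of the tree edges, and
    (2) total distance flown, i.e. the sum over entry points (leaves)
        of the path length to the runway (root).
    parent: dict node -> its successor towards the root."""
    edge = lambda u, v: math.dist(coords[u], coords[v])
    total_length = sum(edge(u, v) for u, v in parent.items())
    flown = 0.0
    for leaf in leaves:
        u = leaf
        while u != root:          # walk the merge tree to the runway
            flown += edge(u, parent[u])
            u = parent[u]
    return total_length, flown
```

Criterion (1) favors compact trees, while criterion (2) weights shared edges by how many arrival paths traverse them, which is why the two objectives generally yield different trees.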
This paper investigates algorithmic questions related to the possibility of managing UAV traffic with beacon-based navigation, which we dub BBTM - Beacon-Based Traffic Management. The specific problem addressed is: How to install the minimum number of beacons in a mountainous terrain to ensure connectivity among a given set of UAS terminals on the terrain? BBTM is relevant for low-cost UAVs operating in remote areas not on time-critical missions, and may also be used as a backup system for better-equipped UAS in case the precise positioning or control information is lost, spoofed or jammed. We give algorithms for the beacon tower placement and evaluate their performance both on synthetic and real-world terrain data; the experiments suggest that our solutions can be used to efficiently quantify costs of establishing direct-visibility routing networks for UAS management.
We answer the question initially posed by Arik Tamir at the Fourth NYU Computational Geometry Day (March, 1987): “Given a collection of compact sets, can one decide in polynomial time whether there exists a convex body whose boundary intersects every set in the collection?”
We prove that when the sets are segments in the plane, deciding existence of the convex stabber is NP-hard. The problem remains NP-hard if the sets are regular polygons. We also show that in 3D the stabbing problem is hard when the sets are balls. On the positive side, we give a polynomial-time algorithm to find a convex transversal of a maximum number of pairwise-disjoint segments in 2D if the vertices of the transversal are restricted to a given set of points. Our algorithm also finds a convex stabber of the maximum number of convex pseudodisks in a given set in the plane.
The stabbing problem is related to “convexity” of point sets measured as the minimum distance by which the points must be shifted in order to arrive in convex position; we give a PTAS to find the minimum shift in 2D, and a 2-approximation in any dimension. We also consider stabbing with vertices of a regular polygon – a problem closely related to approximate symmetry detection.
We show how to preprocess a polygonal domain with a fixed starting point s in order to answer efficiently the following queries: Given a point q, how should one move from s in order to see q as soon as possible? This query resembles the well-known shortest-path-to-a-point query, except that the latter asks for the fastest way to reach q, instead of seeing it. Our solution methods include a data structure for a different generalization of shortest-path-to-a-point queries, which may be of independent interest: to report efficiently a shortest path from s to a query segment in the domain.
With the ubiquitous adoption of smartphones and mobile devices, it is now common practice for one's location to be sensed, collected and likely shared through social platforms. While such data can be helpful for many applications, users are increasingly aware of the privacy issues in handling location and trajectory data. While some users may voluntarily share their location information (e.g., for receiving location-based services, or for crowdsourcing systems), their location information may lead to information leaks about the whereabouts of other users, through the co-location of events when two users are at the same location at the same time, and other side information, such as upper bounds on movement speed. It is therefore crucial to understand how much information one can derive about others' positions through the co-location of events and occasional GPS location leaks of some of the users. In this paper we formulate the problem of inferring locations of mobile agents, present theoretically proven bounds on the amount of information that could be leaked in this manner, study their geometric nature, and present algorithms matching these bounds. We show that even if a very weak set of assumptions is made on trajectory patterns, and users are not obliged to follow any reasonable patterns, one can infer very accurate estimates of users' locations even if they opt not to share them. Furthermore, this information can be obtained using almost linear-time algorithms, suggesting the practicality of the method even for huge volumes of data.
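The geometric core of such an inference can be illustrated with a minimal sketch, assuming only a speed bound and a list of leaked (position, time) pairs; the function below illustrates the disk-bound idea and is not the paper's algorithm:

```python
def colocation_radius_bound(leaks, t_event, v_max):
    """Bound on a co-located user's position uncertainty.

    If users A and B are co-located at time t_event, and A's GPS
    position leaked as (p, t), then with movement speed at most v_max
    both users must lie within v_max * |t_event - t| of p at the
    co-location time.  The tightest such disk over all known leaks
    bounds B's position, even if B never shared any location.
    leaks: list of ((x, y), t) pairs for the leaking user."""
    return min(v_max * abs(t_event - t) for _, t in leaks)
```

With several leaks, the candidate region is the intersection of the corresponding disks; the radius returned here is the simplest upper bound on its size.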
For a polygonal domain with h holes and a total of n vertices, we present algorithms that compute the L1 geodesic diameter in O(n^2 + h^4) time and the L1 geodesic center in O((n^4 + n^2 h^4) α(n)) time, where α(·) denotes the inverse Ackermann function. No algorithms were known for these problems before. For the Euclidean counterpart, the best algorithms compute the geodesic diameter in O(n^7.73) or O(n^7 (h + log n)) time, and compute the geodesic center in O(n^{12+ε}) time. Therefore, our algorithms are much faster than the algorithms for the Euclidean problems. Our algorithms are based on several interesting observations on L1 shortest paths in polygonal domains.
We study the minimum backlog problem (MBP). This online problem arises, e.g., in the context of sensor networks. We focus on two main variants of MBP. The discrete MBP is a 2-person game played on a graph G = (V, E). The player is initially located at a vertex of the graph. In each time step, the adversary pours a total of one unit of water into cups that are located on the vertices of the graph, arbitrarily distributing the water among the cups. The player then moves from her current vertex to an adjacent vertex and empties the cup at that vertex. The player's objective is to minimize the backlog, i.e., the maximum amount of water in any cup at any time. The geometric MBP is a continuous-time version of the MBP: the cups are points in the two-dimensional plane, the adversary pours water continuously at a constant rate, and the player moves in the plane with unit speed. Again, the player's objective is to minimize the backlog. We show that the competitive ratio of any algorithm for the MBP has a lower bound of Ω(D), where D is the diameter of the graph (for the discrete MBP) or the diameter of the point set (for the geometric MBP). Therefore we focus on determining a strategy for the player that guarantees a uniform upper bound on the absolute value of the backlog. For the absolute value of the backlog there is a trivial lower bound of Ω(D), and the deamortization analysis of Dietz and Sleator gives an upper bound of O(D log N) for N cups. Our main result is a tight upper bound for the geometric MBP: we show that there is a strategy for the player that guarantees a backlog of O(D), independently of the number of cups. We also study a localized version of the discrete MBP: the adversary has a location within the graph and must act locally (filling cups) with respect to his position, just as the player acts locally (emptying cups) with respect to her position. We prove that deciding the value of this game is PSPACE-hard.
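A toy simulation makes the discrete game concrete. The greedy player below (move to the fullest neighboring cup) is a hypothetical strategy for illustration only, not the O(D) strategy of the paper:

```python
def simulate_mbp(adj, start, steps, pour):
    """Play the discrete minimum backlog game.

    adj:   dict vertex -> list of adjacent vertices.
    pour:  adversary callback (step, cups) -> the vertex receiving the
           whole unit of water this step (a restricted adversary, for
           simplicity; the game allows arbitrary splitting).
    The greedy player moves to the adjacent vertex whose cup holds the
    most water and empties it.  Returns the backlog: the maximum cup
    content observed at any time."""
    cups = {v: 0.0 for v in adj}
    pos = start
    backlog = 0.0
    for step in range(steps):
        cups[pour(step, cups)] += 1.0                 # adversary pours
        backlog = max(backlog, max(cups.values()))
        pos = max(adj[pos], key=lambda v: cups[v])    # greedy move
        cups[pos] = 0.0                               # empty that cup
    return backlog
```

On a path graph of diameter D = 2 with an adversary that always pours into the far endpoint, the greedy player already sustains a backlog of 2, matching the Ω(D) intuition.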
This paper provides a mathematical method for airspace capacity estimation. It is motivated by the need to assess the impact of unmanned aircraft systems on low-altitude airspace operations. We define capacity as a minimum of metric-specific phase transition thresholds. The definition is flexible enough to accommodate a wide variety of metrics defined for the airspace and hence can be used to compare different unmanned traffic management system approaches. We provide a proof of concept using a metric based on the size of de-confliction problems. The probability of occurrence of large conflicts shows a phase transition as the traffic density is increased. The traffic density at phase transition, i.e., the metric-specific capacity measure, increases with decreasing minimum separation tolerance. Traffic management systems which allow for higher proximity between aircraft should therefore improve the airspace capacity. Further work must incorporate a wider range of metrics and sense-and-avoid algorithms for a more rigorous validation and application of our airspace capacity estimation method.
We describe an analytical process to determine how much UAS traffic is feasible. The process comprises a simulator and data processing tools, which we apply to the US San Francisco Bay Area and Norrköping, Sweden. The amount of UAS traffic is measured in flights per day and simulated up to 200,000 flights. A UAS traffic volume is feasible if specified metrics meet operational requirements with high probability and are stable, in the sense of being below thresholds observed for monotone properties in random geometric graphs. We focus on conflict cluster size and argue for it as a fundamental safety metric worthy of extraordinary consideration.
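Conflict cluster size, the metric argued for above, can be sketched as the largest connected component of a "conflict graph" over aircraft positions: two aircraft are in conflict when closer than the separation minimum. This minimal version uses plain pairwise distance checks and is not the paper's simulator:

```python
import math

def largest_conflict_cluster(positions, sep):
    """Size of the largest cluster of aircraft in pairwise conflict.
    Two aircraft conflict when their distance is below `sep`; clusters
    are connected components of the resulting conflict graph."""
    n = len(positions)
    adjacency = [[j for j in range(n) if j != i
                  and math.dist(positions[i], positions[j]) < sep]
                 for i in range(n)]
    seen, best = set(), 0
    for i in range(n):
        if i in seen:
            continue
        stack, size = [i], 0          # depth-first search from i
        seen.add(i)
        while stack:
            v = stack.pop()
            size += 1
            for w in adjacency[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        best = max(best, size)
    return best
```

Run over random positions at increasing density, this statistic exhibits exactly the kind of threshold behavior for monotone properties of random geometric graphs that the feasibility criterion refers to.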
The next generation of mobile networks, namely 5G, together with the Internet of Things (IoT), comes with a large number of delay-sensitive services. To meet their requirements, cloud services are migrating to the edge of the networks to reduce latency. The notion of fog computing, where the edge plays an active role in the execution of services, arises to meet these stringent requirements. It thus becomes highly important to address the problem of mapping service demands to infrastructure resource supply. This work addresses this problem while taking into account the randomness of resource availability in a fog infrastructure. We introduce an integer optimization formulation to minimize the total cost under a guarantee of service execution despite the uncertainty of resource availability. Our results illustrate the effect of various system parameters, such as the diversity of the infrastructure server set, the availability of different infrastructure servers in the set, and the probability of service completion required by each service.
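As a minimal sketch of the mapping idea (not the paper's integer formulation), one can brute-force the cheapest set of servers whose combined availability meets a required probability of service completion; the server costs and availabilities below are hypothetical:

```python
from itertools import combinations

def cheapest_placement(servers, p_req):
    """servers: list of (cost, availability) pairs.  Pick the cheapest
    subset such that the probability that at least one server in the
    subset is available (i.e., replicated execution succeeds) meets
    the required completion probability p_req."""
    best = None
    for k in range(1, len(servers) + 1):
        for subset in combinations(servers, k):
            fail = 1.0
            for _, p in subset:
                fail *= (1.0 - p)            # all replicas unavailable
            if 1.0 - fail >= p_req:
                cost = sum(c for c, _ in subset)
                if best is None or cost < best[0]:
                    best = (cost, subset)
    return best
```

Note how two cheap, moderately available servers can beat one expensive, highly available one; this is the cost/availability trade-off the optimization explores.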
In the relay placement problem, the input is a set of sensors and a number r >= 1, the communication range of a relay. In the one-tier version of the problem, the objective is to place a minimum number of relays so that between every pair of sensors there is a path through sensors and/or relays such that the consecutive vertices of the path are within distance r if both vertices are relays and within distance 1 otherwise. The two-tier version adds the restrictions that the path must go through relays, and not through sensors. We present a 3.11-approximation algorithm for the one-tier version and a polynomial-time approximation scheme (PTAS) for the two-tier version. We also show that the one-tier version admits no PTAS, assuming P not equal NP.
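The one-tier connectivity rule can be checked directly for a candidate relay placement. The union-find sketch below illustrates the distance rule (r between relays, 1 otherwise); it is a feasibility check only, not the 3.11-approximation algorithm:

```python
from itertools import combinations
import math

def one_tier_connected(sensors, relays, r):
    """One-tier rule: consecutive path vertices must be within
    distance r if both are relays, and within distance 1 otherwise.
    Union-find over all nodes; True iff all sensors are connected."""
    nodes = [(p, "sensor") for p in sensors] + [(p, "relay") for p in relays]
    parent = list(range(len(nodes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in combinations(range(len(nodes)), 2):
        (p, ti), (q, tj) = nodes[i], nodes[j]
        limit = r if ti == tj == "relay" else 1.0
        if math.dist(p, q) <= limit:
            parent[find(i)] = find(j)      # union connected nodes
    return len({find(i) for i in range(len(sensors))}) <= 1
```

The two-tier variant would additionally forbid sensor-sensor edges, since paths must route through relays.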
Considering potentials in the dual of a planar network has proved to be a powerful tool for computing planar maximum flows. In this paper we explore the use of potentials for giving algorithmic and combinatorial results on continuous flows in geometric domains -- a far-reaching generalization of discrete flows in unit-capacity planar networks.
A continuous flow in a polygonal domain is a divergence-free vector field whose magnitude at every point is bounded by a given constant -- the domain's permeability. The flow enters the domain through a source edge on the outer boundary of the domain, and leaves through a sink edge on the boundary. The value of the flow is the total flow amount that comes in through the source (the same amount comes out of the sink, due to the vanishing divergence). The cost of the flow is the total length of its streamlines. The flow is called monotone if its streamlines are x-monotone curves.
Our main result is an algorithm to find (an arbitrarily close approximation to) the minimum-cost monotone flow in a polygonal domain, by formulating the problem as a convex program with an interesting choice of variables: one variable per hole in the domain. Our approach is based on the following flow of ideas: flow is the gradient of a potential function; a potential function can be extended from free space to holes so that it is constant over each hole; instead of the potential function (a value per point in the domain) we can thus speak of a potential vector (a value per hole); a potential uniquely (up to homotopic equivalence) defines "thick" paths in the domain; the paths define a flow. We show that the flow cost is convex in the potential; this allows us to employ the ellipsoid method to find the min-cost flow, using the holes' potentials as variables. At each ellipsoid iteration the separation oracle is implemented by feasibility checks in the "critical graph" of the domain -- as we prove, the graph defines linear constraints on the potentials.
Using potentials and critical graphs we also prove MaxFlow/MinCut theorems for geometric flows -- both for monotone and for general ones. Formulating and proving the monotone version is a new result. The MaxFlow/MinCut theorem for non-monotone flows has been proved earlier by two different methods; the potentials technique provides a third proof.
In this paper, we study the robust and stochastic versions of the two-stage min-cut and shortest path problems introduced in Dhamdhere et al. (in How to pay, come what may: approximation algorithms for demand-robust covering problems. In: FOCS, pp 367–378, 2005), and give approximation algorithms with improved approximation factors. Specifically, we give a 2-approximation for the robust min-cut problem and a 4-approximation for the stochastic version. For the two-stage shortest path problem, we give a 3.39-approximation for the robust version and a 6.78-approximation for the stochastic version. Our results significantly improve the previous best approximation factors for the problems. In particular, we provide the first constant-factor approximation for the stochastic min-cut problem. Our algorithms are based on a guess-and-prune strategy that crucially exploits the nature of the robust and stochastic objective. In particular, we guess the worst-case second-stage cost and, based on the guess, select a subset of costly scenarios for the first-stage solution to address. The second-stage solution for any scenario is simply the min-cut (or shortest path) problem in the residual graph. The key contribution is to show that there is a near-optimal first-stage solution that completely satisfies the subset of costly scenarios that are selected by our procedure. While the guess-and-prune strategy is not directly applicable to the stochastic versions, we show that, using a novel LP formulation, we can adapt a guess-and-prune algorithm for the stochastic versions. Our algorithms based on the guess-and-prune strategy provide insights about the applicability of this approach for more general robust and stochastic versions of combinatorial problems.
Computing shortest paths in a polygonal domain is a classic problem in computational geometry. Efficient algorithms for computing such paths use the continuous Dijkstra paradigm [2], which not only allows one to find the shortest path between two points but also computes the "shortest path map" from a given source: a structure enabling efficient queries of shortest paths to points in the domain.
Increasingly crowded airspaces are drawing the attention of the aviation community, since diverse users try to make use of a common airspace. The challenge is to identify and implement technologies to effectively but also efficiently manage an integrated multi-aviation airspace. This paper briefly presents a comprehensive synopsis of efforts to unify air navigation regulations so that the airspace can be effectively shared by multiple aviation pilots and aircraft. It recalls proposals to enable aviation sectors to coexist in the same airspace. The focus of this paper is on insights into Knowledge Technologies (KTs) to facilitate procurement and management of a shared multi-aviation airspace. It discusses challenges and opportunities in implementing KT solutions to facilitate the integration of different aviation capabilities in a shared airspace. The analysis presented selectively explores software engineering for bottom-up/top-down integration, along with artificial intelligence traits such as knowledge representation. Concluding remarks and the way forward to foster KT solutions for the integration of multiple aviation sectors in a single airspace are also presented.
Remote Tower Service (RTS) is one of the technological and operational solutions delivered for deployment by the Single European Sky ATM Research (SESAR) Programme. This new concept fundamentally changes how operators provide Air Traffic Services, as it becomes possible to control several airports from a single remote center. In such settings an air traffic controller works at a so-called “multiple position” in the remote center, that is, he/she handles two or more airports from one Remote Tower Module (RTM), i.e., the controller working position. In this paper, we present an optimization framework for the traffic management at five Swedish airports that were chosen for remote operation using a Remote Tower Center designed to serve a number of airports. We highlight the problems experienced with real airport schedules, and present optimal assignments of the airports to the RTMs. We consider both scheduled traffic and special (non-scheduled) traffic at these five airports.
Remote Tower Service (RTS) is one of the technological and operational solutions delivered for deployment by the Single European Sky ATM Research (SESAR) Programme. This new concept fundamentally changes how operators provide Air Traffic Services, as it becomes possible to control several airports from a single remote center. In such settings an air traffic controller works at a so-called “multiple position” at the Remote Tower Center (RTC), which means that he/she can handle two or more airports from one Remote Tower Module (controller working position). In this paper, we present an optimization framework designed for automation of staff planning at the RTC. We highlight the problems experienced with real airport flight schedules, and present optimal shift assignments for five Swedish airports that were chosen for remote operation.
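A minimal version of the airport-to-module assignment can be brute-forced for small instances; the sketch below minimizes the peak number of simultaneous movements any single module must handle. The airports and per-slot demands are hypothetical, and the papers' frameworks handle far richer constraints (shift rules, special traffic):

```python
from itertools import product

def best_assignment(airports, demand, n_modules):
    """Exhaustively assign airports to remote tower modules so that the
    peak number of simultaneous movements handled by any one module
    (controller working position) is minimized.

    demand: dict airport -> list of movement counts per time slot.
    Returns (peak load, assignment tuple: airport index -> module)."""
    slots = len(next(iter(demand.values())))
    best = (float("inf"), None)
    for assign in product(range(n_modules), repeat=len(airports)):
        peak = 0
        for m in range(n_modules):
            for t in range(slots):
                load = sum(demand[a][t]
                           for a, g in zip(airports, assign) if g == m)
                peak = max(peak, load)
        best = min(best, (peak, assign))
    return best
```

The instructive case is airports whose traffic peaks at different times: grouping them onto the same module keeps every controller's load low, which is exactly what the optimization exploits.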
When a president is landing at a busy airport, the airspace around the airport closes for commercial traffic. We show how to schedule the presidential squadron so as to minimize its impact on scheduled civilian flights; to obtain an efficient solution we use a "rainbow" algorithm, recoloring aircraft on the fly as they are stored in a special type of forest. We also give a data structure to answer the following query efficiently: Given the president's ego (the requested duration of airspace closure), when would be the optimal time to close the airspace? Finally, we study the dual problem: Given the time when the airspace closure must start, what is the longest ego that can be tolerated without sacrificing the general traffic? We solve the problem by drawing a Christmas tree in a delay diagram; the tree allows one to solve also the query version of the problem.
In a storyline visualization, we visualize a collection of interacting characters (e.g., in a movie, play, etc.) by x-monotone curves that converge for each interaction, and diverge otherwise. Given a storyline with n characters, we show tight lower and upper bounds on the number of crossings required in any storyline visualization for a restricted case. In particular, we show that if (1) each meeting consists of exactly two characters and (2) the meetings can be modeled as a tree, then we can always find a storyline visualization with O(n log n) crossings. Furthermore, we show that there exist storylines in this restricted case that require Ω(n log n) crossings. Lastly, we show that, in the general case, minimizing the number of crossings in a storyline visualization is fixed-parameter tractable, when parameterized on the number of characters k. Our algorithm runs in time O(k!^2 k log k + k!^2 m), where m is the number of meetings.
Resolving topography of the inner core boundary (ICB) and the structure and composition of the nearby region is key to improving our understanding of solidification of the Earth's inner core. Observations of travel times and amplitudes of short-period seismic phases of PKiKP and PcP, reflected, respectively, off the inner and outer boundary of the liquid core, provide essential constraints on the properties of this region. We revisit heterogeneities of the ICB using a total of more than 1,300 new differential travel times and amplitude ratios of PKiKP and PcP measured at 3.2–35.2 degrees and reflected off the core's boundaries under Northeastern Asia and South America. We observe a statistically significant systematic bias between the measurements collected in the two spots. We carefully examine its origin in terms of contributions by the various Earth shells and find that most of the variance in PKiKP-PcP differential travel times measured above the epicentral distance of 16.5 degrees in Northeastern Asia can be accounted for by mantle corrections. We find a slight disparity of about 1–3 km between the outer core thickness under Asia and America; the ICB density jump under Northeastern Asia is about 0.3 g/cm³, which is three times as small as under South America. The findings are interpretable either as evidence for inner core hemispherical asymmetry, whereby crystallization dominates in the West and melting in the East (not vice versa), or in terms of two disconnected mosaic patches with contrasting properties.
A path or a polygonal domain is C-oriented if the orientations of its edges belong to a set of C given orientations; this is a generalization of the notable rectilinear case (C = 2). We study exact and approximation algorithms for minimum-link C-oriented paths and paths with unrestricted orientations, both in C-oriented and in general domains. Our two main algorithms are as follows: A subquadratic-time algorithm with a non-trivial approximation guarantee for general (unrestricted-orientation) minimum-link paths in general domains. An algorithm to find a minimum-link C-oriented path in a C-oriented domain. Our algorithm is simpler and more time-space efficient than the prior algorithm. We also obtain several related results: 3SUM-hardness of determining the link distance with unrestricted orientations (even in a rectilinear domain). An optimal algorithm for finding a minimum-link rectilinear path in a rectilinear domain. The algorithm and its analysis are simpler than the existing ones. An extension of our methods to find a C-oriented minimum-link path in a general (not necessarily C-oriented) domain. A more efficient algorithm to compute a 2-approximate C-oriented minimum-link path. A notion of "robust" paths. We show how minimum-link C-oriented paths approximate the robust paths with unrestricted orientations to within an additive error of 1.
We present a new algorithm for finding minimum-link rectilinear paths among h rectilinear obstacles with a total of n vertices in the plane. After the plane is triangulated, for any point s, our algorithm builds an O(n)-size data structure in O(n+h log h) time, such that given any query point t, we can compute a minimum-link rectilinear path from s to t in O(log n + k) time, where k is the number of edges of the path, and the query time is O(log n) if we only want to know the value k. The previously best algorithm solves the problem in O(n log n) time.
We present a new algorithm for finding minimum-link rectilinear paths among rectilinear obstacles in the plane. Given a triangulated rectilinear domain of h pairwise-disjoint rectilinear obstacles with a total of n vertices, our algorithm can find a minimum-link rectilinear path between any two points in O(n+hlogh) time. Further, within the same time our algorithm can build an O(n)-size data structure for any source point s, such that given any query point t, the number of edges of a minimum-link rectilinear path from s to t can be computed in O(logn) time and the actual path can be output in additional time linear in the number of the edges of the path. The previously best algorithms for the problems run in O(nlogn) time.
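On a unit grid, the notion of link distance can be illustrated with a 0-1 BFS over (cell, direction) states: continuing the current axis-parallel segment costs nothing, while a turn starts a new link. This grid sketch only illustrates link counting; it is not the O(n + h log h) algorithm above:

```python
from collections import deque

def min_link_rectilinear(free, start, goal):
    """Minimum number of links (axis-parallel segments) of a rectilinear
    path between grid cells, avoiding cells not in `free`.
    0-1 BFS: extending the current segment costs 0, turning costs 1."""
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    INF = float("inf")
    dist, dq = {}, deque()
    for d in range(4):                 # the first move opens link 1
        dist[(start, d)] = 1
        dq.append((start, d))
    while dq:
        cell, d = dq.popleft()
        cur = dist[(cell, d)]
        for nd in range(4):
            dx, dy = dirs[nd]
            nxt = (cell[0] + dx, cell[1] + dy)
            if nxt not in free:
                continue
            cost = cur + (0 if nd == d else 1)   # turn => new link
            if cost < dist.get((nxt, nd), INF):
                dist[(nxt, nd)] = cost
                if nd == d:
                    dq.appendleft((nxt, nd))     # 0-cost edge
                else:
                    dq.append((nxt, nd))         # 1-cost edge
    return min(dist.get((goal, d), INF) for d in range(4))
```

The state space is four copies of the grid, one per direction, which is the standard way to make bend-minimizing paths amenable to shortest-path machinery.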
We present a new method for estimation of seismic coda shape. It falls into the same class of methods as non-parametric shape reconstruction with the use of neural network techniques, where data are split into training and validation data sets. We particularly pursue the well-known problem of image reconstruction, formulated in this case as shape isolation in the presence of broadly defined noise. This combined approach is enabled by an intrinsic feature of a seismogram, which can be divided objectively into pre-signal seismic noise lacking the target shape, and the remainder that contains scattered waveforms compounding the coda shape. In short, we separately apply the shape restoration procedure to the pre-signal seismic noise and to the event record, which provides successful delineation of the coda shape in the form of a smooth, almost non-oscillating function of time. The new algorithm uses a recently developed generalization of the classical computational-geometry tool of alpha-shape. The generalization essentially yields robust shape estimation by locally ignoring a number of points treated as extreme values, noise or non-relevant data. Our algorithm is conceptually simple and enables the desired or pre-determined level of shape detail, constrainable by arbitrary data-fit criteria. The proposed tool for coda shape delineation provides an alternative to moving averaging and/or other smoothing techniques frequently used for this purpose. The new algorithm is illustrated with an application to the problem of estimating the coda duration after a local event. The obtained relation coefficient between coda duration and epicentral distance is consistent with earlier findings in the region of interest.
Bloom filter (BF) based forwarding was proposed recently in several protocol alternatives to IP multicast. Some of these protocols avoid state in intermediate routers and leave the burden of scalability management to the multicast source and end-hosts. Still, the existing BF-based protocols have scalability limitations and require explicit network management as well as non-trivial functionality from the network components. In this work we address the scalability limitations of the BF-based forwarding protocols by partitioning end-hosts into clusters. We propose several algorithms to do the partitioning so as to decrease the overall traffic in the network. We evaluate our algorithms on a real Internet topology, demonstrating the ability of the proposed design to save up to 70% of traffic volume in the large-scale topology for big groups of subscribers, and up to 30% for small groups.
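For readers unfamiliar with BF-based forwarding, a toy Bloom filter shows the underlying mechanism: link IDs of a delivery tree are inserted into an in-packet bit field, and a router forwards on every link whose ID tests positive. False positives translate into extra traffic, which is the overhead the clustering above targets. The sketch is a generic Bloom filter, not any specific protocol's encoding:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over an m-bit field with k hash probes."""

    def __init__(self, m_bits=256, k_hashes=4):
        self.m, self.k, self.bits = m_bits, k_hashes, 0

    def _positions(self, item):
        # derive k bit positions from salted SHA-256 digests
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        # may return True for items never added (false positive)
        return all(self.bits >> p & 1 for p in self._positions(item))
```

Inserted items always test positive, while the false positive rate grows with the number of inserted link IDs; limiting that number per filter is one reason to partition subscribers into clusters.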
We propose a novel approach to distance-based spatial clustering and contribute a heuristic computation of input parameters for guiding users in the search for interesting cluster constellations. We thereby combine computational geometry with interactive visualization in one coherent framework. Our approach entails displaying the results of the heuristics to users, as shown in Figure 1, providing a setting from which to start the exploration and data analysis. Additional interaction capabilities with visual feedback are available for exploring further clustering options, and the approach is able to cope with noise in the data. We evaluate and show the benefits of our approach on a sophisticated artificial dataset and demonstrate its usefulness on real-world data.
In this paper, we study strategies for allocating and managing friendly jammers, so as to create virtual barriers that would prevent hostile eavesdroppers from tapping sensitive wireless communication. Our scheme precludes the use of any encryption technique. Applications include domains such as (i) protecting the privacy of storage locations where RFID tags are used for item identification, (ii) secure reading of RFID tags embedded in credit cards, (iii) protecting data transmitted through wireless networks, sensor networks, etc. By carefully managing jammers to produce noise, we show how to reduce the SINR of eavesdroppers to below a threshold for successful reception, without jeopardizing network performance. We present algorithms targeted towards optimizing power allocation and number of jammers needed in several settings. Experimental simulations back up our results.
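The jamming condition can be sketched with a simple 1/d^α path-loss model (an assumption for illustration; the paper's propagation model may differ): a jammer placed near an eavesdropper drives its SINR below the reception threshold while a legitimate receiver close to the transmitter keeps a high SINR:

```python
def received(power, src, dst, alpha=2.0):
    """Received power under a simple 1/d^alpha path-loss model."""
    d2 = (src[0] - dst[0]) ** 2 + (src[1] - dst[1]) ** 2
    return power / max(d2 ** (alpha / 2.0), 1e-12)

def sinr(tx, rx, jammers, p_tx=1.0, p_jam=1.0, noise=1e-6):
    """Signal-to-interference-plus-noise ratio at rx: the jammers'
    emissions add to the ambient noise in the denominator."""
    interference = sum(received(p_jam, j, rx) for j in jammers)
    return received(p_tx, tx, rx) / (noise + interference)
```

Since jamming power falls off with distance just like the signal does, placement is the key lever: the same jammer budget protects against an eavesdropper only if the jammers sit between the eavesdropper and the network, which is what the allocation algorithms optimize.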
We present a MIP-based airspace sectorization framework for Terminal Maneuvering Areas that can enforce convex sectors. The approach integrates an airspace complexity representation, and the resulting sectorizations have a balanced task load. We present results for the Stockholm TMA, and compare our results to convex sectorizations obtained by enumerating all possible topologies for a given number of sectors.
The PowerPoint presentation reviews a sectorization method that balances sector task load and can be extended to enforce convex sectors, with results for the Stockholm TMA. It also compares the results to convex sectorizations obtained by enumerating all possible topologies for a given number of sectors, offering a highly flexible approach and a fine-grained view of the TMA.
We investigate a possible scheme for management of conflicts among autonomous unmanned aerial vehicles (UAVs) in high-density very low level (VLL) uncontrolled airspace. The drones are modeled as disks of a given radius, moving along prescribed trajectories planned without any centralized coordination. Thus, during the motion, the disks may potentially come into contact, which represents loss of separation between the drones. Two overlapping disks get enclosed by a larger disk serving as the protected zone for avoidance maneuvers of all the drones inside it. When the conflict is gone, the disk is deactivated and the UAVs continue towards their destinations. We simulate traffic demand and the evolution of the de-confliction zones over a geographic area and present statistics associated with the functioning of the system with and without ground delay to avoid takeoffs into conflicts. The scheme shows promise and is a good approach to explore further in future work.
The plenary talk at DASC 2016 by Dr. Parimal Kopardekar, the Principal Investigator of the NASA UTM program, highlighted understanding the role of volume, noise and spectrum considerations in airspace demand-capacity modeling as the three requests from UTM developers to the avionics research community [1]. This paper proposes initial answers to all three requests, for the case of unmanned aerial vehicles (UAVs) operating in low-altitude uncontrolled airspace above populated areas: we estimate airspace capacity under several metrics centered on traffic volume manageability, drone noise pollution and spectrum demand. Our work aids in bridging regulators and the industry, by providing policy makers with decision support tools which help to quantify the technological requirements which manufacturers must follow in order to ensure seamless operation of small unmanned aerial systems (sUAS) in an urban airspace.
For a polygonal domain with h holes and a total of n vertices, we present algorithms that compute the L1 geodesic diameter in O(n^2 + h^4) time and the L1 geodesic center in O((n^4 + n^2 h^4) α(n)) time, respectively, where α(·) denotes the inverse Ackermann function. No algorithms were known for these problems before. For the Euclidean counterpart, the best algorithms compute the geodesic diameter in O(n^7.73) or O(n^7 (h + log n)) time, and compute the geodesic center in O(n^{12+ε}) time. Therefore, our algorithms are significantly faster than the algorithms for the Euclidean problems. Our algorithms are based on several interesting observations on L1 shortest paths in polygonal domains.