liu.se: Search for publications in DiVA
1 - 50 of 93
  • 1.
    Almquist, Mathias
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Almquist, Viktor
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Krishnamoorthi, Vengatanathan
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Eager, Derek
    Univ Saskatchewan, Canada.
    The Prefetch Aggressiveness Tradeoff in 360 degrees Video Streaming, 2018. In: Proceedings of the 9th ACM Multimedia Systems Conference (MMSys18), Association for Computing Machinery, 2018, p. 258-269. Conference paper (Refereed)
    Abstract [en]

    With 360 degrees video, only a limited fraction of the full view is displayed at each point in time. This has prompted the design of streaming delivery techniques that allow alternative playback qualities to be delivered for each candidate viewing direction. However, while prefetching based on the user's expected viewing direction is best done close to playback deadlines, large buffers are needed to protect against shortfalls in future available bandwidth. This results in conflicting goals and an important prefetch aggressiveness tradeoff problem regarding how far ahead in time from the current playpoint prefetching should be done. This paper presents the first characterization of this tradeoff. The main contributions include an empirical characterization of head movement behavior based on data from viewing sessions of four different categories of 360 degrees video, an optimization-based comparison of the prefetch aggressiveness tradeoffs seen for these video categories, and a data-driven discussion of further optimizations, which include a novel system design that allows both tradeoff objectives to be targeted simultaneously. By qualitatively and quantitatively analyzing the above tradeoffs, we provide insights into how to best design tomorrow's delivery systems for 360 degrees videos, allowing content providers to reduce bandwidth costs and improve users' playback experiences.

  • 2. Arlitt, Martin
    et al.
    Carlsson, Niklas
    University of Calgary.
    Leveraging Organizational Etiquette to Improve Internet Security, 2010. In: Proc. IEEE International Conference on Computer Communication Networks (ICCCN ’10), IEEE, 2010, p. 1-6. Conference paper (Refereed)
  • 3.
    Arlitt, Martin
    et al.
    HP Labs.
    Carlsson, Niklas
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Gill, Phillipa
    University of Toronto.
    Mahanti, Aniket
    University of Calgary.
    Williamson, Carey
    University of Calgary.
    Characterizing Intelligence Gathering and Control on an Edge Network, 2011. In: ACM Transactions on Internet Technology, ISSN 1533-5399, E-ISSN 1557-6051, Vol. 11, no 1. Article in journal (Refereed)
    Abstract [en]

    There is a continuous struggle for control of resources at every organization that is connected to the Internet. The local organization wishes to use its resources to achieve strategic goals. Some external entities seek direct control of these resources, for purposes such as spamming or launching denial-of-service attacks. Other external entities seek indirect control of assets (e.g., users, finances), but provide services in exchange for them.

    Using a year-long trace from an edge network, we examine what various external organizations know about one organization. We compare the types of information exposed by or to external organizations using either active (reconnaissance) or passive (surveillance) techniques. We also explore the direct and indirect control external entities have on local IT resources.

  • 4.
    Arlitt, Martin
    et al.
    HP Labs.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques.
    Hedge, Nidhi
    Technicolor.
    Wierman, Adam
    California Institute of Technology.
    ACM SIGMETRICS Performance Evaluation Review, Volume 40, Issue 3, December 2012: Special issue on the 2012 GreenMetrics workshop, 2013. Conference proceedings (editor) (Refereed)
  • 5. Arlitt, Martin
    et al.
    Carlsson, Niklas
    Rolia, Jerry
    GreenMetrics '09 Workshop, Seattle, WA, June 2009: in conjunction with ACM SIGMETRICS/Performance '09 (Proceedings appeared in ACM Performance Evaluation Review (PER), Special Issue on the 2009 GreenMetrics Workshop, 37, 4 (Mar. 2010)), 2009. Conference proceedings (editor) (Other academic)
  • 6. Arlitt, Martin
    et al.
    Carlsson, Niklas
    Rolia, Jerry
    GreenMetrics '10 Workshop: in conjunction with ACM SIGMETRICS, New York, NY, June 2010 (Proceedings appeared in ACM Performance Evaluation Review (PER), Special Issue on the 2010 GreenMetrics Workshop, 38, 3 (Dec. 2010)), 2010. Conference proceedings (editor) (Other academic)
  • 7. Arlitt, Martin
    et al.
    Carlsson, Niklas
    Rolia, Jerry
    Proceedings of the Third GreenMetrics '11 Workshop, in conjunction with (and sponsored by) ACM SIGMETRICS: ACM Performance Evaluation Review (PER), Special Issue on the 2011 GreenMetrics Workshop, Volume 39, Issue 3, December 2011, 2011. Conference proceedings (editor) (Refereed)
  • 8.
    Arlitt, Martin
    et al.
    HP Labs; University of Calgary, Canada.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Williamson, Carey
    University of Calgary, Canada.
    Rolia, Jerry
    HP Labs.
    Passive Crowd-based Monitoring of World Wide Web Infrastructure and its Performance, 2012. In: Proc. IEEE International Conference on Communications (ICC 2012), IEEE, 2012, p. 2689-2694. Conference paper (Refereed)
    Abstract [en]

    The World Wide Web and the services it provides are continually evolving. Even for a single time instant, it is a complex task to methodologically determine the infrastructure over which these services are provided and the corresponding effect on user perceived performance. For such tasks, researchers typically rely on active measurements or large numbers of volunteer users. In this paper, we consider an alternative approach, which we refer to as passive crowd-based monitoring. More specifically, we use passively collected proxy logs from a global enterprise to observe differences in the quality of service (QoS) experienced by users on different continents. We also show how this technique can measure properties of the underlying infrastructures of different Web content providers. While some of these properties have been observed using active measurements, we are the first to show that many of these properties (such as location of servers) can be obtained using passive measurements of actual user activity. Passive crowd-based monitoring has the advantages that it does not add any overhead on Web infrastructure and does not require any specific software on the clients, while still capturing the performance and infrastructure observed by actual Web usage.

  • 9.
    Arvanitaki, Antonia
    et al.
    Linköping University, Department of Computer and Information Science, Software and Systems. Linköping University, Faculty of Science & Engineering.
    Pappas, Nikolaos
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, Faculty of Science & Engineering.
    Mohapatra, Parthajit
    Indian Inst Technol Tirupati, India.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Delay Performance of a Two-User Broadcast Channel with Security Constraints, 2018. In: 2018 Global Information Infrastructure and Networking Symposium (GIIS), IEEE, 2018. Conference paper (Refereed)
    Abstract [en]

    In this paper we consider the two-user broadcast channel with security constraints. We assume that one of the receivers has a secrecy constraint; i.e., its packets need to be kept secret from the other receiver. The receiver with secrecy constraint has full-duplex capability to transmit a jamming signal to increase its secrecy. We derive the average delay per packet and provide simulation and numerical results, where we compare different performance metrics for the cases when the legitimate receiver performs successive decoding and when both receivers treat interference as noise.

  • 10.
    Borghol, Youmna
    et al.
    NICTA, Australia; University of New South Wales, Sydney, NSW, Australia.
    Ardon, Sebastien
    NICTA, Alexandria, NSW, Australia .
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Eager, Derek
    University of Saskatchewan, Canada.
    Mahanti, Anirban
    NICTA, Alexandria, NSW, Australia .
    The Untold Story of the Clones: Content-agnostic Factors that Impact YouTube Video Popularity, 2012. In: Proc. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) 2012, Association for Computing Machinery (ACM), 2012, p. 1186-1194. Conference paper (Refereed)
    Abstract [en]

    Video dissemination through sites such as YouTube can have widespread impacts on opinions, thoughts, and cultures. Not all videos will reach the same popularity and have the same impact. Popularity differences arise not only because of differences in video content, but also because of other "content-agnostic" factors. The latter factors are of considerable interest but it has been difficult to accurately study them. For example, videos uploaded by users with large social networks may tend to be more popular because they tend to have more interesting content, not because social network size has a substantial direct impact on popularity.

    In this paper, we develop and apply a methodology that is able to accurately assess, both qualitatively and quantitatively, the impacts of various content-agnostic factors on video popularity. When controlling for video content, we observe a strong linear "rich-get-richer" behavior, with the total number of previous views as the most important factor except for very young videos. The second most important factor is found to be video age. We analyze a number of phenomena that may contribute to rich-get-richer, including the first-mover advantage, and search bias towards popular videos. For young videos we find that factors other than the total number of previous views, such as uploader characteristics and number of keywords, become relatively more important. Our findings also confirm that inaccurate conclusions can be reached when not controlling for content.

  • 11.
    Borghol, Youmna
    et al.
    NICTA, Alexandria, Australia.
    Ardon, Sebastien
    NICTA, Alexandria, Australia.
    Carlsson, Niklas
    University of Calgary, Canada.
    Mahanti, Anirban
    NICTA, Alexandria, Australia.
    Toward Efficient On-demand Streaming with BitTorrent, 2010. In: NETWORKING 2010: 9th International IFIP TC 6 Networking Conference, Chennai, India, May 11-15, 2010. Proceedings / [ed] Mark Crovella, Laura Marie Feeney, Dan Rubenstein, S. V. Raghavan, Springer, 2010, p. 53-66. Chapter in book (Refereed)
    Abstract [en]

    This paper considers the problem of adapting the BitTorrent protocol for on-demand streaming. BitTorrent is a popular peer-to-peer file sharing protocol that efficiently accommodates a large number of requests for file downloads. Two components of the protocol, namely the Rarest-First piece selection policy and the Tit-for-Tat algorithm for peer selection, are acknowledged to contribute toward the protocol's efficiency with respect to time to download files and its resilience to freeriders. Rarest-First piece selection, however, does not augur well for on-demand streaming. In this paper, we present a new adaptive Window-based piece selection policy that achieves a balance between the system scalability provided by the Rarest-First algorithm and the necessity of In-Order pieces for seamless media playback. We also show that this simple modification to the piece selection policy allows the system to be efficient with respect to utilization of available upload capacity of participating peers, and does not break the Tit-for-Tat incentive scheme which provides resilience to freeriders.
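
    The abstract above concretely describes an adaptive window-based piece selection policy. As a rough illustration only (not the authors' implementation), the sketch below prefers pieces inside a window anchored at the playback position and picks the rarest piece within that window, falling back to plain rarest-first once the window is complete; the fixed window size, the availability counts, and all names are assumptions made for this example.

    ```python
    import random

    def select_piece(playback_pos, window_size, have, availability, total_pieces):
        """Pick the next piece to request: rarest-first restricted to a window
        of upcoming pieces starting at the current playback position."""
        window = [p for p in range(playback_pos, min(playback_pos + window_size, total_pieces))
                  if p not in have]
        if not window:
            # Window already downloaded: fall back to plain rarest-first over the rest.
            remaining = [p for p in range(total_pieces) if p not in have]
            if not remaining:
                return None
            window = remaining
        # Rarest-first within the candidate set; break ties randomly.
        min_avail = min(availability.get(p, 0) for p in window)
        rarest = [p for p in window if availability.get(p, 0) == min_avail]
        return random.choice(rarest)

    # Example: player at piece 10, a 30-piece window, and a few pieces already held.
    have = {10, 11, 12}
    availability = {p: random.randint(1, 20) for p in range(500)}
    print(select_piece(10, 30, have, availability, 500))
    ```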

  • 12.
    Borghol, Youmna
    et al.
    NICTA, Australia.
    Mitra, Siddharth
    Indian Institute Technology Delhi.
    Ardon, Sebastien
    NICTA, Australia.
    Carlsson, Niklas
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Eager, Derek
    University of Saskatchewan.
    Mahanti, Anirban
    NICTA, Australia.
    Characterizing and modelling popularity of user-generated videos, 2011. In: Performance evaluation (Print), ISSN 0166-5316, E-ISSN 1872-745X, Vol. 68, no 11, p. 1037-1055. Article in journal (Refereed)
    Abstract [en]

    This paper develops a framework for studying the popularity dynamics of user-generated videos, presents a characterization of the popularity dynamics, and proposes a model that captures the key properties of these dynamics. We illustrate the biases that may be introduced in the analysis for some choices of the sampling technique used for collecting data; however, sampling from recently-uploaded videos provides a dataset that is seemingly unbiased. Using a dataset that tracks the views to a sample of recently-uploaded YouTube videos over the first eight months of their lifetime, we study the popularity dynamics. We find that the relative popularities of the videos within our dataset are highly non-stationary, owing primarily to large differences in the required time since upload until peak popularity is finally achieved, and secondly to popularity oscillation. We propose a model that can accurately capture the popularity dynamics of collections of recently-uploaded videos as they age, including key measures such as hot set churn statistics, and the evolution of the viewing rate and total views distributions over time.

  • 13.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Broadening the Audience: Popularity Dynamics and Scalable Content Delivery, 2012. In: Advances in secure and networked information systems: the ADIT perspective; Festschrift in honor of professor Nahid Shahmehri / [ed] Patrick Lambrix, Linköping: Linköping University Electronic Press, 2012, p. 139-144. Chapter in book (Other academic)
    Abstract [en]

    The Internet is playing an increasingly important role in today’s society and people are beginning to expect instantaneous access to information and content wherever they are. As content delivery is consuming a majority of the Internet bandwidth and its share of bandwidth is increasing by the hour, we need scalable and efficient techniques that can support these user demands and efficiently deliver the content to the users. When designing such techniques it is important to note that not all content is the same or will reach the same popularity. Scalable techniques must handle an increasingly diverse catalogue of contents, both with regards to diversity of content (as services are becoming increasingly personalized, for example) and with regards to their individual popularity. The importance of understanding content popularity dynamics is further motivated by popular content's widespread impact on opinions, thoughts, and cultures. This article briefly discusses some of our recent work on capturing content popularity dynamics and designing scalable content delivery techniques.

  • 14.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Optimized eeeBond: Energy Efficiency with non-Proportional Router Network Interfaces, 2016. In: Proceedings of the 2016 ACM/SPEC International Conference on Performance Engineering (ICPE'16), ACM Digital Library, 2016, p. 215-223. Conference paper (Refereed)
    Abstract [en]

    The recent Energy Efficient Ethernet (EEE) standard and the eBond protocol provide two orthogonal approaches that allow significant energy savings on routers. In this paper we present the modeling and performance evaluation of these two protocols and a hybrid protocol. We first present eeeBond, pronounced "triple-e bond", which combines the eBond capability to switch between multiple redundant interfaces with EEE's active/idle toggling capability implemented in each interface. Second, we present an analytic model of the protocol performance, and derive closed-form expressions for the optimized parameter settings of both eBond and eeeBond. Third, we present a performance evaluation that characterizes the relative performance gains possible with the optimized protocols, as well as a trace-based evaluation that validates the insights from the analytic model. Our results show that there are significant advantages to combining eBond and EEE. The eBond capability provides good savings when the interfaces themselves offer only small energy savings in short-term sleep states, and the EEE capability becomes important as short-term sleep savings improve.

  • 15.
    Carlsson, Niklas
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Arlitt, Martin
    Towards More Effective Utilization of Computer Systems, 2011. In: Proc. ACM/SPEC International Conference on Performance Engineering (ICPE ’10), Karlsruhe, Germany, March 2011, ACM, 2011, p. 235-246. Conference paper (Refereed)
  • 16.
    Carlsson, Niklas
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Dan, Gyorgy
    KTH Royal Institute of Technology, Stockholm.
    Eager, Derek
    University of Saskatchewan, Canada.
    Mahanti, Anirban
    NICTA, Sydney, Australia.
    Tradeoffs in Cloud and Peer-assisted Content Delivery Systems, 2012. In: Peer-to-Peer Computing (P2P), 2012, IEEE, 2012, p. 249-260. Conference paper (Refereed)
    Abstract [en]

    With the proliferation of cloud services, cloud-based systems can become a cost-effective means of on-line content delivery. In order to make best use of the available cloud bandwidth and storage resources, content distributors need to have a good understanding of the tradeoffs between various system design choices. In this work we consider a peer-assisted content delivery system that aims to provide guaranteed average download rate to its customers. We show that bandwidth demand peaks for contents with moderate popularity, and identify these contents as candidates for cloud-based service. We then consider dynamic content bundling and cross-swarm seeding, which were recently proposed to improve download performance, and evaluate their impact on the optimal choice of cloud service use. We find that much of the benefits from peer seeding can be achieved with careful torrent inflation, and that hybrid policies that combine bundling and peer seeding often reduce the delivery costs by 20% relative to only using seeding. Furthermore, all these peer-assisted policies reduce the number of files that would need to be pushed to the cloud. Finally, we show that careful system design is needed if locality is an important criterion when choosing cloud-based service provisioning.

  • 17.
    Carlsson, Niklas
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Dan, György
    KTH Royal Institute of Technology, Sweden.
    Arlitt, Martin
    NICTA, Australia.
    Mahanti, Anirban
    HP Labs, USA.
    A Longitudinal Characterization of Local and Global BitTorrent Workload Dynamics, 2012. In: Passive and Active Measurement: 13th International Conference, PAM 2012, Vienna, Austria, March 12-14th, 2012. Proceedings / [ed] Nina Taft; Fabio Ricciato, Springer Berlin/Heidelberg, 2012, p. 252-262. Conference paper (Refereed)
    Abstract [en]

    This book constitutes the refereed proceedings of the 13th International Conference on Passive and Active Measurement, PAM 2012, held in Vienna, Austria, in March 2012. The 25 revised full papers presented were carefully reviewed and selected from 83 submissions. The papers were arranged into eight sessions: traffic evolution and analysis, large scale monitoring, evaluation methodology, malicious behavior, new measurement initiatives, reassessing tools and methods, perspectives on internet structure and services, and application protocols.

  • 18.
    Carlsson, Niklas
    et al.
    University of Calgary.
    Eager, Derek
    Content Delivery using Replicated Digital Fountains, 2010. In: Proc. IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS ’10), IEEE, 2010, p. 338-348. Conference paper (Refereed)
  • 19.
    Carlsson, Niklas
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Eager, Derek
    University of Saskatchewan, Canada.
    Ephemeral Content Popularity at the Edge and Implications for On-Demand Caching, 2017. In: IEEE Transactions on Parallel and Distributed Systems, ISSN 1045-9219, E-ISSN 1558-2183, Vol. 28, no 6, p. 1621-1634. Article in journal (Refereed)
    Abstract [en]

    The ephemeral content popularity seen with many content delivery applications can make indiscriminate on-demand caching in edge networks highly inefficient, since many of the content items that are added to the cache will not be requested again from that network. In this paper, we address the problem of designing and evaluating more selective edge-network caching policies. The need for such policies is demonstrated through an analysis of a dataset recording YouTube video requests from users on an edge network over a 20-month period. We then develop a novel workload modelling approach for such applications and apply it to study the performance of alternative edge caching policies, including indiscriminate caching and cache on kth request for different k. The latter policies are found able to greatly reduce the fraction of the requested items that are inserted into the cache, at the cost of only modest increases in cache miss rate. Finally, we quantify and explore the potential room for improvement from use of other possible predictors of further requests. We find that although room for substantial improvement exists when comparing performance to that of a perfect "oracle" policy, such improvements are unlikely to be achievable in practice.
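
    As a rough sketch of the "cache on kth request" idea discussed in the abstract above (not the paper's implementation), the code below only inserts an item into an LRU cache once it has been requested k times; the counter table, eviction rule, and all names are illustrative assumptions.

    ```python
    from collections import OrderedDict, defaultdict

    class CacheOnKthRequest:
        """LRU cache that inserts an item only on its k-th request
        (k=1 corresponds to indiscriminate on-demand caching)."""

        def __init__(self, capacity, k):
            self.capacity = capacity
            self.k = k
            self.cache = OrderedDict()           # item -> payload, kept in LRU order
            self.request_counts = defaultdict(int)

        def get(self, item, fetch_fn):
            if item in self.cache:
                self.cache.move_to_end(item)     # cache hit: refresh LRU position
                return self.cache[item]
            payload = fetch_fn(item)             # cache miss: fetch from origin
            self.request_counts[item] += 1
            if self.request_counts[item] >= self.k:
                self.cache[item] = payload
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)   # evict least recently used
            return payload

    # Example: with k=2 an item is cached only after its second request.
    cache = CacheOnKthRequest(capacity=100, k=2)
    fetch = lambda item: f"payload-of-{item}"
    cache.get("video-42", fetch)      # miss, not inserted
    cache.get("video-42", fetch)      # miss, inserted this time
    print("video-42" in cache.cache)  # True
    ```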

  • 20.
    Carlsson, Niklas
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Eager, Derek
    Univ Saskatchewan, Canada.
    Worst-case bounds and optimized cache on Mth request cache insertion policies under elastic conditions, 2018. In: Performance evaluation (Print), ISSN 0166-5316, E-ISSN 1872-745X, Vol. 127, p. 70-92. Article in journal (Refereed)
    Abstract [en]

    Cloud services and other shared third-party infrastructures allow individual content providers to easily scale their services based on current resource demands. In this paper, we consider an individual content provider that wants to minimize its delivery costs under the assumptions that the storage and bandwidth resources it requires are elastic, the content provider only pays for the resources that it consumes, and costs are proportional to the resource usage. Within this context, we (i) derive worst-case bounds for the optimal cost and competitive cost ratios of different classes of cache on Mth request cache insertion policies, (ii) derive explicit average cost expressions and bounds under arbitrary inter-request distributions, (iii) derive explicit average cost expressions and bounds for short-tailed (deterministic, Erlang, and exponential) and heavy-tailed (Pareto) inter-request distributions, and (iv) present numeric and trace-based evaluations that reveal insights into the relative cost performance of the policies. Our results show that a window-based cache on 2nd request policy using a single threshold optimized to minimize worst-case costs provides good average performance across the different distributions and the full parameter ranges of each considered distribution, making it an attractive choice for a wide range of practical conditions where request rates of individual file objects typically are not known and can change quickly.

  • 21.
    Carlsson, Niklas
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Eager, Derek
    University of Saskatchewan, Canada.
    Gopinathan, Ajay
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Li, Zongpeng
    University of Calgary, Canada.
    Caching and optimized request routing in cloud-based content delivery systems, 2014. In: Performance evaluation (Print), ISSN 0166-5316, E-ISSN 1872-745X, Vol. 79, p. 38-55. Article in journal (Refereed)
    Abstract [en]

    Geographically distributed cloud platforms enable an attractive approach to large-scale content delivery. Storage at various sites can be dynamically acquired from (and released back to) the cloud provider so as to support content caching, according to the current demands for the content from the different geographic regions.  When storage is sufficiently expensive that not all content should be cached at all sites, two issues must be addressed: how should requests for content be routed to the cloud provider sites, and what policy should be used for caching content using the elastic storage resources obtained from the cloud provider.  Existing approaches are typically designed for non-elastic storage and little is known about the optimal policies when minimizing the delivery costs for distributed elastic storage.

    In this paper, we propose an approach in which elastic storage resources are exploited using a simple dynamic caching policy, while request routing is updated periodically according to the solution of an optimization model. Use of pull-based dynamic caching, rather than push-based placement, provides robustness to unpredicted changes in request rates. We show that this robustness is provided at low cost – even with fixed request rates, use of the dynamic caching policy typically yields content delivery cost within 10% of that with the optimal static placement. We compare request routing according to our optimization model to simpler baseline routing policies, and find that the baseline policies can yield greatly increased delivery cost relative to optimized routing. Finally, we present a lower-cost approximate solution algorithm for our routing optimization problem that yields content delivery cost within 2.5% of the optimal solution.

  • 22.
    Carlsson, Niklas
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Eager, Derek
    University of Saskatchewan, Canada.
    Krishnamoorthi, Vengatanathan
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Polishchuk, Tatiana
    Linköping University, Department of Science and Technology, Communications and Transport Systems. Linköping University, Faculty of Science & Engineering.
    Optimized Adaptive Streaming of Multi-video Stream Bundles, 2017. In: IEEE Transactions on Multimedia, ISSN 1520-9210, E-ISSN 1941-0077, Vol. 19, no 7, p. 1637-1653. Article in journal (Refereed)
    Abstract [en]

    In contrast to traditional video, multi-view video streaming allows viewers to interactively switch among multiple perspectives provided by different cameras. One approach to achieve such a service is to encode the video from all of the cameras into a single stream, but this has the disadvantage that only a portion of the received video data will be used, namely that required for the selected view at each point in time. In this paper, we introduce the concept of a multi-video stream bundle that consists of multiple parallel video streams that are synchronized in time, each providing the video from a different camera capturing the same event or movie. For delivery we leverage the adaptive features and time-based chunking of HTTP-based adaptive streaming, but now employing adaptation in both content and rate. Users are able to change their viewpoint on-demand and the client player adapts the rate at which data are retrieved from each stream based on the user's current view, the probabilities of switching to other views, and the user's current bandwidth conditions. A crucial component of such a system is the prefetching policy. For this we present an optimization model as well as a simpler heuristic that can balance the playback quality and the probability of playback interruptions. After analytically and numerically characterizing the optimal solution, we present a prototype implementation and sample results. Our prefetching and buffer management solution is shown to provide close to seamless playback switching when there is sufficient bandwidth to prefetch the parallel streams.
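
    The heuristic below is only a rough sketch of the kind of rate allocation the abstract above describes: the available bandwidth is split between the currently viewed stream and the alternative camera streams in proportion to their switch probabilities. The specific split rule and all parameter names are assumptions, not the optimization model from the paper.

    ```python
    def allocate_prefetch_rates(total_bw, current_view, switch_probs, min_current_rate):
        """Split the available bandwidth across the parallel streams of a bundle.

        total_bw         -- estimated available download bandwidth
        current_view     -- id of the stream currently being played
        switch_probs     -- dict stream_id -> probability of switching to it
        min_current_rate -- rate needed to sustain playback of the current view
        """
        rates = {current_view: min(total_bw, min_current_rate)}
        leftover = total_bw - rates[current_view]
        # Spread the remainder over the alternative views by switch probability.
        others = {s: p for s, p in switch_probs.items() if s != current_view}
        total_p = sum(others.values())
        for stream, p in others.items():
            rates[stream] = leftover * (p / total_p) if total_p > 0 else 0.0
        return rates

    # Example: 8 Mbit/s available, one active view and three alternative angles.
    print(allocate_prefetch_rates(
        total_bw=8.0, current_view="cam0",
        switch_probs={"cam1": 0.5, "cam2": 0.3, "cam3": 0.2},
        min_current_rate=4.0))
    ```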

  • 23.
    Carlsson, Niklas
    et al.
    University of Saskatchewan, Canada .
    Eager, Derek L.
    University of Saskatchewan, Canada .
    Modeling Priority-based Incentive Policies for Peer-assisted Content Delivery Systems, 2008. In: NETWORKING 2008 Ad Hoc and Sensor Networks, Wireless Networks, Next Generation Internet: 7th International IFIP-TC6 Networking Conference, Singapore, May 5-9, 2008, Proceedings / [ed] Amitabha Das, Hung Keng Pung, Francis BuSung Lee and Lawrence WaiChoong Wong, Springer Berlin/Heidelberg, 2008, p. 421-432. Chapter in book (Refereed)
    Abstract [en]

    Content delivery providers can improve their service scalability and offload their servers by making use of content transfers among their clients. To provide peers with incentive to transfer data to other peers, protocols such as BitTorrent typically employ a tit-for-tat policy in which peers give upload preference to peers that provide the highest upload rate to them. However, the tit-for-tat policy does not provide any incentive for a peer to stay in the system beyond completion of its download.

    This paper presents a simple fixed-point analytic model of a priority-based incentive mechanism which provides peers with strong incentive to contribute upload bandwidth beyond their own download completion. Priority is obtained based on a peer's prior contribution to the system. Using a two-class model, we show that priority-based policies can significantly improve average download times, and that there exists a significant region of the parameter space in which both high-priority and low-priority peers experience improved performance compared with the pure tit-for-tat approach. Our results are supported using event-based simulations.

  • 24.
    Carlsson, Niklas
    et al.
    University of Saskatchewan.
    Eager, Derek L.
    University of Saskatchewan.
    Non-Euclidian Geographic Routing in Wireless Networks, 2007. In: Ad hoc networks, ISSN 1570-8705, E-ISSN 1570-8713, Vol. 5, no 7, p. 1173-1193. Article in journal (Refereed)
    Abstract [en]

    Greedy geographic routing is attractive for large multi-hop wireless networks because of its simple and distributed operation. However, it may easily result in dead ends or hotspots when routing in a network with obstacles (regions without sufficient connectivity to forward messages). In this paper we propose a distributed routing algorithm that combines greedy geographic routing with two non-Euclidean distance metrics, chosen so as to provide load balanced routing around obstacles and hotspots. The first metric, Local Shortest Path, is used to achieve high probability of progress, while the second metric, Weighted Distance Gain, is used to select a desirable node among those that provide progress. The proposed Load Balanced Local Shortest Path (LBLSP) routing algorithm provides loop freedom, guarantees delivery when a path exists, is able to efficiently route around obstacles, and provides good load balancing.
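
    A minimal sketch of greedy geographic forwarding with a pluggable distance metric, in the spirit of the routing described above; the Euclidean metric is only a stand-in where the paper's Local Shortest Path or Weighted Distance Gain metric would be used, and all function names are illustrative assumptions.

    ```python
    import math

    def euclidean(a, b):
        """Stand-in metric; a non-Euclidean metric such as LSP or WDG
        from the paper would be plugged in here instead."""
        return math.dist(a, b)

    def greedy_next_hop(current, neighbors, destination, metric=euclidean):
        """Greedy geographic forwarding: among neighbors that make progress
        toward the destination under the chosen metric, pick the one with
        the smallest remaining distance. Returns None at a dead end."""
        current_dist = metric(current, destination)
        progress = [(metric(n, destination), n) for n in neighbors
                    if metric(n, destination) < current_dist]
        if not progress:
            return None                  # dead end / local minimum
        return min(progress)[1]

    # Example: forward from (0, 0) toward (10, 0) among three candidate neighbors.
    print(greedy_next_hop((0, 0), [(1, 2), (3, -1), (-2, 0)], (10, 0)))
    ```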

  • 25.
    Carlsson, Niklas
    et al.
    University of Saskatchewan, Canada.
    Eager, Derek L.
    University of Saskatchewan, Canada.
    Peer-assisted On-demand Streaming of Stored Media using BitTorrent-like Protocols, 2007. In: NETWORKING 2007. Ad Hoc and Sensor Networks, Wireless Networks, Next Generation Internet: 6th International IFIP-TC6 Networking Conference, Atlanta, GA, USA, May 14-18, 2007. Proceedings / [ed] Ian F. Akyildiz, Raghupathy Sivakumar, Eylem Ekici, Jaudelice Cavalcante de Oliveira and Janise McNair, Springer Berlin/Heidelberg, 2007, p. 570-581. Chapter in book (Refereed)
    Abstract [en]

    With BitTorrent-like protocols a client may download a file from a large and changing set of peers, using connections of heterogeneous and time-varying bandwidths. This flexibility is achieved by breaking the file into many small pieces, each of which may be downloaded from different peers. This paper considers an approach to peer-assisted on-demand delivery of stored media that is based on the relatively simple and flexible BitTorrent-like approach, but which is able to achieve a form of “streaming” delivery, in the sense that playback can begin well before the entire media file is received. Achieving this goal requires: (1) a piece selection strategy that effectively mediates the conflict between the goals of high piece diversity, and the in-order requirements of media file playback, and (2) an on-line rule for deciding when playback can safely commence. We present and evaluate using simulation candidate protocols including both of these components.

  • 26.
    Carlsson, Niklas
    et al.
    University of Calgary.
    Eager, Derek L.
    University of Calgary.
    Server Selection in Large-scale Video-on-Demand Systems, 2010. In: ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), ISSN 1551-6857, E-ISSN 1551-6865, Vol. 6, no 1, p. 1:1-1:26. Article in journal (Refereed)
    Abstract [en]

     

    Video on demand, particularly with user-generated content, is emerging as one of the most bandwidth-intensive applications on the Internet. Owing to content control and other issues, some video-on-demand systems attempt to prevent downloading and peer-to-peer content delivery. Instead, such systems rely on server replication, such as via third-party content distribution networks, to support video streaming (or pseudostreaming) to their clients. A major issue with such systems is the cost of the required server resources.

    By synchronizing the video streams for clients that make closely spaced requests for the same video from the same server, server costs (such as for retrieval of the video data from disk) can be amortized over multiple requests. A fundamental trade-off then arises, however, with respect to server selection. Network delivery cost is minimized by selecting the nearest server, while server cost is minimized by directing closely spaced requests for the same video to a common server.

    This article compares classes of server selection policies within the context of a simple system model. We conclude that: (i) server selection using dynamic system state information (rather than only proximities and average loads) can yield large improvements in performance, (ii) deferring server selection for a request as late as possible (i.e., until just before streaming is to begin) can yield additional large improvements, and (iii) within the class of policies using dynamic state information and deferred selection, policies using only “local” (rather than global) request information are able to achieve most of the potential performance gains.

     

  • 27.
    Carlsson, Niklas
    et al.
    University of Calgary, Canada.
    Eager, Derek L.
    University of Saskatchewan, Canada.
    Mahanti, Anirban
    NICTA. Sydney, Australia.
    Peer-assisted On-demand Video Streaming with Selfish Peers, 2009. In: NETWORKING 2009: 8th International IFIP-TC 6 Networking Conference, Aachen, Germany, May 11-15, 2009. Proceedings / [ed] Luigi Fratta, Henning Schulzrinne, Yutaka Takahashi and Otto Spaniol, Springer Berlin/Heidelberg, 2009, p. 586-599. Chapter in book (Refereed)
    Abstract [en]

    Systems delivering stored video content using a peer-assisted approach are able to serve large numbers of concurrent requests by utilizing upload bandwidth from their clients to assist in delivery. In systems providing download service, BitTorrent-like protocols may be used in which “tit-for-tat” policies provide incentive for clients to contribute upload bandwidth. For on-demand streaming delivery, however, in which clients begin playback well before download is complete, all prior proposed protocols rely on peers at later video play points uploading data to peers at earlier play points that do not have data to share in return. This paper considers the problem of devising peer-assisted protocols for streaming systems that, similar to download systems, provide effective “tit-for-tat” incentives for clients to contribute upload bandwidth. We propose policies that provide such incentives, while also providing short start-up delays, and delivery of (almost) all video frames by their respective playback deadlines.

  • 28.
    Carlsson, Niklas
    et al.
    University of Saskatchewan.
    Eager, Derek L.
    Vernon, Mary K.
    Multicast Protocols for Scalable On-demand Download, 2004. In: Proc. ACM SIGMETRICS/Performance ’04, New York, NY, June 2004, ACM, 2004, p. 428-429. Conference paper (Refereed)
  • 29.
    Carlsson, Niklas
    et al.
    University of Saskatchewan.
    Eager, Derek L.
    University of Saskatchewan.
    Vernon, Mary K.
    University of Wisconsin-Madison.
    Multicast Protocols for Scalable On-demand Download, 2006. In: Performance evaluation (Print), ISSN 0166-5316, E-ISSN 1872-745X, Vol. 63, no 9/10, p. 864-891. Article in journal (Refereed)
    Abstract [en]

    Previous scalable protocols for downloading large, popular files from a single server include batching and cyclic multicast. With batching, clients wait to begin receiving a requested file until the beginning of its next multicast transmission, which collectively serves all of the waiting clients that have accumulated up to that point. With cyclic multicast, the file data is cyclically transmitted on a multicast channel. Clients can begin listening to the channel at an arbitrary point in time, and continue listening until all of the file data has been received. This paper first develops lower bounds on the average and maximum client delay for completely downloading a file, as functions of the average server bandwidth used to serve requests for that file, for systems with homogeneous clients. The results show that neither cyclic multicast nor batching consistently yields performance close to optimal. New hybrid download protocols are proposed that achieve within 15% of the optimal maximum delay and 20% of the optimal average delay in homogeneous systems. For heterogeneous systems in which clients have widely varying achievable reception rates, an additional design question concerns the use of high rate transmissions, which can decrease delay for clients that can receive at such rates, in addition to low rate transmissions that can be received by all clients. A new scalable download protocol for such systems is proposed, and its performance is compared to that of alternative protocols as well as to new lower bounds on maximum client delay. The new protocol achieves within 25% of the optimal maximum client delay in all scenarios considered.

  • 30.
    Carlsson, Niklas
    et al.
    University of Calgary, Canada.
    Eager, Derek
    University of Saskatchewan, Canada.
    Mahanti, Anirban
    NICTA. Sydney, Australia.
    Using Torrent Inflation to Efficiently Serve the Long Tail in Peer-assisted Content Delivery Systems, 2010. In: NETWORKING 2010: 9th International IFIP TC 6 Networking Conference, Chennai, India, May 11-15, 2010. Proceedings / [ed] Mark Crovella, Laura Marie Feeney, Dan Rubenstein and S. V. Raghavan, Springer Berlin/Heidelberg, 2010, p. 1-14. Chapter in book (Refereed)
    Abstract [en]

    A peer-assisted content delivery system uses the upload bandwidth of its clients to assist in delivery of popular content. In peer-assisted systems using a BitTorrent-like protocol, a content delivery server seeds the offered files, and active torrents form when multiple clients make closely-spaced requests for the same content. Scalability is achieved in the sense of being able to accommodate arbitrarily high request rates for individual files. Scalability with respect to the number of files, however, may be much more difficult to achieve, owing to a “long tail” of lukewarm or cold files for which the server may need to assume most or all of the delivery cost. This paper first addresses the question of how best to allocate server resources among multiple active torrents. We then propose new content delivery policies that use some of the available upload bandwidth from currently downloading clients to “inflate” torrents for files that would otherwise require substantial server bandwidth. Our performance results show that use of torrent inflation can substantially reduce download times, by more than 50% in some cases.

  • 31.
    Carlsson, Niklas
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Li, Xiaolin Andy
    University of Florida, USA.
    Singhal, Mukesh
    University of California, Merced, USA.
    Wang, Mea
    University of Calgary, Canada.
    Guest Editorial - Cloud and Big Data, 2014. In: Journal of Communications, ISSN 1796-2021, Vol. 9, no 4, p. i-iii. Article in journal (Refereed)
    Abstract [en]

    In the current digital age, massive amounts of data are generated in many different ways and forms. The data may be collected from everything from personal web logs to purposefully placed sensors. Today, companies and researchers use this data for everything from targeted personalized ads based on social data to solving important scientific problems that may help future generations of world citizens. Whether measured in monetary profit or by other measures, this data has proven valuable for many purposes and has led us into the Big Data era. Due to the large volume of data, Big Data requires significant storage, processing, and bandwidth resources. To date, the Cloud provides the largest collection of disk storage, CPU power, and network bandwidth, which makes it a natural choice for housing the Big Data.

  • 32.
    Carlsson, Niklas
    et al.
    University of Saskatchewan.
    Mahanti, Anirban
    IIT Delhi, India.
    Li, Zongpeng
    University of Calgary.
    Eager, Derek L.
    University of Saskatchewan.
    Optimized Periodic Broadcast of Non-linear Media, 2008. In: IEEE Transactions on Multimedia, ISSN 1520-9210, E-ISSN 1941-0077, Vol. 10, no 5, p. 871-884. Article in journal (Refereed)
    Abstract [en]

    Conventional video consists of a single sequence of video frames. During a client's playback period, frames are viewed sequentially from some specified starting point. The fixed frame ordering of conventional video enables efficient scheduled broadcast delivery, as well as efficient near on-demand delivery to large numbers of concurrent clients through use of periodic broadcast protocols in which the video file is segmented and transmitted on multiple channels. This paper considers the problem of devising scalable protocols for near on-demand delivery of “nonlinear” media files whose content may have a tree or graph, rather than linear, structure. Such media allows personalization of the media playback according to individual client preferences. We formulate a mathematical model for determination of the optimal periodic broadcast protocol for nonlinear media with piecewise-linear structures. Our objective function allows differing weights to be placed on the startup delays required for differing paths through the media. Studying a number of simple nonlinear structures we provide insight into the characteristics of the optimal solution. For cases in which the cost of solving the optimization model is prohibitive, we propose and evaluate an efficient approximation algorithm.

  • 33.
    Carlsson, Niklas
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Williamson, Carey
    University of Calgary, Canada.
    Hirt, Andreas
    University of Calgary, Canada.
    Jacobson, Michael
    University of Calgary, Canada.
    Performance Modeling of Anonymity Protocols, 2012. In: Performance evaluation (Print), ISSN 0166-5316, E-ISSN 1872-745X, Vol. 69, no 12, p. 643-661. Article in journal (Refereed)
    Abstract [en]

    Anonymous network communication protocols provide privacy for Internet-based communication. In this paper, we focus on the performance and scalability of anonymity protocols. In particular, we develop performance models for two anonymity protocols from the prior literature (Buses and Taxis), as well as our own newly proposed protocol (Motorcycles). Using a combination of experimental implementation, simulation, and analysis, we show that: (1) the message latency of the Buses protocol is O(N²), scaling quadratically with the number of participants; (2) the message latency of the Taxis protocol is O(N), scaling linearly with the number of participants; and (3) the message latency of the Motorcycles protocol is O(log₂ N), scaling logarithmically with the number of participants. Motorcycles can provide scalable anonymous network communication, without compromising the strength of anonymity provided by Buses or Taxis.

  • 34.
    Dan, Gyorgy
    et al.
    KTH Royal Institute of Technology, Stockholm.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Centralized and Distributed Protocols for Tracker-based Dynamic Swarm Management, 2013. In: IEEE/ACM Transactions on Networking, ISSN 1063-6692, E-ISSN 1558-2566, Vol. 21, no 1, p. 297-310. Article in journal (Refereed)
    Abstract [en]

    With BitTorrent, efficient peer upload utilization is achieved by splitting contents into many small pieces, each of which may be downloaded from different peers within the same swarm. Unfortunately, piece and bandwidth availability may cause the file-sharing efficiency to degrade in small swarms with few participating peers. Using extensive measurements, we identified hundreds of thousands of torrents with several small swarms for which reallocating peers among swarms and/or modifying the peer behavior could significantly improve the system performance. Motivated by this observation, we propose a centralized and a distributed protocol for dynamic swarm management. The centralized protocol (CSM) manages the swarms of peers at minimal tracker overhead. The distributed protocol (DSM) manages the swarms of peers while ensuring load fairness among the trackers. Both protocols achieve their performance improvements by identifying and merging small swarms and allow load sharing for large torrents. Our evaluations are based on measurement data collected during eight days from over 700 trackers worldwide, which collectively maintain state information about 2.8 million unique torrents. We find that CSM and DSM can achieve most of the performance gains of dynamic swarm management. These gains are estimated to be up to 40% on average for small torrents.

  • 35.
    Dan, Gyorgy
    et al.
    Royal Institute of Technology, Stockholm, Sweden.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Chatzidrossos, Illias
    Royal Institute of Technology, Stockholm, Sweden.
    Efficient and Highly Available Peer Discovery: A Case for Independent Trackers and Gossiping, 2011. In: 2011 IEEE International Conference on Peer-to-Peer Computing (P2P), IEEE, 2011, p. 290-299. Conference paper (Refereed)
    Abstract [en]

    Tracker-based peer-discovery is used in most commercial peer-to-peer content distribution systems, as it provides performance benefits compared to distributed solutions, and facilitates the control and monitoring of the overlay. But a tracker is a central point of failure, and its deployment and maintenance incur costs; hence an important question is how high tracker availability can be achieved at low cost. We investigate highly available, low overhead peer discovery, using independent trackers and a simple gossip protocol. This work is a step towards understanding the trade-off between the overhead and the achievable peer connectivity in highly available distributed overlay-management systems for peer-to-peer content distribution. We propose two protocols that connect peers in different swarms efficiently with a constant, but tunable, overhead. The two protocols, Random Peer Migration (RPM) and Random Multi-Tracking (RMT), employ a small fraction of peers in a torrent to virtually increase the size of swarms. We develop analytical models of the protocols based on renewal theory, and validate the models using both extensive simulations and controlled experiments. We illustrate the potential value of the protocols using large-scale measurement data that contains hundreds of thousands of public torrents with several small swarms, with limited peer connectivity. We estimate the achievable gains to be up to 40% on average for small torrents.

  • 36.
    Dan, György
    et al.
    Royal Institute of Technology (KTH), Stockholm, Sweden.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Dynamic Content Allocation for Cloud-assisted Service of Periodic Workloads, 2014. In: INFOCOM 2014, IEEE, 2014, p. 853-861. Conference paper (Refereed)
    Abstract [en]

    Motivated by improved models for content workload prediction, in this paper we consider the problem of dynamic content allocation for a hybrid content delivery system that combines cloud-based storage with low cost dedicated servers that have limited storage and unmetered upload bandwidth. We formulate the problem of allocating contents to the dedicated storage as a finite horizon dynamic decision problem, and show that a discrete time decision problem is a good approximation for piecewise stationary workloads. We provide an exact solution to the discrete time decision problem in the form of a mixed integer linear programming problem, propose computationally feasible approximations, and give bounds on their approximation ratios. Finally, we evaluate the algorithms using synthetic and measured traces from a commercial music on-demand service and give insight into their performance as a function of the workload characteristics.

  • 37. Dan, György
    et al.
    Carlsson, Niklas
    University of Calgary.
    Dynamic Swarm Management for Improved BitTorrent Performance, 2009. In: Proc. International Workshop on Peer-to-Peer Systems (IPTPS '09), Boston, MA, April 2009 (in conjunction with NSDI'09), 2009, p. 1-6. Conference paper (Refereed)
  • 38. Dan, György
    et al.
    Carlsson, Niklas
    University of Calgary.
    Power-law Revisited: A Large Scale Measurement Study of P2P Content Popularity, 2010. In: Proc. International Workshop on Peer-to-Peer Systems (IPTPS ’10), San Jose, CA, April 2010 (in conjunction with NSDI'10), 2010, p. 1-6. Conference paper (Refereed)
  • 39.
    de Leng, Daniel
    et al.
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Tiger, Mattias
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Almquist, Mathias
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Almquist, Viktor
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Second Screen Journey to the Cup: Twitter Dynamics during the Stanley Cup Playoffs, 2018. In: Proceedings of the 2nd Network Traffic Measurement and Analysis Conference (TMA), 2018, p. 1-8. Conference paper (Refereed)
    Abstract [en]

    With Twitter and other microblogging services, users can easily express their opinion and ideas in short text messages. A recent trend is that users use the real-time property of these services to share their opinions and thoughts as events unfold on TV or in the real world. In the context of TV broadcasts, Twitter (over a mobile device, for example) is referred to as a second screen. This paper presents the first characterization of the second screen usage over the playoffs of a major sports league. We present both temporal and spatial analysis of the Twitter usage during the end of the National Hockey League (NHL) regular season and the 2015 Stanley Cup playoffs. Our analysis provides insights into the usage patterns over the full 72-day period and with regards to in-game events such as goals, but also with regards to geographic biases. Quantifying these biases and the significance of specific events, we then discuss and provide insights into how the playoff dynamics may impact advertisers and third-party developers that try to provide increased personalization.

  • 40. Dvir, Amit
    et al.
    Carlsson, Niklas
    University of Calgary.
    Power-aware Recovery for Geographic Routing2009In: Proc. IEEE Wireless Communications and Networking Conference (WCNC ’09), IEEE , 2009, p. 2851-2856Conference paper (Refereed)
  • 41.
    Estévez, Alberto García
    et al.
    Universidad de Alcalá de Henares, Spain.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Geo-location-aware Emulations for Performance Evaluation of Mobile Applications2014In: Wireless On-demand Network Systems and Services (WONS 2014), IEEE , 2014, p. 73-76Conference paper (Refereed)
    Abstract [en]

    This paper presents the design of a simple emulation framework for performance evaluation and testing of mobile applications. Our testbed combines production hardware and software to allow emulation of realistic and repeatable mobility scenarios, in which the mobile user can travel long distances while being served by an application server. The framework allows (i) geo-location information, (ii) client network conditions such as bandwidth and loss rate, and (iii) the application workload to be emulated synchronously. To illustrate the power of the framework, we also present the design, proof-of-concept implementation, and evaluation of a geo-smart scheduler for application updates on smartphones. This geo-smart scheduler reduces the average download time by using a network performance map to schedule downloads when the user is at places with relatively good network conditions. Our trace-driven evaluation illustrates both the workings of the emulation framework and the potential of the geo-smart scheduler.
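
    The scheduling idea lends itself to a very small sketch. The Python below (an illustration under made-up assumptions, not the paper's implementation) uses a hypothetical network performance map of expected bandwidth per location along a planned trip and schedules an application update at the stop where it would download fastest.

        UPDATE_SIZE_MB = 50.0

        # (location, expected downlink bandwidth in Mbit/s) along the planned trip -- made-up values.
        performance_map = [("home", 40.0), ("highway", 3.0), ("train_station", 25.0), ("office", 80.0)]

        def pick_download_stop(route, size_mb):
            """Return the stop with the shortest expected download time for size_mb."""
            def download_time(entry):
                _, mbit_per_s = entry
                return (size_mb * 8.0) / mbit_per_s    # seconds
            location, bandwidth = min(route, key=download_time)
            return location, (size_mb * 8.0) / bandwidth

        stop, seconds = pick_download_stop(performance_map, UPDATE_SIZE_MB)
        print(f"Schedule the download at '{stop}' (~{seconds:.1f} s expected)")   # office, ~5.0 s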

  • 42. Garg, Sanchit
    et al.
    Gupta, Trinabh
    Carlsson, Niklas
    University of Calgary.
    Mahanti, Anirban
    Evolution of an Online Social Aggregation Network: An Empirical Study2009In: Proc. ACM Internet Measurement Conference (IMC ’09), ACM , 2009, p. 315-321Conference paper (Refereed)
  • 43.
    Gill, Phillipa
    et al.
    University of Toronto.
    Arlitt, Martin
    HP Labs, Palo Alto.
    Carlsson, Niklas
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Mahanti, Anirban
    NICTA.
    Williamson, Carey
    University of Calgary.
    Characterizing Organizational Use of Web-Based Services: Methodology, Challenges, Observations, and Insights2011In: ACM TRANSACTIONS ON THE WEB, ISSN 1559-1131, Vol. 5, no 4Article in journal (Refereed)
    Abstract [en]

    Today's Web provides many different functionalities, including communication, entertainment, social networking, and information retrieval. In this article, we analyze traces of HTTP activity from a large enterprise and from a large university to identify and characterize Web-based service usage. Our work provides an initial methodology for the analysis of Web-based services. While it is nontrivial to identify the classes, instances, and providers for each transaction, our results show that most of the traffic comes from a small subset of providers, which can be classified manually. Furthermore, we assess both qualitatively and quantitatively how the Web has evolved over the past decade, and discuss the implications of these changes.
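
    As a small, hypothetical illustration of attributing traffic to providers (not the article's methodology), the Python sketch below maps logged URLs to providers via a manually curated hostname-suffix table and reports each provider's share of bytes; all log entries and names are made up.

        from collections import defaultdict
        from urllib.parse import urlsplit

        # Toy HTTP log entries (URL, bytes transferred) -- made-up values.
        log = [("http://mail.example-corp.com/inbox", 12_000),
               ("http://video.example-cdn.net/clip.flv", 8_500_000),
               ("http://static.example-cdn.net/logo.png", 40_000),
               ("http://news.example.org/today", 90_000)]

        # Manually curated hostname-suffix -> provider mapping, in the spirit of
        # classifying the small set of providers that dominate the traffic.
        providers = {"example-corp.com": "ExampleCorp", "example-cdn.net": "ExampleCDN"}

        def provider_of(url):
            host = urlsplit(url).hostname or ""
            for suffix, name in providers.items():
                if host == suffix or host.endswith("." + suffix):
                    return name
            return "other"

        bytes_by_provider = defaultdict(int)
        for url, size in log:
            bytes_by_provider[provider_of(url)] += size

        total = sum(bytes_by_provider.values())
        for name, size in sorted(bytes_by_provider.items(), key=lambda kv: -kv[1]):
            print(f"{name}: {100.0 * size / total:.1f}% of bytes")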

  • 44.
    Gopinathan, Ajay
    et al.
    University of Calgary, Canada.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Li, Zongpeng
    University of Calgary, Canada.
    Wu, Chuan
    University of Hong Kong, Peoples R China.
    Revenue-maximizing and Truthful Online Auctions for Dynamic Spectrum Access2016In: 2016 12TH ANNUAL CONFERENCE ON WIRELESS ON-DEMAND NETWORK SYSTEMS AND SERVICES (WONS), IEEE , 2016, p. 1-8Conference paper (Refereed)
    Abstract [en]

    Secondary spectrum auctions have been suggested as a strategically robust mechanism for distributing idle spectrum to competing secondary users. However, previous work on such auction design has assumed a static auction setting, thus failing to fully exploit the inherently time-varying nature of spectrum demand and utilization. In this paper, we address this issue from the perspective of the primary user who wishes to maximize the auction revenue. We present an online auction framework that dynamically accepts bids and allocates spectrum. We prove rigorously that our online auction framework is truthful in the multiple dimensions of bid values as well as bid timing parameters. To protect against unbounded loss of revenue due to late bids, we introduce controlled preemption into our mechanism. We prove that preemption, coupled with the technique of inflating bids artificially, leads to an online auction that guarantees a 1/5-fraction of the optimal revenue as obtained by an offline adversary. Since this guarantee holds only for the optimal channel allocation, we further provide a greedy channel allocation scheme that provides scalability. We prove that the greedy scheme also obtains a constant competitive revenue guarantee, where the constant depends on a parameter of the conflict graph.
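
    To give a flavor of greedy channel allocation over a conflict graph (a generic sketch, not the paper's mechanism or its proof machinery), the Python below considers bidders in decreasing bid order and assigns each one a channel not already used by a conflicting neighbor; the bids, the graph, and the channel set are made up.

        bids = {"u1": 9.0, "u2": 7.5, "u3": 6.0, "u4": 2.0}
        conflicts = {"u1": {"u2", "u3"}, "u2": {"u1", "u3"}, "u3": {"u1", "u2", "u4"}, "u4": {"u3"}}
        channels = ["ch1", "ch2"]

        def greedy_allocate(bids, conflicts, channels):
            allocation = {}                     # bidder -> assigned channel
            for bidder in sorted(bids, key=bids.get, reverse=True):
                used_nearby = {allocation[n] for n in conflicts.get(bidder, set()) if n in allocation}
                free = [c for c in channels if c not in used_nearby]
                if free:
                    allocation[bidder] = free[0]
            return allocation

        print(greedy_allocate(bids, conflicts, channels))
        # With these toy inputs: u1 -> ch1, u2 -> ch2, u3 is left unserved, u4 -> ch1.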

  • 45. Gupta, Trinabh
    et al.
    Garg, Sanchit
    Mahanti, Anirban
    Carlsson, Niklas
    University of Calgary.
    Arlitt, Martin
    Characterization of FriendFeed – A Web-based Social Aggregation Service2009In: Proc. AAAI International Conference on Weblogs and Social Media (ICWSM ’09), AAAI Press , 2009, p. 218-221Conference paper (Refereed)
  • 46.
    Gustafsson, Josef
    et al.
    Linköping University.
    Hiran, Rahul
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Krishnamoorthi, Vengatanathan
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    The hidden mailman and his mailbag: Routing path analysis from a European perspective2017In: 2017 IEEE International Conference on Communications (ICC) / [ed] Debbah M.,Gesbert D.,Mellouk A., IEEE, 2017, p. 1-7Conference paper (Refereed)
    Abstract [en]

    The postal system is often used as an analogy when describing Internet routing. However, in addition to similarities, there are some significant differences. First, and most importantly, the Autonomous Systems (ASes) that operate the routers along the end-to-end path of a packet can often inspect and manipulate the packet and its content. Second, due to the lack of secure routing mechanisms, packet paths can be diverted through additional non-trusted ASes. Although we often know the first network we connect through and the service that we access, we seldom know the networks that forward our packets. We can think of these networks as hidden mailmen. To better understand these networks and their potential access to information, we characterize the ASes along the paths of typical Internet packets between example European clients and the most popular web domains. We also identify ASes and countries with higher path coverage and investigate whether there are differences in HTTPS usage among paths that may take additional detours. Our results highlight the role played by North American (typically US-based) ASes and provide insights into how vulnerable the detoured traffic is to man-in-the-middle attacks compared to regular traffic.
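
    A minimal sketch of the kind of path-to-AS attribution described above (hypothetical data, not the paper's toolchain) is shown below: traceroute hop IPs for two paths are mapped to ASes through a small prefix table, and the number of paths each AS appears on is counted.

        from collections import Counter
        from ipaddress import ip_address, ip_network

        # Hypothetical prefix -> AS mapping (in practice derived from BGP data) and
        # hypothetical traceroute hop IPs for two client-to-domain paths.
        prefix_to_as = {"192.0.2.0/24": "AS64500 (access ISP)",
                        "198.51.100.0/24": "AS64501 (transit)",
                        "203.0.113.0/24": "AS64502 (content provider)"}
        paths = {"client1 -> domainA": ["192.0.2.10", "198.51.100.7", "203.0.113.5"],
                 "client1 -> domainB": ["192.0.2.10", "203.0.113.9"]}

        def as_of(ip):
            addr = ip_address(ip)
            for prefix, asn in prefix_to_as.items():
                if addr in ip_network(prefix):
                    return asn
            return "unknown"

        # Count on how many of the measured paths each AS appears ("path coverage").
        coverage = Counter()
        for hops in paths.values():
            for asn in {as_of(ip) for ip in hops}:
                coverage[asn] += 1

        for asn, n in coverage.most_common():
            print(f"{asn}: on {n} of {len(paths)} paths")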

  • 47.
    Gustafsson, Josef
    et al.
    Linköping University.
    Overier, Gustaf
    Linköping University.
    Arlitt, Martin
    University of Calgary, Calgary, Canada.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    A first look at the CT landscape: Certificate transparency logs in practice2017In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) / [ed] Uhlig S.,Amann J.,Kaafar M.A., 2017, Vol. 10176, p. 87-99Conference paper (Refereed)
    Abstract [en]

    Many of today’s web-based services rely heavily on secure end-to-end connections. The “trust” that these services require builds upon TLS/SSL. Unfortunately, TLS/SSL is highly vulnerable to compromised Certificate Authorities (CAs) and the certificates they generate. Certificate Transparency (CT) provides a way to monitor and audit certificates and certificate chains, to help improve overall network security. Using an open standard, anybody can set up CT logs, monitors, and auditors. CT is already used by Google’s Chrome browser for validation of Extended Validation (EV) certificates, Mozilla is drafting its own CT policies to be enforced, and public CT logs have proven valuable in identifying rogue certificates. In this paper, we present the first large-scale characterization of the CT landscape. Our characterization uses both active and passive measurements and highlights similarities and differences in public CT logs, their usage, and the certificates they include. We also provide insights into how the certificates in these logs relate to the certificates and keys observed in regular web traffic.
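
    For readers unfamiliar with how CT logs are queried, the Python sketch below polls a log over the HTTP API standardized in RFC 6962 (get-sth and get-entries); the log URL is a placeholder, and a real public log would have to be substituted.

        import requests

        LOG_URL = "https://ct.example.net"   # placeholder, not a real log

        def get_tree_head(log_url):
            """Fetch the signed tree head (tree size, timestamp, root hash)."""
            resp = requests.get(f"{log_url}/ct/v1/get-sth", timeout=10)
            resp.raise_for_status()
            return resp.json()

        def get_entries(log_url, start, end):
            """Fetch raw log entries in the inclusive index range [start, end]."""
            resp = requests.get(f"{log_url}/ct/v1/get-entries",
                                params={"start": start, "end": end}, timeout=10)
            resp.raise_for_status()
            return resp.json()["entries"]

        sth = get_tree_head(LOG_URL)
        print("tree size:", sth["tree_size"])
        print("fetched entries:", len(get_entries(LOG_URL, 0, 9)))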

  • 48.
    Hashemian, Raoufehsadat
    et al.
    University of Calgary, Calgary, Canada.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Krishnamurthy, Diwakar
    University of Calgary, Calgary, Canada.
    Arlitt, Martin
    University of Calgary, Calgary, Canada.
    IRIS: Iterative and Intelligent Experiment Selection2017In: ICPE ’17 Proceedings of the 8th ACM/SPEC on International Conference on Performance Engineering, ACM , 2017, p. 143-154Conference paper (Refereed)
    Abstract [en]

    Benchmarking is a widely-used technique to quantify the performance of software systems. However, the design and implementation of a benchmarking study can face several challenges. In particular, the time required to perform a benchmarking study can quickly spiral out of control, owing to the number of distinct variables to systematically examine. In this paper, we propose IRIS, an IteRative and Intelligent Experiment Selection methodology, to maximize the information gain while minimizing the duration of the benchmarking process. IRIS selects the region in which to place the next experiment point based on the variability of both the dependent (i.e., response) and independent variables in that region. It aims to identify a performance function that minimizes the response variable prediction error for a constant and limited experimentation budget. We evaluate IRIS for a wide selection of experimental, simulated, and synthetic systems with one, two, and three independent variables. Considering a limited experimentation budget, the results show that IRIS is able to reduce the performance function prediction error by up to 4.3 times compared to equal-distance experiment point selection. Moreover, we show that the error reduction can further improve through system-specific parameter tuning. Analysis of the error distributions obtained with IRIS reveals that the technique is particularly effective in regions where the response variable is sensitive to changes in the independent variables.
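
    A heavily simplified, one-dimensional illustration of variability-driven experiment selection (not the exact IRIS algorithm) is sketched below in Python: each new experiment is placed in the interval whose measured endpoints differ the most in response, so points concentrate where the performance function changes quickly; the benchmark is replaced by a made-up function.

        import math

        def system_response(x):
            """Stand-in for running a real benchmark at load level x (made-up function)."""
            return 100.0 / (1.0 + math.exp(-(x - 60.0) / 5.0))

        def select_experiments(budget, lo=0.0, hi=100.0):
            xs = [lo, hi]
            ys = [system_response(lo), system_response(hi)]
            for _ in range(budget - 2):
                # Find the adjacent pair of measured points with the largest response gap
                # and place the next experiment at its midpoint.
                gaps = [abs(ys[i + 1] - ys[i]) for i in range(len(xs) - 1)]
                i = gaps.index(max(gaps))
                x_new = (xs[i] + xs[i + 1]) / 2.0
                xs.insert(i + 1, x_new)
                ys.insert(i + 1, system_response(x_new))
            return list(zip(xs, ys))

        for x, y in select_experiments(budget=8):
            print(f"x={x:6.2f}  response={y:7.2f}")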

  • 49.
    Hashemian, Raoufehsadat
    et al.
    University of Calgary, Canada.
    Krishnamurthy, Diwakar
    University of Calgary, Canada.
    Arlitt, Martin
    HP Labs, Palo Alto, CA, USA.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Characterizing the Scalability of a Web Application on a Multi-core Server2014In: Concurrency and Computation: Practice and Experience, ISSN 1532-0626, Vol. 26, no 12, p. 2027-2052Article in journal (Refereed)
    Abstract [en]

    The advent of multi-core technology motivates new studies to understand how efficiently Web servers utilize such hardware. This paper presents a detailed performance study of a Web server application deployed on a modern eight-core server. Our study shows that default Web server configurations result in poor scalability with increasing core counts. We study two different types of workloads, namely, a workload with intense TCP/IP related OS activity and the SPECweb2009 Support workload with more application-level processing. We observe that the scaling behaviour is markedly different for these workloads, mainly because of the difference in the performance of static and dynamic requests. While static requests perform poorly when moving from using one socket to both sockets in the system, the converse is true for dynamic requests. We show that, contrary to what was suggested by previous work, Web server scalability improvement policies need to be adapted based on the type of workload experienced by the server. The results of our experiments reveal that with workload-specific Web server configuration strategies, a multi-core server can be utilized up to 80% while still serving requests without significant queuing delays; utilizations beyond 90% are also possible, while still serving requests with ‘acceptable’ response times.

  • 50.
    Hashemian, Raoufehsadat
    et al.
    University of Calgary, Alberta, Canada.
    Krishnamurthy, Diwakar
    University of Calgary, Alberta, Canada.
    Arlitt, Martin
    HP Labs, Palo Alto, California, USA.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Improving the Scalability of a Multi-core Web Server2013In: ICPE '13 Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering, Association for Computing Machinery (ACM), 2013, p. 161-172Conference paper (Refereed)
    Abstract [en]

    Improving the performance and scalability of Web servers enhances user experiences and reduces the costs of providing Web-based services. The advent of multi-core technology motivates new studies to understand how efficiently Web servers utilize such hardware. This paper presents a detailed performance study of a Web server application deployed on a modern 2-socket server with 4 cores per socket. Our study shows that default, "out-of-the-box" Web server configurations can cause the system to scale poorly with increasing core counts. We study two different types of workloads, namely, a workload that imposes intense TCP/IP related OS activity and the SPECweb2009 Support workload, which incurs more application-level processing. We observe that the scaling behaviour is markedly different for these two types of workloads, mainly due to the difference in the performance characteristics of static and dynamic requests. The results of our experiments reveal that with workload-specific Web server configuration strategies, a modern multi-core server can be utilized up to 80% while still serving requests without significant queuing delays; utilizations beyond 90% are also possible, while still serving requests with acceptable response times.
