liu.se: Search for publications in DiVA
1 - 50 of 80
  • 1. Arlitt, Martin
    et al.
    Carlsson, Niklas
    University of Calgary.
    Leveraging Organizational Etiquette to Improve Internet Security (2010). In: Proc. IEEE International Conference on Computer Communication Networks (ICCCN ’10), IEEE, 2010, p. 1-6. Conference paper (Refereed)
  • 2.
    Arlitt, Martin
    et al.
    HP Labs.
    Carlsson, Niklas
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Gill, Phillipa
    University of Toronto.
    Mahanti, Aniket
    University of Calgary.
    Williamson, Carey
    University of Calgary.
    Characterizing Intelligence Gathering and Control on an Edge Network (2011). In: ACM Transactions on Internet Technology, ISSN 1533-5399, E-ISSN 1557-6051, Vol. 11, no 1. Article in journal (Refereed)
    Abstract [en]

    There is a continuous struggle for control of resources at every organization that is connected to the Internet. The local organization wishes to use its resources to achieve strategic goals. Some external entities seek direct control of these resources, for purposes such as spamming or launching denial-of-service attacks. Other external entities seek indirect control of assets (e.g., users, finances), but provide services in exchange for them. Using a year-long trace from an edge network, we examine what various external organizations know about one organization. We compare the types of information exposed by or to external organizations using either active (reconnaissance) or passive (surveillance) techniques. We also explore the direct and indirect control external entities have on local IT resources.

  • 3.
    Arlitt, Martin
    et al.
    HP Labs.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques.
    Hedge, Nidhi
    Technicolor.
    Wierman, Adam
    California Institute of Technology.
    ACM SIGMETRICS Performance Evaluation Review, Volume 40, Issue 3, December 2012: Special issue on the 2012 GreenMetrics workshop (2013). Conference proceedings (editor) (Refereed)
  • 4. Arlitt, Martin
    et al.
    Carlsson, Niklas
    Rolia, Jerry
    GreenMetrics '09 Workshop, Seattle, WA, June 2009: in conjunction with ACM SIGMETRICS/Performance '09 (Proceedings appeared in ACM Performance Evaluation Review (PER), Special Issue on the 2009 GreenMetrics Workshop, 37, 4 (Mar. 2010)) (2009). Conference proceedings (editor) (Other academic)
  • 5. Arlitt, Martin
    et al.
    Carlsson, Niklas
    Rolia, Jerry
    GreenMetrics '10 Workshop: in conjunction with ACM SIGMETRICS, New York, NY, June 2010 (Proceedings appeared in ACM Performance Evaluation Review (PER), Special Issue on the 2010 GreenMetrics Workshop, 38, 3 (Dec. 2010)) (2010). Conference proceedings (editor) (Other academic)
  • 6. Arlitt, Martin
    et al.
    Carlsson, Niklas
    Rolia, Jerry
    Proceedings of the Third GreenMetrics '11 Workshop, in conjunction with (and sponsored by) ACM SIGMETRICS: ACM Performance Evaluation Review (PER), Special Issue on the 2011 GreenMetrics Workshop, Volume 39, Issue 3, December 2011 (2011). Conference proceedings (editor) (Refereed)
  • 7.
    Arlitt, Martin
    et al.
    HP Labs; University of Calgary, Canada.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Williamson, Carey
    University of Calgary, Canada.
    Rolia, Jerry
    HP Labs.
    Passive Crowd-based Monitoring of World Wide Web Infrastructure and its Performance (2012). In: Proc. IEEE International Conference on Communications (ICC 2012), IEEE, 2012, p. 2689-2694. Conference paper (Refereed)
    Abstract [en]

    The World Wide Web and the services it provides are continually evolving. Even for a single time instant, it is a complex task to methodologically determine the infrastructure over which these services are provided and the corresponding effect on user perceived performance. For such tasks, researchers typically rely on active measurements or large numbers of volunteer users. In this paper, we consider an alternative approach, which we refer to as passive crowd-based monitoring. More specifically, we use passively collected proxy logs from a global enterprise to observe differences in the quality of service (QoS) experienced by users on different continents. We also show how this technique can measure properties of the underlying infrastructures of different Web content providers. While some of these properties have been observed using active measurements, we are the first to show that many of these properties (such as location of servers) can be obtained using passive measurements of actual user activity. Passive crowd-based monitoring has the advantages that it does not add any overhead on Web infrastructure, it does not require any specific software on the clients, but still captures the performance and infrastructure observed by actual Web usage.
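
    As a toy illustration of the passive approach described above, the sketch below aggregates per-continent response times from proxy-log records; the record layout, field names, and numbers are hypothetical and far simpler than the enterprise logs used in the paper.

# Minimal sketch of passive crowd-based monitoring: aggregate per-continent
# response times from proxy-log records. The log format and field names are
# hypothetical; the paper's actual logs and metrics may differ.
from statistics import median

# Each record: (client_continent, content_provider, response_time_ms)
log_records = [
    ("EU", "provider-a.example", 120.0),
    ("EU", "provider-a.example", 95.0),
    ("NA", "provider-a.example", 60.0),
    ("AS", "provider-b.example", 210.0),
    ("NA", "provider-b.example", 80.0),
]

def qos_by_continent(records):
    """Group passively observed response times by client continent."""
    samples = {}
    for continent, _provider, rt_ms in records:
        samples.setdefault(continent, []).append(rt_ms)
    # Report the median as a simple QoS summary per continent.
    return {c: median(v) for c, v in samples.items()}

print(qos_by_continent(log_records))  # {'EU': 107.5, 'NA': 70.0, 'AS': 210.0}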

  • 8.
    Borghol, Youmna
    et al.
    NICTA, Australia; University of New South Wales, Sydney, NSW, Australia.
    Ardon, Sebastien
    NICTA, Alexandria, NSW, Australia .
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Eager, Derek
    University of Saskatchewan, Canada.
    Mahanti, Anirban
    NICTA, Alexandria, NSW, Australia .
    The Untold Story of the Clones: Content-agnostic Factors that Impact YouTube Video Popularity (2012). In: Proc. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD) 2012, Association for Computing Machinery (ACM), 2012, p. 1186-1194. Conference paper (Refereed)
    Abstract [en]

    Video dissemination through sites such as YouTube can have widespread impacts on opinions, thoughts, and cultures. Not all videos will reach the same popularity and have the same impact. Popularity differences arise not only because of differences in video content, but also because of other "content-agnostic" factors. The latter factors are of considerable interest but it has been difficult to accurately study them. For example, videos uploaded by users with large social networks may tend to be more popular because they tend to have more interesting content, not because social network size has a substantial direct impact on popularity.

    In this paper, we develop and apply a methodology that is able to accurately assess, both qualitatively and quantitatively, the impacts of various content-agnostic factors on video popularity. When controlling for video content, we observe a strong linear "rich-get-richer" behavior, with the total number of previous views as the most important factor except for very young videos. The second most important factor is found to be video age. We analyze a number of phenomena that may contribute to rich-get-richer, including the first-mover advantage, and search bias towards popular videos. For young videos we find that factors other than the total number of previous views, such as uploader characteristics and number of keywords, become relatively more important. Our findings also confirm that inaccurate conclusions can be reached when not controlling for content.
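
    The sketch below shows one illustrative (not the paper's) way to quantify a linear rich-get-richer relation: an ordinary least-squares fit of views gained against total previous views, on synthetic data. The paper's methodology additionally controls for content by comparing clones of the same video.

# Illustrative sketch (not the paper's methodology): a least-squares fit of
# views gained in the next period against total previous views, the kind of
# linear "rich-get-richer" relation described above. Data below is synthetic.
import numpy as np

prev_views = np.array([100, 500, 1_000, 5_000, 20_000, 50_000], dtype=float)
views_next = np.array([12,   60,   115,   540,  2_100,  5_300], dtype=float)

# Fit views_next ~ a * prev_views + b
A = np.vstack([prev_views, np.ones_like(prev_views)]).T
(a, b), *_ = np.linalg.lstsq(A, views_next, rcond=None)
print(f"slope a = {a:.4f}, intercept b = {b:.1f}")
# A roughly constant slope across the popularity range indicates linear
# rich-get-richer growth; controlling for content (clones) is what lets such
# growth be attributed to content-agnostic factors.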

  • 9.
    Borghol, Youmna
    et al.
    NICTA, Alexandria, Australia.
    Ardon, Sebastien
    NICTA, Alexandria, Australia.
    Carlsson, Niklas
    University of Calgary, Canada.
    Mahanti, Anirban
    NICTA, Alexandria, Australia.
    Toward Efficient On-demand Streaming with BitTorrent (2010). In: NETWORKING 2010: 9th International IFIP TC 6 Networking Conference, Chennai, India, May 11-15, 2010. Proceedings / [ed] Mark Crovella, Laura Marie Feeney, Dan Rubenstein, S. V. Raghavan, Springer, 2010, p. 53-66. Chapter in book (Refereed)
    Abstract [en]

    This paper considers the problem of adapting the BitTorrent protocol for on-demand streaming. BitTorrent is a popular peer-to-peer file sharing protocol that efficiently accommodates a large number of requests for file downloads. Two components of the protocol, namely the Rarest-First piece selection policy and the Tit-for-Tat algorithm for peer selection, are acknowledged to contribute toward the protocol's efficiency with respect to time to download files and its resilience to freeriders. Rarest-First piece selection, however, does not augur well for on-demand streaming. In this paper, we present a new adaptive Window-based piece selection policy that achieves a balance between the system scalability provided by the Rarest-First algorithm and the necessity of In-Order pieces for seamless media playback. We also show that this simple modification to the piece selection policy allows the system to be efficient with respect to utilization of available upload capacity of participating peers, and does not break the Tit-for-Tat incentive scheme which provides resilience to freeriders.
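
    A minimal sketch of a window-based piece selection policy of the kind described above follows: selection is restricted to a window just ahead of the playback point, and the rarest missing piece inside that window is chosen. The fixed window size is a simplification; the paper's policy adapts the window.

# Minimal sketch of window-based piece selection: pick the rarest missing piece
# within a window just ahead of the playback point. Parameter names and the
# fixed window are illustrative, not the paper's exact adaptive policy.
def select_piece(have, playback_pos, availability, window_size=8):
    """
    have:         set of piece indices already downloaded
    playback_pos: index of the next piece needed for playback
    availability: list where availability[i] = number of peers holding piece i
    """
    n = len(availability)
    window = range(playback_pos, min(playback_pos + window_size, n))
    candidates = [i for i in window if i not in have]
    if not candidates:
        return None  # window fully downloaded; caller may grow/advance the window
    # Rarest-first within the window balances piece diversity against in-order needs.
    return min(candidates, key=lambda i: availability[i])

# Example: piece 5 is the rarest missing piece within the playback window.
print(select_piece(have={3, 4}, playback_pos=3, availability=[9, 9, 9, 2, 5, 1, 7, 8, 9, 9]))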

  • 10.
    Borghol, Youmna
    et al.
    NICTA, Australia.
    Mitra, Siddharth
    Indian Institute Technology Delhi.
    Ardon, Sebastien
    NICTA, Australia.
    Carlsson, Niklas
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Eager, Derek
    University of Saskatchewan.
    Mahanti, Anirban
    NICTA, Australia.
    Characterizing and modelling popularity of user-generated videos (2011). In: Performance Evaluation (Print), ISSN 0166-5316, E-ISSN 1872-745X, Vol. 68, no 11, p. 1037-1055. Article in journal (Refereed)
    Abstract [en]

    This paper develops a framework for studying the popularity dynamics of user-generated videos, presents a characterization of the popularity dynamics, and proposes a model that captures the key properties of these dynamics. We illustrate the biases that may be introduced in the analysis for some choices of the sampling technique used for collecting data; however, sampling from recently-uploaded videos provides a dataset that is seemingly unbiased. Using a dataset that tracks the views to a sample of recently-uploaded YouTube videos over the first eight months of their lifetime, we study the popularity dynamics. We find that the relative popularities of the videos within our dataset are highly non-stationary, owing primarily to large differences in the required time since upload until peak popularity is finally achieved, and secondly to popularity oscillation. We propose a model that can accurately capture the popularity dynamics of collections of recently-uploaded videos as they age, including key measures such as hot set churn statistics, and the evolution of the viewing rate and total views distributions over time.

  • 11.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Broadening the Audience: Popularity Dynamics and Scalable Content Delivery (2012). In: Advances in secure and networked information systems: the ADIT perspective; Festschrift in honor of professor Nahid Shahmehri / [ed] Patrick Lambrix, Linköping: Linköping University Electronic Press, 2012, p. 139-144. Chapter in book (Other academic)
    Abstract [en]

    The Internet is playing an increasingly important role in today’s society and people are beginning to expect instantaneous access to information and content wherever they are. As content delivery is consuming a majority of the Internet bandwidth, and its share of bandwidth is increasing by the hour, we need scalable and efficient techniques that can support these user demands and efficiently deliver the content to the users. When designing such techniques it is important to note that not all content is the same or will reach the same popularity. Scalable techniques must handle an increasingly diverse catalogue of contents, both with regards to the diversity of the content itself (as services are becoming increasingly personalized, for example) and with regards to their individual popularity. The importance of understanding content popularity dynamics is further motivated by popular content’s widespread impact on opinions, thoughts, and cultures. This article briefly discusses some of our recent work on capturing content popularity dynamics and designing scalable content delivery techniques.

  • 12.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Optimized eeeBond: Energy Efficiency with non-Proportional Router Network Interfaces (2016). In: Proceedings of the 2016 ACM/SPEC International Conference on Performance Engineering (ICPE'16), ACM Digital Library, 2016, p. 215-223. Conference paper (Refereed)
    Abstract [en]

    The recent Energy Efficient Ethernet (EEE) standard and the eBond protocol provide two orthogonal approaches that allow significant energy savings on routers. In this paper we present the modeling and performance evaluation of these two protocols and a hybrid protocol. We first present eeeBond, pronounced "triple-e bond", which combines the eBond capability to switch between multiple redundant interfaces with EEE's active/idle toggling capability implemented in each interface. Second, we present an analytic model of the protocol performance, and derive closed-form expressions for the optimized parameter settings of both eBond and eeeBond. Third, we present a performance evaluation that characterizes the relative performance gains possible with the optimized protocols, as well as a trace-based evaluation that validates the insights from the analytic model. Our results show that there are significant advantages to combining eBond and EEE. The eBond capability provides good savings when interfaces offer only small energy savings in short-term sleep states, and the EEE capability becomes more important as short-term sleep savings improve.
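
    As a rough illustration of the tradeoff described above, the sketch below computes the average power of a hypothetical two-interface bond in which an eBond-style policy picks the interface for the current load and an EEE-style capability lets the active interface sleep during idle periods. All capacities and wattages are made up, and the model is far simpler than the paper's closed-form analysis.

# Illustrative energy model (assumption-laden, not the paper's closed-form
# expressions): average power when bonding two interfaces, where an eBond-style
# choice selects the interface for a given load and an EEE-style capability lets
# the active interface sleep during its idle fraction.
def avg_power(load_gbps, capacity_gbps, p_active, p_sleep, sleep_saving=True):
    """Average power of one interface carrying `load_gbps` of traffic."""
    utilization = min(load_gbps / capacity_gbps, 1.0)
    if not sleep_saving:
        return p_active                       # non-proportional: full power when up
    # With EEE-style toggling, idle time is spent in a low-power state.
    return utilization * p_active + (1.0 - utilization) * p_sleep

# Hypothetical 10G and 1G interfaces (wattages are made-up illustrative numbers).
ten_g = dict(capacity_gbps=10.0, p_active=5.0, p_sleep=1.0)
one_g = dict(capacity_gbps=1.0,  p_active=1.0, p_sleep=0.2)

for load in (0.2, 0.8, 4.0):
    # eBond-style switching: use the 1G interface whenever it can carry the load.
    iface = one_g if load <= one_g["capacity_gbps"] else ten_g
    print(f"load={load:4.1f} Gbps -> ~{avg_power(load, **iface):.2f} W")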

  • 13.
    Carlsson, Niklas
    et al.
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Arlitt, Martin
    Towards More Effective Utilization of Computer Systems (2011). In: Proc. ACM/SPEC International Conference on Performance Engineering (ICPE ’10), Karlsruhe, Germany, March 2011, ACM, 2011, p. 235-246. Conference paper (Refereed)
  • 14.
    Carlsson, Niklas
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Dan, Gyorgy
    KTH Royal Institute of Technology, Stockholm.
    Eager, Derek
    University of Saskatchewan, Canada.
    Mahanti, Anirban
    NICTA, Sydney, Australia,.
    Tradeoffs in Cloud and Peer-assisted Content Delivery Systems (2012). In: Peer-to-Peer Computing (P2P), 2012, IEEE, 2012, p. 249-260. Conference paper (Refereed)
    Abstract [en]

    With the proliferation of cloud services, cloud-based systems can become a cost-effective means of on-line content delivery. In order to make best use of the available cloud bandwidth and storage resources, content distributors need to have a good understanding of the tradeoffs between various system design choices. In this work we consider a peer-assisted content delivery system that aims to provide guaranteed average download rate to its customers. We show that bandwidth demand peaks for contents with moderate popularity, and identify these contents as candidates for cloud-based service. We then consider dynamic content bundling and cross-swarm seeding, which were recently proposed to improve download performance, and evaluate their impact on the optimal choice of cloud service use. We find that much of the benefits from peer seeding can be achieved with careful torrent inflation, and that hybrid policies that combine bundling and peer seeding often reduce the delivery costs by 20% relative to only using seeding. Furthermore, all these peer-assisted policies reduce the number of files that would need to be pushed to the cloud. Finally, we show that careful system design is needed if locality is an important criterion when choosing cloud-based service provisioning.

  • 15.
    Carlsson, Niklas
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Dan, György
    KTH Royal Institute of Technology, Sweden.
    Arlitt, Martin
    NICTA, Australia.
    Mahanti, Anirban
    HP Labs, USA.
    A Longitudinal Characterization of Local and Global BitTorrent Workload Dynamics (2012). In: Passive and Active Measurement: 13th International Conference, PAM 2012, Vienna, Austria, March 12-14, 2012. Proceedings / [ed] Nina Taft; Fabio Ricciato, Springer Berlin/Heidelberg, 2012, p. 252-262. Conference paper (Refereed)
    Abstract [en]

    This book constitutes the refereed proceedings of the 13th International Conference on Passive and Active Measurement, PAM 2012, held in Vienna, Austria, in March 2012. The 25 revised full papers presented were carefully reviewed and selected from 83 submissions. The papers were arranged into eight sessions: traffic evolution and analysis, large scale monitoring, evaluation methodology, malicious behavior, new measurement initiatives, reassessing tools and methods, perspectives on internet structure and services, and application protocols.

  • 16.
    Carlsson, Niklas
    et al.
    University of Calgary.
    Eager, Derek
    Content Delivery using Replicated Digital Fountains (2010). In: Proc. IEEE/ACM International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS ’10), IEEE, 2010, p. 338-348. Conference paper (Refereed)
  • 17.
    Carlsson, Niklas
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Eager, Derek
    University of Saskatchewan, Canada.
    Gopinathan, Ajay
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Li, Zongpeng
    University of Calgary, Canada.
    Caching and optimized request routing in cloud-based content delivery systems (2014). In: Performance Evaluation (Print), ISSN 0166-5316, E-ISSN 1872-745X, Vol. 79, p. 38-55. Article in journal (Refereed)
    Abstract [en]

    Geographically distributed cloud platforms enable an attractive approach to large-scale content delivery. Storage at various sites can be dynamically acquired from (and released back to) the cloud provider so as to support content caching, according to the current demands for the content from the different geographic regions.  When storage is sufficiently expensive that not all content should be cached at all sites, two issues must be addressed: how should requests for content be routed to the cloud provider sites, and what policy should be used for caching content using the elastic storage resources obtained from the cloud provider.  Existing approaches are typically designed for non-elastic storage and little is known about the optimal policies when minimizing the delivery costs for distributed elastic storage.

    In this paper, we propose an approach in which elastic storage resources are exploited using a simple dynamic caching policy, while request routing is updated periodically according to the solution of an optimization model. Use of pull-based dynamic caching, rather than push-based placement, provides robustness to unpredicted changes in request rates. We show that this robustness is provided at low cost: even with fixed request rates, use of the dynamic caching policy typically yields content delivery cost within 10% of that with the optimal static placement. We compare request routing according to our optimization model to simpler baseline routing policies, and find that the baseline policies can yield greatly increased delivery cost relative to optimized routing. Finally, we present a lower-cost approximate solution algorithm for our routing optimization problem that yields content delivery cost within 2.5% of the optimal solution.
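
    The sketch below illustrates the combination described in the abstract: a pull-based dynamic cache (plain LRU here, as a stand-in for whatever simple dynamic policy is used) at each cloud site, with request routing supplied as an input that would be recomputed periodically by an optimization step. Site names, capacities, and the routing table are made-up placeholders.

# Minimal sketch: pull-based dynamic caching per cloud site plus a routing table
# that a periodic optimization would update. All names and sizes are illustrative.
from collections import OrderedDict

class SiteCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()          # content_id -> True, in LRU order

    def request(self, content_id):
        """Return True on a cache hit; on a miss, pull the content and evict LRU."""
        if content_id in self.items:
            self.items.move_to_end(content_id)
            return True
        if len(self.items) >= self.capacity:
            self.items.popitem(last=False)  # evict least recently used
        self.items[content_id] = True       # pull-based insertion on miss
        return False

# routing[region] = site chosen by the (periodically re-solved) routing optimization.
routing = {"eu": "site-eu", "na": "site-na"}
sites = {"site-eu": SiteCache(2), "site-na": SiteCache(2)}

requests = [("eu", "video-1"), ("eu", "video-2"), ("eu", "video-1"), ("na", "video-1")]
for region, content in requests:
    hit = sites[routing[region]].request(content)
    print(region, content, "hit" if hit else "miss (served via origin)")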

  • 18.
    Carlsson, Niklas
    et al.
    University of Saskatchewan, Canada .
    Eager, Derek L.
    University of Saskatchewan, Canada .
    Modeling Priority-based Incentive Policies for Peer-assisted Content Delivery Systems (2008). In: NETWORKING 2008 Ad Hoc and Sensor Networks, Wireless Networks, Next Generation Internet: 7th International IFIP-TC6 Networking Conference, Singapore, May 5-9, 2008, Proceedings / [ed] Amitabha Das, Hung Keng Pung, Francis BuSung Lee and Lawrence WaiChoong Wong, Springer Berlin/Heidelberg, 2008, p. 421-432. Chapter in book (Refereed)
    Abstract [en]

    Content delivery providers can improve their service scalability and offload their servers by making use of content transfers among their clients. To provide peers with incentive to transfer data to other peers, protocols such as BitTorrent typically employ a tit-for-tat policy in which peers give upload preference to peers that provide the highest upload rate to them. However, the tit-for-tat policy does not provide any incentive for a peer to stay in the system beyond completion of its download.

    This paper presents a simple fixed-point analytic model of a priority-based incentive mechanism which provides peers with strong incentive to contribute upload bandwidth beyond their own download completion. Priority is obtained based on a peer's prior contribution to the system. Using a two-class model, we show that priority-based policies can significantly improve average download times, and that there exists a significant region of the parameter space in which both high-priority and low-priority peers experience improved performance compared with the pure tit-for-tat approach. Our results are supported using event-based simulations.

  • 19.
    Carlsson, Niklas
    et al.
    University of Saskatchewan.
    Eager, Derek L.
    University of Saskatchewan.
    Non-Euclidian Geographic Routing in Wireless Networks (2007). In: Ad Hoc Networks, ISSN 1570-8705, E-ISSN 1570-8713, Vol. 5, no 7, p. 1173-1193. Article in journal (Refereed)
    Abstract [en]

    Greedy geographic routing is attractive for large multi-hop wireless networks because of its simple and distributed operation. However, it may easily result in dead ends or hotspots when routing in a network with obstacles (regions without sufficient connectivity to forward messages). In this paper we propose a distributed routing algorithm that combines greedy geographic routing with two non-Euclidean distance metrics, chosen so as to provide load balanced routing around obstacles and hotspots. The first metric, Local Shortest Path, is used to achieve high probability of progress, while the second metric, Weighted Distance Gain, is used to select a desirable node among those that provide progress. The proposed Load Balanced Local Shortest Path (LBLSP) routing algorithm provides loop freedom, guarantees delivery when a path exists, is able to efficiently route around obstacles, and provides good load balancing.
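
    A small sketch of greedy geographic forwarding with a pluggable distance metric follows. Plain Euclidean distance is used only as a placeholder; the point of the paper is to replace it with the non-Euclidean Local Shortest Path and Weighted Distance Gain metrics so that traffic is balanced around obstacles and hotspots.

# Sketch of greedy geographic forwarding with a pluggable distance metric.
# Euclidean distance is a placeholder; the paper substitutes non-Euclidean
# metrics (Local Shortest Path, Weighted Distance Gain) for load balancing.
import math

def euclidean(a, b):
    return math.dist(a, b)

def greedy_next_hop(current, neighbors, destination, metric=euclidean):
    """Forward to the neighbor that makes progress toward the destination under
    the given metric; return None if no neighbor makes progress (dead end)."""
    best, best_d = None, metric(current, destination)
    for nbr in neighbors:
        d = metric(nbr, destination)
        if d < best_d:
            best, best_d = nbr, d
    return best

# Toy example: nodes are (x, y) positions.
print(greedy_next_hop((0, 0), [(1, 0), (0, 2), (-1, -1)], destination=(5, 0)))  # (1, 0)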

  • 20.
    Carlsson, Niklas
    et al.
    University of Saskatchewan, Canada.
    Eager, Derek L.
    University of Saskatchewan, Canada.
    Peer-assisted On-demand Streaming of Stored Media using BitTorrent-like Protocols (2007). In: NETWORKING 2007. Ad Hoc and Sensor Networks, Wireless Networks, Next Generation Internet: 6th International IFIP-TC6 Networking Conference, Atlanta, GA, USA, May 14-18, 2007. Proceedings / [ed] Ian F. Akyildiz, Raghupathy Sivakumar, Eylem Ekici, Jaudelice Cavalcantede Oliveira and Janise McNair, Springer Berlin/Heidelberg, 2007, p. 570-581. Chapter in book (Refereed)
    Abstract [en]

    With BitTorrent-like protocols a client may download a file from a large and changing set of peers, using connections of heterogeneous and time-varying bandwidths. This flexibility is achieved by breaking the file into many small pieces, each of which may be downloaded from different peers. This paper considers an approach to peer-assisted on-demand delivery of stored media that is based on the relatively simple and flexible BitTorrent-like approach, but which is able to achieve a form of “streaming” delivery, in the sense that playback can begin well before the entire media file is received. Achieving this goal requires: (1) a piece selection strategy that effectively mediates the conflict between the goals of high piece diversity, and the in-order requirements of media file playback, and (2) an on-line rule for deciding when playback can safely commence. We present and evaluate using simulation candidate protocols including both of these components.

  • 21.
    Carlsson, Niklas
    et al.
    University of Calgary.
    Eager, Derek L.
    University of Calgary.
    Server Selection in Large-scale Video-on-Demand Systems (2010). In: ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), ISSN 1551-6857, E-ISSN 1551-6865, Vol. 6, no 1, p. 1:1-1:26. Article in journal (Refereed)
    Abstract [en]

     

    Video on demand, particularly with user-generated content, is emerging as one of the most bandwidth-intensive applications on the Internet. Owing to content control and other issues, some video-on-demand systems attempt to prevent downloading and peer-to-peer content delivery. Instead, such systems rely on server replication, such as via third-party content distribution networks, to support video streaming (or pseudostreaming) to their clients. A major issue with such systems is the cost of the required server resources.

    By synchronizing the video streams for clients that make closely spaced requests for the same video from the same server, server costs (such as for retrieval of the video data from disk) can be amortized over multiple requests. A fundamental trade-off then arises, however, with respect to server selection. Network delivery cost is minimized by selecting the nearest server, while server cost is minimized by directing closely spaced requests for the same video to a common server.

    This article compares classes of server selection policies within the context of a simple system model. We conclude that: (i) server selection using dynamic system state information (rather than only proximities and average loads) can yield large improvements in performance, (ii) deferring server selection for a request as late as possible (i.e., until just before streaming is to begin) can yield additional large improvements, and (iii) within the class of policies using dynamic state information and deferred selection, policies using only “local” (rather than global) request information are able to achieve most of the potential performance gains.

     

  • 22.
    Carlsson, Niklas
    et al.
    University of Calgary, Canada.
    Eager, Derek L.
    University of Saskatchewan, Canada.
    Mahanti, Anirban
    NICTA. Sydney, Australia.
    Peer-assisted On-demand Video Streaming with Selfish Peers (2009). In: NETWORKING 2009: 8th International IFIP-TC 6 Networking Conference, Aachen, Germany, May 11-15, 2009. Proceedings / [ed] Luigi Fratta, Henning Schulzrinne, Yutaka Takahashi and Otto Spaniol, Springer Berlin/Heidelberg, 2009, p. 586-599. Chapter in book (Refereed)
    Abstract [en]

    Systems delivering stored video content using a peer-assisted approach are able to serve large numbers of concurrent requests by utilizing upload bandwidth from their clients to assist in delivery. In systems providing download service, BitTorrent-like protocols may be used in which “tit-for-tat” policies provide incentive for clients to contribute upload bandwidth. For on-demand streaming delivery, however, in which clients begin playback well before download is complete, all prior proposed protocols rely on peers at later video play points uploading data to peers at earlier play points that do not have data to share in return. This paper considers the problem of devising peer-assisted protocols for streaming systems that, similar to download systems, provide effective “tit-for-tat” incentives for clients to contribute upload bandwidth. We propose policies that provide such incentives, while also providing short start-up delays, and delivery of (almost) all video frames by their respective playback deadlines.

  • 23.
    Carlsson, Niklas
    et al.
    University of Saskatchewan.
    Eager, Derek L.
    Vernon, Mary K.
    Multicast Protocols for Scalable On-demand Download (2004). In: Proc. ACM SIGMETRICS/Performance ’04, New York, NY, June 2004, ACM, 2004, p. 428-429. Conference paper (Refereed)
  • 24.
    Carlsson, Niklas
    et al.
    University of Saskatchewan.
    Eager, Derek L.
    University of Saskatchewan.
    Vernon, Mary K.
    University of Wisconsin-Madison.
    Multicast Protocols for Scalable On-demand Download (2006). In: Performance Evaluation (Print), ISSN 0166-5316, E-ISSN 1872-745X, Vol. 63, no 9/10, p. 864-891. Article in journal (Refereed)
    Abstract [en]

    Previous scalable protocols for downloading large, popular files from a single server include batching and cyclic multicast. With batching, clients wait to begin receiving a requested file until the beginning of its next multicast transmission, which collectively serves all of the waiting clients that have accumulated up to that point. With cyclic multicast, the file data is cyclically transmitted on a multicast channel. Clients can begin listening to the channel at an arbitrary point in time, and continue listening until all of the file data has been received. This paper first develops lower bounds on the average and maximum client delay for completely downloading a file, as functions of the average server bandwidth used to serve requests for that file, for systems with homogeneous clients. The results show that neither cyclic multicast nor batching consistently yields performance close to optimal. New hybrid download protocols are proposed that achieve within 15% of the optimal maximum delay and 20% of the optimal average delay in homogeneous systems. For heterogeneous systems in which clients have widely varying achievable reception rates, an additional design question concerns the use of high rate transmissions, which can decrease delay for clients that can receive at such rates, in addition to low rate transmissions that can be received by all clients. A new scalable download protocol for such systems is proposed, and its performance is compared to that of alternative protocols as well as to new lower bounds on maximum client delay. The new protocol achieves within 25% of the optimal maximum client delay in all scenarios considered.

  • 25.
    Carlsson, Niklas
    et al.
    University of Calgary, Canada.
    Eager, Derek
    University of Saskatchewan, Canada.
    Mahanti, Anirban
    NICTA. Sydney, Australia.
    Using Torrent Inflation to Efficiently Serve the Long Tail in Peer-assisted Content Delivery Systems (2010). In: NETWORKING 2010: 9th International IFIP TC 6 Networking Conference, Chennai, India, May 11-15, 2010. Proceedings / [ed] Mark Crovella, Laura Marie Feeney, Dan Rubenstein and S. V. Raghavan, Springer Berlin/Heidelberg, 2010, p. 1-14. Chapter in book (Refereed)
    Abstract [en]

    A peer-assisted content delivery system uses the upload bandwidth of its clients to assist in delivery of popular content. In peer-assisted systems using a BitTorrent-like protocol, a content delivery server seeds the offered files, and active torrents form when multiple clients make closely-spaced requests for the same content. Scalability is achieved in the sense of being able to accommodate arbitrarily high request rates for individual files. Scalability with respect to the number of files, however, may be much more difficult to achieve, owing to a “long tail” of lukewarm or cold files for which the server may need to assume most or all of the delivery cost. This paper first addresses the question of how best to allocate server resources among multiple active torrents. We then propose new content delivery policies that use some of the available upload bandwidth from currently downloading clients to “inflate” torrents for files that would otherwise require substantial server bandwidth. Our performance results show that use of torrent inflation can substantially reduce download times, by more than 50% in some cases.

  • 26.
    Carlsson, Niklas
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Li, Xiaolin Andy
    University of Florida, USA.
    Singhal, Mukesh
    University of California, Merced, USA.
    Wang, Mea
    University of Calgary, Canada.
    Guest Editorial - Cloud and Big Data (2014). In: Journal of Communications, ISSN 1796-2021, E-ISSN 1796-2021, Vol. 9, no 4, p. i-iii. Article in journal (Refereed)
    Abstract [en]

    In the current digital age, massive amounts of data are generated in many different ways and forms. The data may be collected from everything from personal web logs to purposefully placed sensors. Today, companies and researchers use this data for everything from targeted personalized ads based on social data to solving important scientific problems that may help future generations of world citizens. Whether measured in monetary profit or by other measures, this data has proven valuable for many purposes and has led us into the Big Data era. Due to the large volume of data, Big Data requires significant storage, processing, and bandwidth resources. To date, the Cloud provides the largest collection of disk storage, CPU power, and network bandwidth, which makes it a natural choice for housing Big Data.

  • 27.
    Carlsson, Niklas
    et al.
    University of Saskatchewan.
    Mahanti, Anirban
    IIT Delhi, India.
    Li, Zongpeng
    University of Calgary.
    Eager, Derek L.
    University of Saskatchewan.
    Optimized Periodic Broadcast of Non-linear Media (2008). In: IEEE Transactions on Multimedia, ISSN 1520-9210, E-ISSN 1941-0077, Vol. 10, no 5, p. 871-884. Article in journal (Refereed)
    Abstract [en]

    Conventional video consists of a single sequence of video frames. During a client's playback period, frames are viewed sequentially from some specified starting point. The fixed frame ordering of conventional video enables efficient scheduled broadcast delivery, as well as efficient near on-demand delivery to large numbers of concurrent clients through use of periodic broadcast protocols in which the video file is segmented and transmitted on multiple channels. This paper considers the problem of devising scalable protocols for near on-demand delivery of “nonlinear” media files whose content may have a tree or graph, rather than linear, structure. Such media allows personalization of the media playback according to individual client preferences. We formulate a mathematical model for determination of the optimal periodic broadcast protocol for nonlinear media with piecewise-linear structures. Our objective function allows differing weights to be placed on the startup delays required for differing paths through the media. Studying a number of simple nonlinear structures we provide insight into the characteristics of the optimal solution. For cases in which the cost of solving the optimization model is prohibitive, we propose and evaluate an efficient approximation algorithm.

  • 28.
    Carlsson, Niklas
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Williamson, Carey
    University of Calgary, Canada.
    Hirt, Andreas
    University of Calgary, Canada.
    Jacobson, Micheal
    University of Calgary, Canada.
    Performance Modeling of Anonymity Protocols (2012). In: Performance Evaluation (Print), ISSN 0166-5316, E-ISSN 1872-745X, Vol. 69, no 12, p. 643-661. Article in journal (Refereed)
    Abstract [en]

    Anonymous network communication protocols provide privacy for Internet-based communication. In this paper, we focus on the performance and scalability of anonymity protocols. In particular, we develop performance models for two anonymity protocols from the prior literature (Buses and Taxis), as well as our own newly proposed protocol (Motorcycles). Using a combination of experimental implementation, simulation, and analysis, we show that: (1) the message latency of the Buses protocol is O(N²), scaling quadratically with the number of participants; (2) the message latency of the Taxis protocol is O(N), scaling linearly with the number of participants; and (3) the message latency of the Motorcycles protocol is O(log₂ N), scaling logarithmically with the number of participants. Motorcycles can provide scalable anonymous network communication, without compromising the strength of anonymity provided by Buses or Taxis.
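
    The snippet below simply tabulates how the three quoted latency bounds grow with the number of participants N; the constants are arbitrary and only the asymptotic orders come from the abstract.

# Numeric illustration of the quoted scaling: N^2 (Buses), N (Taxis), log2 N
# (Motorcycles). Constants are arbitrary; only the growth orders are from the text.
import math

for n in (10, 100, 1_000, 10_000):
    print(f"N={n:>6}  Buses~N^2: {n**2:>12,}  Taxis~N: {n:>6,}  "
          f"Motorcycles~log2 N: {math.log2(n):6.2f}")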

  • 29.
    Dan, Gyorgy
    et al.
    KTH Royal Institute of Technology, Stockholm.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Centralized and Distributed Protocols for Tracker-based Dynamic Swarm Management (2013). In: IEEE/ACM Transactions on Networking, ISSN 1063-6692, E-ISSN 1558-2566, Vol. 21, no 1, p. 297-310. Article in journal (Refereed)
    Abstract [en]

    With BitTorrent, efficient peer upload utilization is achieved by splitting contents into many small pieces, each of which may be downloaded from different peers within the same swarm. Unfortunately, piece and bandwidth availability may cause the file-sharing efficiency to degrade in small swarms with few participating peers. Using extensive measurements, we identified hundreds of thousands of torrents with several small swarms for which reallocating peers among swarms and/or modifying the peer behavior could significantly improve the system performance. Motivated by this observation, we propose a centralized and a distributed protocol for dynamic swarm management. The centralized protocol (CSM) manages the swarms of peers at minimal tracker overhead. The distributed protocol (DSM) manages the swarms of peers while ensuring load fairness among the trackers. Both protocols achieve their performance improvements by identifying and merging small swarms and allow load sharing for large torrents. Our evaluations are based on measurement data collected during eight days from over 700 trackers worldwide, which collectively maintain state information about 2.8 million unique torrents. We find that CSM and DSM can achieve most of the performance gains of dynamic swarm management. These gains are estimated to be up to 40% on average for small torrents.
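
    The sketch below illustrates the basic swarm-merging idea in the abstract: identify torrents split into several small swarms and point peers of the small swarms toward one larger swarm. The threshold, data layout, and merge rule are made up; the paper's CSM and DSM protocols additionally manage tracker overhead and load fairness.

# Illustrative swarm-merging sketch: fold small swarms of a torrent into its
# largest swarm. Thresholds and data layout are made-up placeholders.
SMALL_SWARM_THRESHOLD = 10   # merge candidates: swarms with fewer peers than this

# swarms[torrent_id] = {tracker_id: number_of_peers}
swarms = {
    "torrent-a": {"tr1": 3, "tr2": 4, "tr3": 2},
    "torrent-b": {"tr1": 500, "tr2": 6},
}

def merge_plan(swarms, threshold=SMALL_SWARM_THRESHOLD):
    """Suggest, per torrent, which small swarms to fold into the largest swarm."""
    plan = {}
    for torrent, per_tracker in swarms.items():
        if len(per_tracker) < 2:
            continue                                     # nothing to merge
        target = max(per_tracker, key=per_tracker.get)   # largest swarm wins
        small = [t for t, n in per_tracker.items() if n < threshold and t != target]
        if small:
            plan[torrent] = {"merge_from": sorted(small), "into": target}
    return plan

print(merge_plan(swarms))
# torrent-a: merge tr1 and tr3 into tr2; torrent-b: merge tr2 into tr1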

  • 30.
    Dan, Gyorgy
    et al.
    Royal Institute of Technology, Stockholm, Sweden.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Chatzidrossos, Illias
    Royal Institute of Technology, Stockholm, Sweden.
    Efficient and Highly Available Peer Discovery: A Case for Independent Trackers and Gossiping (2011). In: 2011 IEEE International Conference on Peer-to-Peer Computing (P2P), IEEE, 2011, p. 290-299. Conference paper (Refereed)
    Abstract [en]

    Tracker-based peer-discovery is used in most commercial peer-to-peer content distribution systems, as it provides performance benefits compared to distributed solutions, and facilitates the control and monitoring of the overlay. But a tracker is a central point of failure, and its deployment and maintenance incur costs; hence an important question is how high tracker availability can be achieved at low cost. We investigate highly available, low overhead peer discovery, using independent trackers and a simple gossip protocol. This work is a step towards understanding the trade-off between the overhead and the achievable peer connectivity in highly available distributed overlay-management systems for peer-to-peer content distribution. We propose two protocols that connect peers in different swarms efficiently with a constant, but tunable, overhead. The two protocols, Random Peer Migration (RPM) and Random Multi-Tracking (RMT), employ a small fraction of peers in a torrent to virtually increase the size of swarms. We develop analytical models of the protocols based on renewal theory, and validate the models using both extensive simulations and controlled experiments. We illustrate the potential value of the protocols using large-scale measurement data that contains hundreds of thousands of public torrents with several small swarms, with limited peer connectivity. We estimate the achievable gains to be up to 40% on average for small torrents.

  • 31.
    Dan, György
    et al.
    Royal Institute of Technology (KTH), Stockholm, Sweden.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Dynamic Content Allocation for Cloud-assisted Service of Periodic Workloads (2014). In: INFOCOM 2014, IEEE, 2014, p. 853-861. Conference paper (Refereed)
    Abstract [en]

    Motivated by improved models for content workload prediction, in this paper we consider the problem of dynamic content allocation for a hybrid content delivery system that combines cloud-based storage with low cost dedicated servers that have limited storage and unmetered upload bandwidth. We formulate the problem of allocating contents to the dedicated storage as a finite horizon dynamic decision problem, and show that a discrete time decision problem is a good approximation for piecewise stationary workloads. We provide an exact solution to the discrete time decision problem in the form of a mixed integer linear programming problem, propose computationally feasible approximations, and give bounds on their approximation ratios. Finally, we evaluate the algorithms using synthetic and measured traces from a commercial music on-demand service and give insight into their performance as a function of the workload characteristics.

  • 32. Dan, György
    et al.
    Carlsson, Niklas
    University of Calgary.
    Dynamic Swarm Management for Improved BitTorrent Performance (2009). In: Proc. International Workshop on Peer-to-Peer Systems (IPTPS '09), Boston, MA, April 2009 (in conjunction with NSDI'09), 2009, p. 1-6. Conference paper (Refereed)
  • 33. Dan, György
    et al.
    Carlsson, Niklas
    University of Calgary.
    Power-law Revisited: A Large Scale Measurement Study of P2P Content Popularity (2010). In: Proc. International Workshop on Peer-to-Peer Systems (IPTPS ’10), San Jose, CA, April 2010 (in conjunction with NSDI'10), 2010, p. 1-6. Conference paper (Refereed)
  • 34.
    de Leng, Daniel
    et al.
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Tiger, Mattias
    Linköping University, Department of Computer and Information Science, Artificial Intelligence and Integrated Computer Systems. Linköping University, Faculty of Science & Engineering.
    Almquist, Mathias
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Almquist, Viktor
    Linköping University, Department of Computer and Information Science. Linköping University, Faculty of Science & Engineering.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Second Screen Journey to the Cup: Twitter Dynamics during the Stanley Cup Playoffs (2018). In: Proceedings of the 2nd Network Traffic Measurement and Analysis Conference (TMA), 2018, p. 1-8. Conference paper (Refereed)
    Abstract [en]

    With Twitter and other microblogging services, users can easily express their opinion and ideas in short text messages. A recent trend is that users use the real-time property of these services to share their opinions and thoughts as events unfold on TV or in the real world. In the context of TV broadcasts, Twitter (over a mobile device, for example) is referred to as a second screen. This paper presents the first characterization of the second screen usage over the playoffs of a major sports league. We present both temporal and spatial analysis of the Twitter usage during the end of the National Hockey League (NHL) regular season and the 2015 Stanley Cup playoffs. Our analysis provides insights into the usage patterns over the full 72-day period and with regards to in-game events such as goals, but also with regards to geographic biases. Quantifying these biases and the significance of specific events, we then discuss and provide insights into how the playoff dynamics may impact advertisers and third-party developers that try to provide increased personalization.

  • 35. Dvir, Amit
    et al.
    Carlsson, Niklas
    University of Calgary.
    Power-aware Recovery for Geographic Routing (2009). In: Proc. IEEE Wireless Communications and Networking Conference (WCNC ’09), IEEE, 2009, p. 2851-2856. Conference paper (Refereed)
  • 36.
    Estévez, Alberto García
    et al.
    Universidad de Alcala de Henares, Spain .
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Geo-location-aware Emulations for Performance Evaluation of Mobile Applications (2014). In: Wireless On-demand Network Systems and Services (WONS 2014), IEEE, 2014, p. 73-76. Conference paper (Refereed)
    Abstract [en]

    This paper presents the design of a simple emulation framework for performance evaluation and testing of mobile applications. Our testbed combines production hardware and software to allow emulation of realistic and repeatable mobility scenarios, in which the mobile user can travel long distances while being served by an application server. The framework allows (i) geo-location information, (ii) client network conditions such as bandwidth and loss rate, as well as (iii) the application workload to be emulated synchronously. To illustrate the power of the framework we also present the design, proof-of-concept implementation, and evaluation of a geo-smart scheduler for application updates in smartphones. This geo-smart scheduler reduces the average download time by using a network performance map to schedule the downloads when at places with relatively good conditions. Our trace-driven evaluation of the geo-smart scheduler illustrates the workings of the emulation framework and the potential of the geo-smart scheduler.
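
    As a rough sketch of the geo-smart idea, the snippet below picks, from a known route, the location where a given update is expected to download fastest according to a network performance map. The route, map, and bandwidth values are hypothetical.

# Hedged sketch of a geo-smart update scheduler: given a network-performance map
# (expected bandwidth per location along a known route) and an update size,
# schedule the download where it is expected to finish fastest. All values are
# hypothetical.
def best_download_spot(route, perf_map_mbps, update_size_mb):
    """Return (location, expected_seconds) minimizing expected download time."""
    def expected_seconds(loc):
        bandwidth = perf_map_mbps.get(loc, 0.1)   # pessimistic default for unknown spots
        return update_size_mb * 8.0 / bandwidth   # MB -> Mb, then divide by Mbps
    best = min(route, key=expected_seconds)
    return best, expected_seconds(best)

route = ["home", "highway-segment-1", "office", "cafe"]
perf_map_mbps = {"home": 20.0, "highway-segment-1": 2.0, "office": 50.0, "cafe": 8.0}
print(best_download_spot(route, perf_map_mbps, update_size_mb=40))  # ('office', 6.4)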

  • 37. Garg, Sanchit
    et al.
    Gupta, Trinabh
    Carlsson, Niklas
    University of Calgary.
    Mahanti, Anirban
    Evolution of an Online Social Aggregation Network: An Empirical Study (2009). In: Proc. ACM Internet Measurement Conference (IMC ’09), ACM, 2009, p. 315-321. Conference paper (Refereed)
  • 38.
    Gill, Phillipa
    et al.
    University of Toronto.
    Arlitt, Martin
    HP Labs, Palo Alto.
    Carlsson, Niklas
    Linköping University, The Institute of Technology. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Mahanti, Anirban
    NICTA.
    Williamson, Carey
    University of Calgary.
    Characterizing Organizational Use of Web-Based Services: Methodology, Challenges, Observations, and Insights (2011). In: ACM Transactions on the Web, ISSN 1559-1131, Vol. 5, no 4. Article in journal (Refereed)
    Abstract [en]

    Today's Web provides many different functionalities, including communication, entertainment, social networking, and information retrieval. In this article, we analyze traces of HTTP activity from a large enterprise and from a large university to identify and characterize Web-based service usage. Our work provides an initial methodology for the analysis of Web-based services. While it is nontrivial to identify the classes, instances, and providers for each transaction, our results show that most of the traffic comes from a small subset of providers, which can be classified manually. Furthermore, we assess both qualitatively and quantitatively how the Web has evolved over the past decade, and discuss the implications of these changes.

  • 39.
    Gopinathan, Ajay
    et al.
    University of Calgary, Canada.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Li, Zongpeng
    University of Calgary, Canada.
    Wu, Chuan
    University of Hong Kong, Peoples R China.
    Revenue-maximizing and Truthful Online Auctions for Dynamic Spectrum Access (2016). In: 2016 12th Annual Conference on Wireless On-demand Network Systems and Services (WONS), IEEE, 2016, p. 1-8. Conference paper (Refereed)
    Abstract [en]

    Secondary spectrum auctions have been suggested as a strategically robust mechanism for distributing idle spectrum to competing secondary users. However, previous work on such auction design has assumed a static auction setting, thus failing to fully exploit the inherently time-varying nature of spectrum demand and utilization. In this paper, we address this issue from the perspective of the primary user who wishes to maximize the auction revenue. We present an online auction framework that dynamically accepts bids and allocates spectrum. We prove rigorously that our online auction framework is truthful in the multiple dimensions of bid values, as well as bid timing parameters. To protect against unbounded loss of revenue due to later bids, we introduce controlled preemption into our mechanism. We prove that preemption, coupled with the technique of inflating bids artificially, leads to an online auction that guarantees a 1/5-fraction of the optimal revenue as obtained by an offline adversary. Since the previous guarantee holds only for the optimal channel allocation, we further provide a greedy channel allocation scheme which provides scalability. We prove that the greedy scheme also obtains a constant competitive revenue guarantee, where the constant depends on the parameter of the conflict graph.

  • 40. Gupta, Trinabh
    et al.
    Garg, Sanchit
    Mahanti, Anirban
    Carlsson, Niklas
    University of Calgary.
    Arlitt, Martin
    Characterization of FriendFeed – A Web-based Social Aggregation Service (2009). In: Proc. AAAI International Conference on Weblogs and Social Media (ICWSM ’09), AAAI Press, 2009, p. 218-221. Conference paper (Refereed)
  • 41.
    Hashemian, Raoufehsadat
    et al.
    University of Calgary, Canada.
    Krishnamurthy, Diwakar
    University of Calgary, Canada.
    Arlitt, Martin
    HP Labs, Palo alto, CA, USA.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Characterizing the Scalability of a Web Application on a Multi-core Server (2014). In: Concurrency and Computation: Practice and Experience, ISSN 1532-0626, Vol. 26, no 12, p. 2027-2052. Article in journal (Refereed)
    Abstract [en]

    The advent of multi-core technology motivates new studies to understand how efficiently Web servers utilize such hardware. This paper presents a detailed performance study of a Web server application deployed on a modern eight-core server. Our study shows that default Web server configurations result in poor scalability with increasing core counts. We study two different types of workloads, namely, a workload with intense TCP/IP related OS activity and the SPECweb2009 Support workload with more application-level processing. We observe that the scaling behaviour is markedly different for these workloads, mainly because of the difference in the performance of static and dynamic requests. While static requests perform poorly when moving from using one socket to both sockets in the system, the converse is true for dynamic requests. We show that, contrary to what was suggested by previous work, Web server scalability improvement policies need to be adapted based on the type of workload experienced by the server. The results of our experiments reveal that with workload-specific Web server configuration strategies, a multi-core server can be utilized up to 80% while still serving requests without significant queuing delays; utilizations beyond 90% are also possible, while still serving requests with ‘acceptable’ response times.

  • 42.
    Hashemian, Raoufehsadat
    et al.
    University of Calgary, Alberta, Canada .
    Krishnamurthy, Diwakar
    University of Calgary, Alberta, Canada .
    Arlitt, Martin
    HP Labs, Palo Alto, California, USA .
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Improving the Scalability of a Multi-core Web Server (2013). In: ICPE '13 Proceedings of the 4th ACM/SPEC International Conference on Performance Engineering, Association for Computing Machinery (ACM), 2013, p. 161-172. Conference paper (Refereed)
    Abstract [en]

    Improving the performance and scalability of Web servers enhances user experiences and reduces the costs of providing Web-based services. The advent of multi-core technology motivates new studies to understand how efficiently Web servers utilize such hardware. This paper presents a detailed performance study of a Web server application deployed on a modern 2-socket, 4-cores-per-socket server. Our study shows that default, "out-of-the-box" Web server configurations can cause the system to scale poorly with increasing core counts. We study two different types of workloads, namely a workload that imposes intense TCP/IP related OS activity and the SPECweb2009 Support workload, which incurs more application-level processing. We observe that the scaling behaviour is markedly different for these two types of workloads, mainly due to the difference in the performance characteristics of static and dynamic requests. The results of our experiments reveal that with workload-specific Web server configuration strategies a modern multi-core server can be utilized up to 80% while still serving requests without significant queuing delays; utilizations beyond 90% are also possible, while still serving requests with acceptable response times.

  • 43.
    Hiran, Rahul
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Gill, Phillipa
    University of Toronto, Canada.
    Characterizing Large-scale Routing Anomalies: A Case Study of the China Telecom Incident2013In: Passive and Active Measurement / [ed] Matthew Roughan, Rocky Chang, Springer Berlin/Heidelberg, 2013, p. 229-238Conference paper (Refereed)
    Abstract [en]

    China Telecom’s hijack of approximately 50,000 IP prefixes in April 2010 highlights the potential for traffic interception on the Internet. Indeed, the sensitive nature of the hijacked prefixes, including those of US government agencies, garnered a great deal of attention and highlighted the importance of being able to characterize such incidents after they occur. We use the China Telecom incident as a case study to understand (1) what can be learned about large-scale routing anomalies using public data sets, and (2) what types of data should be collected to diagnose routing anomalies in the future. We develop a methodology for inferring which prefixes may be impacted by traffic interception using only control-plane data, and validate our technique using data-plane traces. The key findings of our study of the China Telecom incident are: (1) the geographic distribution of announced prefixes is similar to the global distribution, with a tendency towards prefixes registered in the Asia-Pacific region; (2) there is little evidence of subprefix hijacking, which supports the hypothesis that this incident was likely a leak of existing routes; and (3) by preferring customer routes, providers inadvertently enabled interception of their customers’ traffic.
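
    For intuition only, a minimal sketch of the kind of control-plane check that separates subprefix hijacks from leaks of existing routes; the prefixes, AS numbers, and baseline table are hypothetical, and the paper's actual methodology is considerably more involved:

    ```python
    # Sketch: flag announcements that are strict subprefixes of already-routed
    # prefixes (hijack-like) vs. announcements of existing prefixes with a new
    # origin AS (consistent with a leak of existing routes).
    from ipaddress import ip_network

    # Hypothetical baseline routing table: prefix -> expected origin AS.
    baseline = {
        ip_network("203.0.113.0/24"): 64500,
        ip_network("198.51.100.0/24"): 64501,
    }

    def classify(prefix_str, origin_as, suspect_as):
        prefix = ip_network(prefix_str)
        if origin_as != suspect_as:
            return "not announced by the suspect AS"
        if prefix in baseline:
            return "existing prefix, new origin (leak-like)"
        if any(prefix.subnet_of(p) and prefix != p for p in baseline):
            return "subprefix of an existing route (hijack-like)"
        return "previously unseen prefix"

    print(classify("203.0.113.0/24", 64666, suspect_as=64666))
    print(classify("198.51.100.128/25", 64666, suspect_as=64666))
    ```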

  • 44.
    Hiran, Rahul
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Shahmehri, Nahid
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Crowd-based Detection of Routing Anomalies on the Internet2015In: Proc. IEEE Conference on Communications and Network Security (IEEE CNS), Florence, Italy, Sept. 2015., IEEE Computer Society Digital Library, 2015, p. 388-396Conference paper (Refereed)
    Abstract [en]

    The Internet is highly susceptible to routing attacks and there is no universally deployed solution that ensures that traffic is not hijacked by third parties. Individuals or organizations wanting to protect themselves from sustained attacks must therefore typically rely on measurements and traffic monitoring to detect attacks. Motivated by the high overhead costs of continuous active measurements, we argue that passive monitoring, combined with collaborative information sharing and statistics, can be used to provide alerts about traffic anomalies that may require further investigation. In this paper we present and evaluate a user-centric, crowd-based approach in which users passively monitor their network traffic, share information about potential anomalies, and apply combined collaborative statistics to identify potential routing anomalies. The approach uses only passively collected round-trip time (RTT) measurements, is shown to have low overhead regardless of whether a central or distributed architecture is used, and provides an attractive tradeoff between attack detection rates (when there is an attack) and false alert rates (needing further investigation) under normal conditions. Our data-driven analysis using longitudinal and distributed RTT measurements also provides insights into detector selection and the relative weight that should be given to candidate detectors at different distances from the potential victim node.
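
    A minimal sketch of the general crowd-based idea, assuming a median-plus-MAD deviation test and a simple quorum rule; the paper's detectors, thresholds, and weighting scheme differ:

    ```python
    # Sketch: each monitor compares the current RTT to its own baseline
    # (median + k*MAD); an alert is raised only if enough monitors agree.
    from statistics import median

    def rtt_deviates(history_ms, current_ms, k=5.0):
        """Flag an RTT that deviates strongly from this monitor's baseline."""
        base = median(history_ms)
        mad = median(abs(x - base) for x in history_ms) or 1.0
        return abs(current_ms - base) > k * mad

    def crowd_alert(observations, quorum=0.5):
        """observations: list of (history_ms, current_ms), one per monitor."""
        votes = [rtt_deviates(hist, cur) for hist, cur in observations]
        return sum(votes) / len(votes) >= quorum

    obs = [
        ([31, 30, 32, 29, 31], 33),    # normal fluctuation
        ([45, 44, 46, 45, 44], 120),   # large deviation
        ([28, 29, 27, 28, 30], 95),    # large deviation
    ]
    print(crowd_alert(obs))            # -> True (2 of 3 monitors deviate)
    ```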

  • 45.
    Hiran, Rahul
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Shahmehri, Nahid
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Does Scale, Size, and Locality Matter?: Evaluation of Collaborative BGP Security Mechanisms2016In: 2016 IFIP NETWORKING CONFERENCE (IFIP NETWORKING) AND WORKSHOPS, IEEE , 2016, p. 261-269Conference paper (Refereed)
    Abstract [en]

    The Border Gateway Protocol (BGP) was not designed with security in mind and is vulnerable to many attacks, including prefix/subprefix hijacks, interception attacks, and imposture attacks. Despite many protocols having been proposed to detect or prevent such attacks, no solution has been widely deployed. Yet, the effectiveness of most proposals relies on large-scale adoption and cooperation between many large Autonomous Systems (ASes). In this paper we use measurement data to evaluate some promising, previously proposed techniques in cases where they are implemented by different subsets of ASes, and answer questions regarding which ASes need to collaborate, the importance of the locality and size of the participating ASes, and how many ASes are needed to achieve good efficiency when different subsets of ASes collaborate. For our evaluation we use topologies and routing information derived from real measurement data. We consider collaborative detection and prevention techniques that use (i) prefix origin information, (ii) route path updates, or (iii) passively collected round-trip time (RTT) information. Our results and answers to the above questions help determine the effectiveness of potential incremental rollouts, incentivized or required by regional legislation, for example. While there are differences between the techniques, and two of the three classes see the biggest benefits when detection/prevention is performed close to the source of an attack, the results show that significant gains can be achieved even with only regional collaboration.
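
    As a hedged sketch of the first class of techniques (shared prefix-origin information), assuming a simple registry populated only by participating ASes; this is not the paper's evaluation code, and it illustrates why coverage depends on which ASes collaborate:

    ```python
    # Sketch: a collaborating AS validates the origin of incoming announcements
    # against a registry contributed only by participants; announcements for
    # prefixes that no participant has registered cannot be checked.
    registry = {              # hypothetical: prefix -> registered origin AS
        "203.0.113.0/24": 64500,
        "198.51.100.0/24": 64501,
    }

    def validate(prefix, origin_as):
        expected = registry.get(prefix)
        if expected is None:
            return "unknown (prefix not registered by any participant)"
        return "valid" if origin_as == expected else "invalid origin"

    print(validate("203.0.113.0/24", 64500))   # valid
    print(validate("203.0.113.0/24", 64666))   # invalid origin
    print(validate("192.0.2.0/24", 64666))     # unknown
    ```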

  • 46.
    Hiran, Rahul
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Shahmehri, Nahid
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    PrefiSec: A Distributed Alliance Framework for Collaborative BGP Monitoring and Prefix-based Security2014In: Proc. ACM CCS Workshop on Information Sharing and Collaborative Security (ACM WISCS @CCS), ACM Digital Library, 2014, p. 3-12Conference paper (Refereed)
    Abstract [en]

    This paper presents the design and data-driven overhead analysis of PrefiSec, a distributed framework that helps collaborating organizations to effectively maintain and share network information in the fight against miscreants. PrefiSec is a novel distributed IP-prefix-based solution, which maintains information about the activities associated with IP prefixes (blocks of IP addresses) and autonomous systems (ASes). Within PrefiSec, we design and evaluate simple and scalable mechanisms and policies that allow participating entities to effectively share network information, which helps to protect against prefix/subprefix attacks, interception attacks, and a wide range of edge-based attacks, such as spamming, scanning, and botnet activities. Timely reporting of such information helps participants improve their security and keep their security footprints clean, and incentivizes participation. Public wide-area BGP announcements, traceroutes, and simulations are used to estimate the overhead, scalability, and alert rates. Our results show that PrefiSec helps improve system security and can scale to large systems.
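
    A minimal sketch of prefix-keyed information sharing in the spirit described above; the data structures, identifiers, and aggregation policy are assumptions rather than PrefiSec's actual design:

    ```python
    # Sketch: participants report edge-based activity (spam, scanning, ...) keyed
    # by the covering prefix; a query returns how many distinct reporters have
    # flagged a given (prefix, activity) pair.
    from collections import defaultdict
    from ipaddress import ip_address, ip_network

    MONITORED = [ip_network("203.0.113.0/24"), ip_network("198.51.100.0/24")]
    reports = defaultdict(set)   # (prefix, activity) -> set of reporter IDs

    def report(reporter_id, offending_ip, activity):
        ip = ip_address(offending_ip)
        for prefix in MONITORED:
            if ip in prefix:
                reports[(prefix, activity)].add(reporter_id)

    def alert_level(prefix, activity):
        return len(reports[(ip_network(prefix), activity)])

    report("org-A", "203.0.113.7", "scanning")
    report("org-B", "203.0.113.99", "scanning")
    print(alert_level("203.0.113.0/24", "scanning"))   # -> 2 distinct reporters
    ```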

  • 47.
    Islam, M. Aminul
    et al.
    University of Saskatchewan, Canada.
    Eager, Derek
    University of Saskatchewan, Canada.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Mahanti, Anirban
    NICTA, Alexandria, NSW, Australia.
    Revisiting Popularity Characterization and Modeling for User-generated Videos2013In: Modeling, Analysis & Simulation of Computer and Telecommunication Systems (MASCOTS), 2013 IEEE 21st International Symposium, Institute of Electrical and Electronics Engineers (IEEE), 2013, p. 350-354Conference paper (Refereed)
    Abstract [en]

    This paper presents new results on characterization and modeling of user-generated video popularity evolution, based on a recent complementary data collection for videos that were previously the subject of an eight-month data collection campaign during 2008/09. In particular, during 2011, we collected two contiguous months of weekly view counts for videos in two separate 2008/09 datasets, namely the "recently-uploaded" and the "keyword-search" datasets. These datasets contain statistics for videos that were uploaded within 7 days of the start of data collection in 2008 and videos that were discovered using a keyword search algorithm in 2008, respectively. Our analysis shows that the average weekly view count for the recently-uploaded videos had not decreased by the time of the second measurement period, in comparison to the middle and later portions of the first measurement period. The new data is used to evaluate the accuracy of a previously proposed model for synthetic view count generation for time periods that are substantially longer than previously considered. We find that the model yielded distributions of total (lifetime) video view counts that match the empirical distributions; however, significant differences between the model and empirical data were observed with respect to other metrics. These differences appear to arise because particular popularity characteristics change over time rather than being week-invariant as assumed in the model.
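
    As an illustration of one way to compare synthetic and empirical view-count distributions (the paper's evaluation uses its own datasets and metrics), a small two-sample Kolmogorov-Smirnov sketch with hypothetical counts:

    ```python
    # Sketch: two-sample KS distance between empirical and synthetic lifetime
    # view-count samples (smaller means the distributions are closer).
    def ks_distance(sample_a, sample_b):
        a, b = sorted(sample_a), sorted(sample_b)
        points = sorted(set(a) | set(b))

        def ecdf(sample, x):
            # fraction of observations <= x
            return sum(1 for v in sample if v <= x) / len(sample)

        return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

    empirical = [10, 250, 30, 4000, 120, 75, 900]    # hypothetical view counts
    synthetic = [12, 300, 25, 3500, 100, 60, 1100]   # hypothetical model output
    print(f"KS distance: {ks_distance(empirical, synthetic):.3f}")
    ```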

  • 48.
    James, Cyriac
    et al.
    University of Calgary.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Green Domino Incentives: Impact of Energy-aware Adaptive Link Rate Policies in Routers2015In: Proc. ACM/SPEC International Conference on Performance Engineering (ACM/SPEC ICPE), Association for Computing Machinery (ACM), 2015, p. 211-221Conference paper (Refereed)
    Abstract [en]

    To reduce the energy consumption of lightly loaded routers, operators are increasingly incentivized to use Adaptive Link Rate (ALR) policies and techniques. These techniques typically save energy by adapting link service rates or by identifying opportune times to put interfaces into low-power sleep/idle modes. In this paper, we present a trace-based analysis of the impact that a router implementing these techniques has on the neighboring routers. We show that policies adapting the service rate at larger time scales, either by changing the service rate of the link interface itself or by changing which redundant heterogeneous link is active, typically have large positive effects on neighboring routers, with the downstream routers being able to achieve up to 30% additional energy savings due to the upstream routers implementing ALR policies. Policies that save energy by temporarily placing the interface in a low-power sleep/idle mode typically have a smaller, but still positive, impact on neighboring routers. Best are hybrid policies that use a combination of these two techniques. The hybrid policies consistently achieve the biggest energy savings and have positive cascading effects on surrounding routers. Our results show that implementation of ALR policies can contribute to large-scale positive domino incentive effects, as they further increase the potential energy savings seen by those neighboring routers that consider implementing ALR techniques, while satisfying performance guarantees on the routers themselves.
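
    A toy trace-driven sketch of an adaptive link rate policy, using an assumed two-rate power model and a hypothetical demand trace rather than the traces and policies studied in the paper:

    ```python
    # Sketch: compare the energy of a fixed full-rate link with a simple adaptive
    # policy that drops to a low rate whenever per-interval demand is low enough.
    INTERVAL_S = 1.0
    FULL = {"rate_mbps": 1000, "power_w": 10.0}     # assumed power model
    LOW = {"rate_mbps": 100, "power_w": 3.0}

    demand_mbps = [20, 50, 800, 5, 0, 950, 30, 10]  # hypothetical per-interval load

    def always_full(demand):
        return FULL

    def adaptive(demand):
        return LOW if demand <= LOW["rate_mbps"] else FULL

    def energy(policy):
        return sum(policy(d)["power_w"] * INTERVAL_S for d in demand_mbps)

    saved = 1 - energy(adaptive) / energy(always_full)
    print(f"energy saved by the adaptive link rate policy: {saved:.0%}")
    ```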

  • 49.
    Keskisärkkä, Robin
    et al.
    Linköping University, Department of Computer and Information Science, Human-Centered systems. Linköping University, Faculty of Science & Engineering.
    Li, Huanyu
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Cheng, Sijin
    Linköping University, Faculty of Science & Engineering. Linköping University, Department of Computer and Information Science, Database and information techniques.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, Faculty of Science & Engineering.
    Lambrix, Patrick
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    An Ontology for Ice Hockey2019In: ISWC 2019 Satellites: Proceedings of the ISWC 2019 Satellite Tracks (Posters & Demonstrations, Industry, and Outrageous Ideas) co-located with 18th International Semantic Web Conference (ISWC 2019), 2019, p. 13-16Conference paper (Refereed)
    Abstract [en]

    Ice hockey is a highly popular sport that has seen a significant increase in the use of sport analytics. To aid in such analytics, most major leagues collect and share increasing amounts of play-by-play data and other statistics. Additionally, some websites specialize in making such data available to the public in user-friendly forms. However, these sites fail to capture the semantic information of the data, and cannot be used to support more complex data requirements. In this paper, we present the design and development of an ice hockey ontology that provides improved knowledge representation, enables intelligent search and information acquisition, and helps when using information from multiple databases. Our ontology is substantially larger than previous ice hockey ontologies (which cover only a small part of the domain); it provides a formal and explicit representation of the ice hockey domain, and supports information retrieval, data reuse, and complex performance metrics.
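
    A small sketch of what such formal modelling can look like, assuming the rdflib library and hypothetical class and property names rather than the ontology actually published in the paper:

    ```python
    # Sketch: a few ice-hockey classes and properties expressed as RDF/OWL triples.
    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS, XSD

    HOCKEY = Namespace("http://example.org/hockey#")   # hypothetical namespace
    g = Graph()
    g.bind("hockey", HOCKEY)

    for cls in ("Player", "Team", "Game", "PlayByPlayEvent", "ShotEvent"):
        g.add((HOCKEY[cls], RDF.type, OWL.Class))

    g.add((HOCKEY.ShotEvent, RDFS.subClassOf, HOCKEY.PlayByPlayEvent))

    g.add((HOCKEY.playsFor, RDF.type, OWL.ObjectProperty))
    g.add((HOCKEY.playsFor, RDFS.domain, HOCKEY.Player))
    g.add((HOCKEY.playsFor, RDFS.range, HOCKEY.Team))

    g.add((HOCKEY.jerseyNumber, RDF.type, OWL.DatatypeProperty))
    g.add((HOCKEY.jerseyNumber, RDFS.domain, HOCKEY.Player))
    g.add((HOCKEY.jerseyNumber, RDFS.range, XSD.integer))

    print(g.serialize(format="turtle"))   # rdflib >= 6 returns a str here
    ```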

  • 50.
    Krishnamoorthi, Vengatanathan
    et al.
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Bergström, Patrik
    Linköping University, Department of Computer and Information Science. Linköping University, The Institute of Technology.
    Carlsson, Niklas
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Eager, Derek
    University of Saskatchewan, Canada.
    Mahanti, Anirban
    NICTA, Australia.
    Shahmehri, Nahid
    Linköping University, Department of Computer and Information Science, Database and information techniques. Linköping University, The Institute of Technology.
    Empowering the Creative User: Personalized HTTP-based Adaptive Streaming of Multi-path Nonlinear Video2013In: Computer communication review, ISSN 0146-4833, E-ISSN 1943-5819, Vol. 43, no 4, p. 591-596Article in journal (Refereed)
    Abstract [en]

    This paper presents the design, implementation, and validation of a novel system that supports streaming and playout of personalized, multi-path, nonlinear video. In contrast to regular video, in which the file content is played sequentially, our design allows multiple nonlinear video sequences of the underlying (linear) video to be stitched together and played in any personalized order, and clients can be provided multiple path choices. The design combines the ideas of HTTP-based adaptive streaming (HAS) and multi-path nonlinear video. Personalization of the content is achieved with the use of a customized metafile, which is downloaded separately from the underlying media and the manifest file that defines the HAS structure. An extension to the user interface allows path choices to be presented to and made by the user. Novel buffer management and prefetching policies are used to ensure seamless, uninterrupted playback regardless of client path choices, even under scenarios in which clients defer their choices until the last possible moment. Our solution allows creative home users to easily create their own multi-path nonlinear video, opening the door to a wide range of new opportunities and media forms.
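
    A simplified sketch of the branch-point prefetching idea, with an assumed buffer threshold and segment naming; the paper's buffer management and prefetching policies are more elaborate:

    ```python
    # Sketch: once the buffer along the current path is comfortable, spend spare
    # bandwidth prefetching the first segment of each branch the user might pick
    # at the next branch point, so playback continues seamlessly either way.
    def schedule_next_download(buffer_s, safe_buffer_s, current_path,
                               branch_options, prefetched):
        """Return the next segment to download, or None if nothing is needed.

        buffer_s       -- seconds of video buffered along the current path
        safe_buffer_s  -- buffer level considered safe against stalls
        current_path   -- not-yet-downloaded segments on the current path
        branch_options -- first segment of each alternative at the next branch
        prefetched     -- set of already prefetched branch segments
        """
        if buffer_s < safe_buffer_s and current_path:
            return current_path[0]        # protect against stalls first
        for seg in branch_options:
            if seg not in prefetched:
                return seg                # then hedge across possible branches
        return current_path[0] if current_path else None

    nxt = schedule_next_download(
        buffer_s=18, safe_buffer_s=12,
        current_path=["pathA_seg5", "pathA_seg6"],
        branch_options=["pathB_seg1", "pathC_seg1"],
        prefetched={"pathB_seg1"},
    )
    print(nxt)   # -> "pathC_seg1"
    ```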
