While GPS has long been the industry standard for localizing an entity or person anywhere in the world, it loses much of its accuracy and value when used indoors. To enable services such as indoor navigation, other methods must be used. An amendment to the Wi-Fi standard, IEEE 802.11mc (Wi-Fi RTT), enables distance estimation between a transmitter and a receiver based on the Round-Trip Time (RTT) of the signal. Using these distance estimations and the known locations of the transmitting Access Points (APs), the receiver's location can be estimated. In this thesis, a smartphone Wi-Fi RTT based Indoor Positioning System (IPS) built on an Unscented Kalman Filter (UKF) is presented. The UKF, using only RTT-based distance estimations as input, is established as the baseline implementation. Two extensions are then presented to improve the positioning performance: 1) a dead reckoning algorithm using the smartphone's Inertial Measurement Unit (IMU) sensors as an additional input to the UKF, and 2) a method to detect and adjust distance measurements made in Non-Line-of-Sight (NLoS) conditions. The implemented IPS is evaluated in an office environment in both favorable situations (predominantly Line-of-Sight conditions) and sub-optimal situations (dominant NLoS conditions). With both extensions, meter-level accuracy is achieved in both cases, with a 90th percentile error of less than 2 meters.
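As a rough illustration of the ranging and positioning principle summarized above (not the thesis's UKF implementation), the sketch below converts RTT readings to one-way distances via d = c·RTT/2 and estimates a 2D position by least-squares multilateration over known AP coordinates; the function names, AP layout, and RTT values are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light in m/s


def rtt_to_distance(rtt_seconds):
    """One-way distance estimate from a round-trip time: d = c * RTT / 2."""
    return C * rtt_seconds / 2.0


def estimate_position(ap_positions, distances, initial_guess=(1.0, 1.0)):
    """Least-squares multilateration from known AP positions and RTT-derived ranges."""
    ap_positions = np.asarray(ap_positions, dtype=float)
    distances = np.asarray(distances, dtype=float)

    def residuals(p):
        # Difference between measured ranges and the ranges implied by candidate position p.
        return np.linalg.norm(ap_positions - p, axis=1) - distances

    return least_squares(residuals, x0=np.asarray(initial_guess, dtype=float)).x


# Hypothetical example: three APs at known coordinates (meters) and noisy RTT readings.
aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0)]
rtts = [4.1e-8, 5.0e-8, 3.6e-8]            # seconds, round trip
ranges = [rtt_to_distance(t) for t in rtts]
print(estimate_position(aps, ranges))       # approximate (x, y) of the receiver
```

In practice, a filter such as the UKF would fuse these ranges over time (and, as in the thesis's extensions, with IMU-based dead reckoning) rather than solving each snapshot independently.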
Virtual reality (VR) is a medium of human interaction that is rapidly gaining popularity amid today's technological advances. Applications are being developed as quickly as the technology itself, and we have only seen the start of the benefits it could bring society. As the technology matures and earns trust, the use cases of virtual environments will be allowed to grow more complex. Already today, they often involve network streaming components with strict optimization requirements, needed to run in real time with minimal delay under normal network conditions. To achieve these optimizations, it is important to understand how users interact with such virtual environments. To support and facilitate the understanding of this kind of interaction, we have developed a method for creating qualitative datasets containing extensive information about the 3D scene as well as the sensor data from the head-mounted display (HMD). We then apply this method to create a sample dataset from a virtual 3D environment and analyze the collected data with some simple methods for demonstration purposes.