Many fundamental processes of change are taking place in the Internet and its applications. The topic of this year's seminar is to ask how the ongoing evolution of networking will change the threats and security requirements in the Internet, and how technology can meet these requirements.
One major development is the move to cloud computing. Computing resources are sold on-demand, and services need to take advantage of the new utility computing model for fast deployment and scalability. Also, mobile apps and desktop applications are increasingly distributed between client devices and online servers. This means that the security infrastructure and services must be scalable and able to cope with new kinds of distribution.
Another trend is that revolutionary developments in applications have overtaken the relatively slow progress in the underlying network layers. The new applications, such as Facebook and other social networking services, run entirely within one company's application servers. This is quite different from the traditional open application architecture, such as email or the web, where distributed servers talk to each other using standard protocols. As a consequence, the open Internet may be turning into a collection of semi-closed services controlled by individual businesses. This may have a profound effect on application security models and privacy controls.
Meanwhile, network protocols and link-layer access technologies are also evolving. Future Internet research has produced visions for the long-term development of network architectures, for example data-centric networking and software-controlled switching. There are still many open questions regarding the security of these architectures. A more immediate development is the deployment of ever faster wireless access technologies, which provide ubiquitous network access and enable context-aware services, and faster wired access links, which may change the balance of power especially in the area of unwanted traffic.
In this year’s network security seminar, we invite the students and tutors to think about the new security threats and opportunities created by such changes in network technologies and applications.
Security mechanisms for inter-domain routing. Inter-domain routing means routing between different autonomous systems (ASes), each of which in practice corresponds to a different Internet service provider (e.g. AT&T, TeliaSonera). Internally, each AS can route its traffic as it wants, but it must follow a common protocol when routing traffic to and from the AS. In practice, the protocol used is BGP. Experience shows that basic BGP has the potential to do a lot of damage if no security measures are deployed (see [1,2]). Several kinds of security mechanisms have been proposed, and the goal is to survey them and analyze their strengths and weaknesses. A good starting point is the recent paper in SIGCOMM [3].
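To make the threat concrete, here is a minimal Python sketch (illustrative only, not a real BGP implementation) of how longest-prefix matching lets a bogus, more-specific announcement divert traffic away from the legitimate origin AS. The prefixes and AS numbers are made up for the example.

```python
import ipaddress

def best_route(routes, dst):
    """Pick the forwarding route for dst: routes are (prefix, origin AS)
    announcements, and the longest (most specific) matching prefix wins."""
    dst = ipaddress.ip_address(dst)
    matches = [(prefix, asn) for prefix, asn in routes if dst in prefix]
    return max(matches, key=lambda r: r[0].prefixlen)[1]

# Legitimate announcement by AS 64500.
routes = [(ipaddress.ip_network("203.0.113.0/24"), 64500)]
assert best_route(routes, "203.0.113.10") == 64500

# Attacker AS 64666 announces a more-specific sub-prefix (prefix hijack):
# without origin validation, its announcement now attracts the traffic.
routes.append((ipaddress.ip_network("203.0.113.0/25"), 64666))
assert best_route(routes, "203.0.113.10") == 64666
```

The proposed security mechanisms in the literature aim exactly at preventing or detecting such unauthorized origin announcements.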
References:
Tutor: Matti Siekkinen
The Internet is evolving by extending into the mobile domain. The mobile Internet enables mobile users to access Internet services through mobile networks. Currently, cellular networks are converging with various wireless networks, e.g., WLAN and MANET, forming a heterogeneous mobile environment. In such an environment, it has been a big challenge for mobile operators and mobile/Internet service providers to offer seamless roaming and services across the heterogeneous networks. Trustworthy identity management is expected in such an environment. Previous work on this problem is tied to specific network circumstances and cannot be easily or widely applied in the heterogeneous environment. Mobile cloud computing further complicates the issue. How to build trusted identity management that achieves seamless roaming and provides smooth mobile Internet services has become a crucial question and a foundation for developing future Internet applications and services.
The goal of this study is to survey identity management solutions and to develop a trustworthy identity management scheme for mobile cloud computing.
References: provided after the topic is assigned.
Tutor: Zheng Yan
People's lives have been transformed by the fast growth of the Internet. It provides a powerful platform for remote communications, networking, computing and services. It carries a vast range of information resources and services, such as the World Wide Web and email. It has also given birth to a wide range of applications, e.g., Voice over Internet Protocol (VoIP), Internet Protocol television (IPTV), instant messaging (IM), e-commerce, blogs, and social networking. However, at the same time as people collect various information from the Internet, they may also receive a good deal of unwanted traffic, such as malware, spam, spyware, intrusions, and unsolicited commercial advertisements or content. This unexpected or harmful traffic can intrude on people's devices, occupy memory space, waste users' time and degrade the usage experience. Some malicious traffic (e.g., viruses) spreads over the network quickly because of its fast infection speed, yet costs its source very little.
Unwanted traffic burdens both users and Internet service providers, while its sources may profit from the business (e.g., by propagating commercial advertisements for their business customers). The Internet also offers an easy channel for distributing various content that users may not want. How to control or filter unwanted traffic for Internet users is an important problem that needs to be solved.
A survey of unwanted-traffic control technologies in the Internet is preferred. Ideally, the candidate will classify the existing technologies into a number of categories based on a research model of his or her own design.
References: provided after the topic is assigned.
Tutor: Zheng Yan
Google's business model is based on collecting Internet users' usage information (e.g., from their search behavior) in order to understand their preferences, habits and intentions, so that it can serve suitable advertisements. By giving up some of their private information (e.g., disclosing "what I want to find"), users gain free Google services.
In addition, when an Internet user interacts with or accesses the Internet (e.g., for social networking, VoIP, blogging or instant messaging), his or her activities and personal information can be automatically collected by third parties. How to enhance future Internet users' privacy according to their own preferences is an interesting topic worth studying.
The goal of the study is for the candidate to discuss the privacy issues of the future Internet: how serious they are, and how users perceive them (especially regarding Google's and Facebook's business models). What kind of solutions could future Internet users need and expect? What innovative ideas relate to this?
References: provided after the topic is assigned.
Tutor: Zheng Yan
Linux Containers take a completely different approach than system virtualization technologies such as KVM and Xen, which started by booting separate virtual systems on emulated hardware and then attempted to lower their overhead via paravirtualization and related mechanisms. Instead of retrofitting efficiency onto full isolation, LXC started out with an efficient mechanism (existing Linux process management) and added isolation, resulting in a system virtualization mechanism as scalable and portable as chroot, capable of simultaneously supporting thousands of emulated systems on a single server while also providing lightweight virtualization options to routers and smart phones.
LXC gets a lot of attention today in the Linux security world as a lightweight alternative to virtualization and application isolation. However, it has a number of challenges and difficulties that have not yet been resolved.
The goal of the seminar paper would be to gain an understanding of LXC, its goals, and its advantages and disadvantages compared to other approaches, and to think about the best use cases for this technology. Is it a good technology to use in a cloud, for example?
Note: Topic requires background in Linux OS, including Linux OS kernel.
References:
Tutor: Elena Reshetova
KVM (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure, and a processor-specific module, kvm-intel.ko or kvm-amd.ko. KVM also requires a modified QEMU, although work is underway to get the required changes upstream. Using KVM, one can run multiple virtual machines with unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.
The kernel component of KVM has been included in mainline Linux since 2.6.20. The goal of the seminar paper would be to gain an understanding of KVM, its goals, and its advantages and disadvantages compared to other approaches, and to think about the best use cases for this technology.
Note: Topic requires background in Linux OS, including Linux OS kernel.
References:
Tutor: Elena Reshetova
Xen is one of the oldest and most widely used virtualization solutions. It is used by many cloud platforms such as Amazon EC2, Nimbus and Eucalyptus.
The Xen® hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, ARM, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems.
The goal of the seminar paper would be to gain an understanding of Xen, its goals, and its advantages and disadvantages compared to other approaches, and to think about the best use cases for this technology.
References:
Tutor: Elena Reshetova
The goal of this task is to study the security issues in HTML5. The work should focus on those issues that are relevant to the development of a BitTorrent-based video streaming client. Our prototype of such a client makes heavy use of new HTML5 features such as web sockets, web workers, the web storage API, and the P2P API. One of the concerns slowing down the implementation of these HTML5 features in browsers is their security implications. So the task would be to perform a literature study of the security issues in these features, paying special attention to those aspects that are relevant to the P2P video streaming use case. It would be good to estimate whether these issues have any serious implications for our prototype and to look for ways to mitigate potential risks with proper design decisions.
References: provided after the topic is assigned.
Tutor: Jukka Nurminen
Traditional methods of data distribution in P2P networks, such as BitTorrent, lack security and are vulnerable to various threats. To overcome this problem, protocols such as Tor [2] and Freenet [3] were introduced, which provide different end-to-end privacy methods for users. However, these protocols suffer from poor performance and have been unable to compete with BitTorrent-like protocols. To offer a more flexible trade-off between completely secured and completely unsecured protocols, OneSwarm was introduced [1].
This topic is for a student who is willing to study threat problems in P2P networks, to analyze solutions and threats, and to discuss and think the topic through in depth, with attention to the future evolution of such protocols in the Internet.
References:
Tutor: Andrey Lukyanenko
This topic is about the TCP protocol, understanding its security problems, and the evolution of the solutions over time. In this work the student has to study various TCP versions and the threats that the protocol is vulnerable to. These include threats related to TCP (and in some cases also to IP) such as SYN flooding, IP spoofing, TCP sequence number attacks, and TCP session hijacking [2], as well as more recent threats such as manipulation attacks [3].
Additionally, the student needs to discuss trends in TCP security and security solutions in connection with future protocol needs (conventional TCP, MPTCP, DCTCP, ECN, and so on).
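As an illustration of one classic defence against SYN flooding, the following Python sketch shows the idea behind SYN cookies: the server encodes the connection parameters into the initial sequence number instead of allocating per-connection state. This is a simplified, hypothetical scheme (the key, message format and timestamp slotting are made up), not the exact cookie layout used by any real kernel.

```python
import hmac
import hashlib

SECRET = b"server-secret-key"  # illustrative; a real server would rotate this

def syn_cookie(src, dst, sport, dport, t):
    """Derive a 32-bit initial sequence number from the connection 4-tuple
    and a coarse timestamp slot t, so no state is stored on receiving a SYN."""
    msg = f"{src}:{sport}-{dst}:{dport}-{t}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def check_ack(src, dst, sport, dport, ack, t):
    """On the final ACK, recompute the cookie: the acknowledged number must
    be cookie + 1. Only then is connection state actually allocated."""
    return ack == (syn_cookie(src, dst, sport, dport, t) + 1) & 0xFFFFFFFF

c = syn_cookie("198.51.100.7", "192.0.2.1", 40000, 80, t=1000)
assert check_ack("198.51.100.7", "192.0.2.1", 40000, 80, (c + 1) & 0xFFFFFFFF, t=1000)
# A flood of spoofed SYNs costs the server nothing, because nothing is stored.
assert not check_ack("198.51.100.7", "192.0.2.1", 40001, 80, (c + 1) & 0xFFFFFFFF, t=1000)
```

The trade-offs of such statelessness (lost TCP options, limited sequence-number entropy) are among the things the paper could discuss.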
References:
Tutor: Andrey Lukyanenko
The Internet has millions of websites, and each user connected to the Internet normally interacts (consciously or unconsciously) with thousands of them over the HTTP protocol. This protocol is especially vulnerable to eavesdropping. To make communications more secure, the HTTPS protocol is used; there are plenty of examples: https://gmail.com, https://google.com, and so on. However, not every computer can handle many TLS connections fast enough.
In this topic, the student will study the TLS (HTTPS) protocol and discuss its importance and its practical and future use (studying which protocols are starting to use TLS in new ways). What is its usage in the context of social networks and data multicast? Can HTTPS substitute for HTTP completely?
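As a small practical starting point, the following Python sketch uses the standard library's ssl module to build a client-side TLS context with safe defaults and to fetch a page over HTTPS; the TLS layer is what protects the HTTP exchange from the eavesdropping described above. The host name, port and timeout are illustrative.

```python
import socket
import ssl

def make_client_context():
    """A client-side TLS context with safe defaults: certificate chain
    verification and hostname checking on, TLS 1.2 as the version floor."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

def fetch_over_https(host, path="/"):
    """Perform a minimal HTTPS GET and return the raw response bytes."""
    ctx = make_client_context()
    with socket.create_connection((host, 443), timeout=10) as sock:
        # server_hostname enables SNI and hostname verification.
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
            tls.sendall(request.encode())
            data = b""
            while chunk := tls.recv(4096):
                data += chunk
            return data

ctx = make_client_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED and ctx.check_hostname
```

Measuring how many such handshakes per second a machine can sustain is one concrete way to approach the performance question raised above.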
References:
Tutor: Andrey Lukyanenko
Various publishing platforms and content management systems are very commonly used on the Internet. This results in very large numbers of sites running on the same platforms. If there is a DoS vulnerability in a publishing platform, a potentially huge number of sites could also be vulnerable.
The main element of this topic is a survey of the most significant publishing platforms and their known DoS weaknesses. This information can then be used to categorise the typical weaknesses and to create guidelines on how to avoid DoS vulnerabilities. Impact estimation and testing methods may also be covered.
References:
Tutor: Aapo Kalliola
Botnets are currently the main source of spam, DDoS attacks and other malicious activity. There are (at least) hundreds of botnets with varying numbers of bots, the largest having hundreds of thousands or even millions of bots. Still, gaining information about real-life botnets is difficult, as they typically seek to avoid detection.
Part of the purpose of this topic is to survey the current global botnet situation and to form a view of botnet capabilities and of the methods and costs of gaining access to those capabilities. In addition to analysing botnets from the end-user point of view, the topic also looks into means of defeating botnets; for this purpose, takedown mechanisms and the surrounding legal issues need to be covered. Perhaps a generalized mechanism for botnet detection and takedown can be derived from the analysis.
References:
Tutor: Aapo Kalliola
While real-life botnets are somewhat difficult to grasp, plausible (not real, but close enough) botnet data can be very useful for simulations. Assumptions about botnets in research are often quite vague.
In this topic the purpose is to analyse the spreading mechanisms and strategies of current botnets and to look into different ways of modelling a botnet and linking the model to relevant real-life data, for instance geographic IP address data. Additionally, botnets have different capabilities, and geographic network and bot-capability limitations may result in varying impact levels on different targets. The overall goal is to evaluate different approaches to creating plausible, real-life-linkable virtual botnets.
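As a sketch of the simplest kind of model the paper might start from, here is a discrete-time susceptible-infected simulation of random-scanning, worm-style botnet recruitment in Python. All parameters are illustrative and not calibrated to any real botnet; a serious model would replace uniform scanning with measured address-space and geographic distributions.

```python
import random

def simulate(n_hosts=10_000, vulnerable_frac=0.1, scan_rate=5, steps=30, seed=1):
    """Each infected host scans scan_rate random addresses per time step;
    a scan that hits a vulnerable, not-yet-infected host recruits it.
    Returns the infected-population size after each step."""
    rng = random.Random(seed)
    vulnerable = set(rng.sample(range(n_hosts), int(n_hosts * vulnerable_frac)))
    infected = {next(iter(vulnerable))}          # a single initial bot
    history = [len(infected)]
    for _ in range(steps):
        new = set()
        for _bot in infected:
            for _ in range(scan_rate):
                target = rng.randrange(n_hosts)  # uniform random scanning
                if target in vulnerable and target not in infected:
                    new.add(target)
        infected |= new
        history.append(len(infected))
    return history

h = simulate()
assert h[0] == 1
assert all(a <= b for a, b in zip(h, h[1:]))     # infections only accumulate
assert h[-1] <= 1000                             # bounded by the vulnerable population
```

Replacing the uniform target choice with, say, per-country address densities is exactly the kind of "link to real-life data" the topic asks for.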
References:
Tutor: Aapo Kalliola
Task: Survey the literature for different ways to name or address hosts in the Internet in a secure way. Both standardized technologies (DNSSEC, web certificates, HIP) and research-oriented ones (i3, the layered naming architecture, etc.) are in the scope of the survey. Categorize and characterize the different designs.
References: provided after the topic is assigned.
Tutor: Miika Komu
Task: Steam is Valve's popular publishing platform for games. It also supports DRM via online registration. Find out how Steam works and what kind of technology is under the hood.
References: provided after the topic is assigned.
Tutor: Miika Komu
Address / location privacy in existing and emerging networks: One of the fundamental privacy issues on the physical / link layer is the threat of persons being tracked based on the mobile devices they are carrying around, as single devices or as personal area networks. In the standardization domain, one of the first forays into solving this problem is private link addressing in Bluetooth 4.0, where devices can be found by paired devices while still remaining anonymous to the casual observer.
References: provided after the topic is assigned.
Tutor: Jan-Erik Ekberg
With cognitive radio technologies, the radio spectrum will in the future be used simultaneously by incumbents (e.g. television channels) and by either licensed temporary users (say, radio microphones) or cognitive devices that are allowed to utilize the spectrum at a given place and time, whenever incumbent or licensed users/devices are not utilizing it. A server/database infrastructure is envisioned that will track and grant bandwidth use across frequency, location and time.
For example, the back-off requirement is difficult to realize due to mismatched regulatory constraints: incumbent radio usage will not be upgraded, and licensed use may require devices to present a license, but the back-off should be performed by devices that do not hold a license. Thus it seems plausible that cognitive radio devices would benefit from a tightly integrated secure environment combined with the radio firmware. The topic is to explore this research opportunity in light of published research on security for cognitive radio.
References: provided after the topic is assigned.
Tutor: Jan-Erik Ekberg
Smart cards are widely used as hardware security modules for authentication and access control. One of the key characteristics of a smart card is that it encapsulates security-sensitive information, such as a cryptographic key or stored value, in a physical token. This was particularly important at a time when data communication networks were not widely available and, for example, private signature keys could not be stored in an online service, nor could user authentication be based on an online trusted party. Ubiquitous access to online services puts the role of such offline security modules into question. The goal of this seminar paper is to survey applications where smart cards are deployed and to consider how the role of the smart card will change with the availability of network connections and online services.
References:
Tutor: Sandeep Tamrakar
Traditionally, private individuals used cash or cheques for small-value payments. Today, cheques are disappearing from Europe. Point-of-sale terminals tie merchants to financial institutions such as banks and credit card companies, and are often expensive for occasional traders. So occasional sellers, such as open-market vendors, summer sellers or people who sell a handful of goods face-to-face, have to rely on cash payments. Now, with ubiquitously available wireless Internet and people carrying Internet-enabled personal devices everywhere they go, such micropayments could be made using Internet-based services and Internet-enabled devices. The goal of this seminar paper is to survey different small-value payment systems and to study the feasibility of putting them into practice for micro-merchant transactions.
References:
Tutor: Sandeep Tamrakar
The student will give an overview of technologies that allow clients to use the computational facilities of another party while the data remains encrypted throughout the process. The student will give a short overview of the theoretical background and then compare at least two or three existing technologies.
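As a toy illustration of the kind of property involved, textbook RSA is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts, so a server can combine values it cannot read. The parameters below are tiny and completely insecure, chosen only to make the arithmetic visible; practical schemes for computing on encrypted data are far more elaborate.

```python
# Toy textbook-RSA parameters (insecure, for demonstration only).
p, q, e = 61, 53, 17
n = p * q                      # public modulus, 3233
phi = (p - 1) * (q - 1)        # 3120
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

def enc(m):
    """Encrypt with the public key (e, n)."""
    return pow(m, e, n)

def dec(c):
    """Decrypt with the private exponent d."""
    return pow(c, d, n)

m1, m2 = 7, 6
# The "server" multiplies ciphertexts without ever seeing m1 or m2.
c = (enc(m1) * enc(m2)) % n
assert dec(c) == (m1 * m2) % n  # → 42
```

Surveyed technologies would extend this idea to richer operations (addition, or in fully homomorphic schemes, arbitrary circuits).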
References:
Tutor: Vilen Looga
The student will give an overview of existing privacy control schemes in geosocial and ad hoc networks.
References:
Tutor: Vilen Looga
References:
Tutor: Kari Kostiainen
References: provided after the topic is assigned.
Tutor: Kari Kostiainen
When a company considers outsourcing data storage to a cloud service provider, one of the main concerns is whether the storage service is reliable. Even if the storage service promises five-nines uptime and plentiful data replication, these features are not visible to the users until a catastrophic crash happens. So how does the customer of a cloud storage service know that the data is stored reliably, or stored at all? Some interesting secure auditing solutions have been proposed for this purpose, and there may also be practical solutions.
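The basic challenge-response idea behind such auditing can be sketched in a few lines of Python. This is a deliberate simplification: here the verifier keeps a full local copy of the data, which real proof-of-storage schemes are designed to avoid; the block size and key length are arbitrary.

```python
import hashlib
import hmac
import os

BLOCK = 4096  # illustrative block size

def audit_response(storage, key, block_indices):
    """Prover side: MAC the challenged blocks under the verifier's fresh key.
    Without actually holding the data, the answer cannot be computed."""
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for i in block_indices:
        mac.update(storage[i * BLOCK:(i + 1) * BLOCK])
    return mac.digest()

def audit(local_copy, remote_copy, n_challenges=3):
    """Verifier side: challenge randomly chosen blocks with a fresh key,
    so old answers cannot be replayed or precomputed."""
    n_blocks = max(1, len(local_copy) // BLOCK)
    key = os.urandom(16)  # fresh per audit
    idx = [int.from_bytes(os.urandom(2), "big") % n_blocks
           for _ in range(n_challenges)]
    expected = audit_response(local_copy, key, idx)
    return hmac.compare_digest(expected, audit_response(remote_copy, key, idx))

data = os.urandom(8 * BLOCK)
assert audit(data, data)              # an honest provider passes
assert not audit(data, bytes(len(data)))  # a provider that lost the data fails
```

Each audit catches missing data only with some probability per challenged block, which is why the schemes in the literature add coding and aggregation on top of this basic pattern.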
References:
Tutor: Tuomas Aura