
Thesis titles in Computer Science

Image processing:

Essentially, the specifications define a function from the set of all possible data inputs to the data output by your program. Scholars may feel attracted to take up a thesis topic in cloud computing.

Honours/Masters by Coursework Thesis Coordinator

Networking:
Minor research thesis (45cp, duration – two semesters)

Based on the topic, we can estimate the overall cost of the research. Generally, a topic in pervasive computing costs a little more than other domains due to the use of devices like sensors, cameras, etc. So we also suggest to our scholars the approximate cost based on the topic. If scholars wish to work on a low budget but select a domain like pervasive computing, we help them choose a topic with optimal cost.

We also consider cost as a factor while selecting a topic. We give complete focus to thesis topic selection, as we believe that one should be ingenious from the very beginning. A scholar must select a topic carefully, as they are the one who is going to face their internal guides and external reviewers. A scholar's dedication and interest are the prime factors in taking up a research topic.

Thesis Topics for Computer Science Students is a free guidance service that we offer our scholars. Our team of experts will assist scholars in choosing the most recent and apt topic for their research. Our advisers are abreast of current developments and trends in the field of research and can guide you best.

We advise our scholars to make the right decision, as now is the right time to approach us. We work on a wide range of domains; we cannot confine our domain list and knowledge to this short page. We encourage our scholars to approach us to learn about our knowledge and experience in each domain.

We have a list of subdomains under each domain, each of which in turn contains a list of topics. The approach proposed for this thesis consists of several steps. The aim of web services is to make data resources available over the Internet to application programs written in any language. There are two main approaches to web services: SOAP-based and RESTful. RESTful web services have now been recognized as generally the most useful method of providing data services for web and mobile application development.
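
To make the idea concrete, here is a minimal sketch of a RESTful data service, assuming Python with the Flask microframework; the "subjects" resource and its fields are hypothetical and purely illustrative.

```python
# Minimal RESTful data service sketch, assuming Flask is available
# (pip install flask). The "subjects" resource is hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stand-in for a real data store.
subjects = {"CSE1ITX": {"title": "Information Technology Essentials"}}

@app.route("/subjects", methods=["GET"])
def list_subjects():
    # GET on the collection returns every resource representation.
    return jsonify(subjects)

@app.route("/subjects/<code>", methods=["GET"])
def get_subject(code):
    # GET on a single resource; 404 expresses "no such resource" in HTTP terms.
    if code not in subjects:
        return jsonify({"error": "not found"}), 404
    return jsonify(subjects[code])

@app.route("/subjects/<code>", methods=["PUT"])
def put_subject(code):
    # PUT creates or replaces a resource at a client-chosen URI (idempotent).
    subjects[code] = request.get_json()
    return jsonify(subjects[code]), 201

if __name__ == "__main__":
    app.run(port=5000)
```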

The aim of the thesis is to study the concept of RESTful web services in depth and to construct a catalogue of patterns for designing data-intensive web services. The catalogue is intended to act as a guide for the practical design of web services for application development. The rationale behind this research is the need for a practical system that students can use to select subjects during their study.

While the advice of the course coordinator and the short description of the subject in the handbook are most frequently used by students to make up their minds, they could make more informed decisions by drawing on the experience of past students. In this thesis, the student will use Case-Based Reasoning (CBR) to design and develop a recommender system for subject selection in a higher education context. The research component of this project is the identification and validation of the CBR approach and its parameters for the recommendation system.
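
As an illustration of the retrieve-and-reuse core of CBR, here is a hedged toy sketch in Python; the case attributes (interests, average mark, rating) and similarity weights are assumptions for illustration, not part of the proposed system.

```python
# Toy sketch of the CBR "retrieve" step for subject recommendation.
# Each past case: a student profile plus the rating they gave a subject.
cases = [
    {"interests": {"ai", "maths"}, "avg_mark": 78, "subject": "Machine Learning", "rating": 5},
    {"interests": {"networks"},    "avg_mark": 65, "subject": "Data Comms",       "rating": 4},
    {"interests": {"ai"},          "avg_mark": 55, "subject": "Machine Learning", "rating": 2},
]

def similarity(query, case):
    # Jaccard similarity on interests, blended with closeness of average marks.
    inter = len(query["interests"] & case["interests"])
    union = len(query["interests"] | case["interests"]) or 1
    mark_sim = 1.0 - abs(query["avg_mark"] - case["avg_mark"]) / 100.0
    return 0.7 * (inter / union) + 0.3 * mark_sim

def recommend(query, k=2):
    # Retrieve the k most similar past cases and reuse their ratings.
    ranked = sorted(cases, key=lambda c: similarity(query, c), reverse=True)[:k]
    return [(c["subject"], c["rating"]) for c in ranked]

print(recommend({"interests": {"ai"}, "avg_mark": 70}))
```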

Online social networks (OSNs) also bring with them various risks by facilitating improper user behaviors. In this study, the student will select one type of improper behavior in OSNs (cyber-bullying, cyber-stalking, hate campaigns, etc.). The outcome of this research is a strategy or policy that can be considered by OSN providers. Constructive alignment (CA) is a subject design concept used in the higher education sector. In this thesis, the student will review educational technology methods and tools that have been used in the higher education sector.

Data stream mining is today one of the most challenging research topics, because we have entered the data-rich era. This situation demands a computationally light learning algorithm that is scalable enough to process large data streams. Furthermore, data streams are often dynamic and do not follow a specific, predictable data distribution.

A flexible machine learning algorithm with a self-organizing property is desired to overcome this situation, because it can adapt itself to any variation in the data stream. The evolving intelligent system (EIS) is a recent initiative of the computational intelligence society (CIS) for data stream mining tasks. It features an open structure, in which it can start either from scratch with an empty rule base or from an initially trained rule base. Its fuzzy rules are then automatically generated according to the contribution and novelty of the data stream.
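
A bare-bones sketch of this open structure, under strong simplifying assumptions: fuzzy rules are reduced to Gaussian prototypes, and a rule is added whenever an incoming sample is sufficiently novel. The thresholds and update rates are illustrative only, not taken from any specific EIS.

```python
# Minimal evolving-rule-base sketch: start empty, grow on novelty.
import math

rules = []  # each rule: {"centre": [...], "width": float, "output": float}
NOVELTY_THRESHOLD = 1.0

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def learn_one(x, y):
    nearest = min(rules, key=lambda r: distance(r["centre"], x), default=None)
    if nearest is None or distance(nearest["centre"], x) > NOVELTY_THRESHOLD:
        # Novel region of the input space: grow the rule base.
        rules.append({"centre": list(x), "width": 0.5, "output": y})
    else:
        # Familiar region: nudge the winning rule towards the sample.
        nearest["centre"] = [c + 0.1 * (xi - c) for c, xi in zip(nearest["centre"], x)]
        nearest["output"] += 0.1 * (y - nearest["output"])

def predict(x):
    # Weighted average of rule outputs, weighted by Gaussian membership.
    if not rules:
        return 0.0
    w = [math.exp(-(distance(r["centre"], x) / r["width"]) ** 2) for r in rules]
    return sum(wi * r["output"] for wi, r in zip(w, rules)) / (sum(w) or 1.0)

for x, y in [((0.0, 0.0), 1.0), ((0.1, 0.0), 1.0), ((3.0, 3.0), -1.0)]:
    learn_one(x, y)
print(len(rules), predict((0.05, 0.0)))  # two rules; prediction near 1.0
```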

In this research project, you will work on extending existing EISs to enhance their online learning performance, thus improving their predictive accuracy and speeding up their training process. One research direction to be pursued in this project is to address the issue of uncertainty in data streams. The era of big data refers to datasets at a scale that goes beyond the capabilities of existing database management tools to collect, store, manage and analyze.

Although big data is often associated with the issue of volume, researchers in the field have found that it is also inherently characterized by the other Vs: Variety, Velocity, Veracity, Value, etc. Various data analytic tools have been proposed; the so-called MapReduce from Google is among the most widely used approaches. Nevertheless, the vast majority of existing works are offline in nature, because they assume full access to the complete dataset and allow a machine learning algorithm to perform multiple passes over all the data.
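
For orientation, here is a single-process imitation of the MapReduce dataflow (the classic word count); real deployments such as Hadoop distribute the map and reduce phases across machines, which this sketch does not attempt.

```python
# Single-process imitation of the MapReduce programming model (word count).
from collections import defaultdict

def map_phase(document):
    # Emit (key, value) pairs: one count per word occurrence.
    for word in document.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Group all values by key, as the framework would between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Aggregate each key's values independently (hence parallelisable).
    return {key: sum(values) for key, values in groups.items()}

documents = ["big data is big", "data streams are data"]
pairs = [pair for doc in documents for pair in map_phase(doc)]
print(reduce_phase(shuffle(pairs)))
```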

In this project, you are expected to develop an online parallelization technique to be integrated with an evolving intelligent system (EIS). Moreover, you will develop a data fusion technique that combines the results of the EIS from different data partitions. Existing machine learning algorithms are cognitive in nature: they consider only the issue of how to learn.

One may agree that the learning process of human beings is always meta-cognitive in nature, because it involves two other issues: what to learn and when to learn. Recently, the notion of the metacognitive learning machine has been developed, exploiting the theory of meta-memory from psychology.

The concept of scaffolding theory, a prominent tutoring theory for a student learning a complex task, has been implemented in the metacognitive learning machine as a design principle of the how-to-learn part. This project will be devoted to enhancing our past work on the metacognitive scaffolding learning machine. It will study refinements of the learning modules to achieve better learning performance. Undetected or premature tool failure may lead to costly scrap or rework arising from impaired surface finish, loss of dimensional accuracy, or possible damage to the work-piece or machine.

The issue requires the advancement of conventional tool-condition monitoring systems (TCMSs) using online adaptive learning techniques to predict tool wear on the fly. The cutting-edge learning methodologies developed in this project will pioneer frontier tool-condition monitoring technologies in manufacturing industries. Today, we confront an explosion of social media text data. From these massive amounts of data, various data analytic tasks can be performed, such as sentiment analysis, recommendation, web news mining, etc.

Because social media data constitute text data, they usually involve a high-dimensionality problem. For example, two popular text classification problems, namely 20 Newsgroups and Reuters top 10, have more than 15,000 input features. Furthermore, information on social media platforms is continuously growing and rapidly changing; this definitely requires highly scalable and adaptive data mining tools that can search for information far beyond what existing tools can do: evolving intelligent systems.
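
One standard way to tame such dimensionality is the "hashing trick", sketched below in plain Python; the bucket count is illustrative, and this is not claimed to be the method any particular project above uses.

```python
# The hashing trick: map tokens into a fixed-size feature vector so the
# dimensionality stays bounded no matter how the vocabulary grows.
import hashlib

N_BUCKETS = 2 ** 10  # illustrative; real systems use larger tables

def hash_vectorize(text):
    vec = [0] * N_BUCKETS
    for token in text.lower().split():
        # A stable hash maps each token to one of N_BUCKETS feature indices.
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % N_BUCKETS] += 1
    return vec

v = hash_vectorize("streaming social media text grows and changes rapidly")
print(sum(v), len(v))  # token count preserved, dimensionality fixed
```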

The research outcome will be useful in large-scale applications that go beyond the capabilities of existing data mining technologies. This project will not only cope with the exponential growth of data streams in social media, but will also develop a flexible machine learning solution that adapts to the time-varying nature of social media data.

Big data is too large, dynamic and complex to capture, analyse and integrate using currently available computing tools and techniques. By definition, it can be characterized by five V's: Volume, Variety, Velocity, Veracity and Value. Big data collection, integration and storage are the main challenges of this project, as the integration and storing of big data require special care. Consequently, it is necessary to prevent possible data loss between collection and processing, as big data always comes from a great variety of sources, including high volumes of streaming data from dynamic environments.

As such, it opens new scientific research directions for the development of new underlying theories and software tools, including more advanced and specialized analytics.

However, most of today's big data technologies fall short of this need. In order to integrate big data from various sources with different variety and velocity, and to build a central repository accordingly, it is increasingly important to develop a new scientific methodology, including new software tools and techniques.

In particular, the main focus of this project is to capture, analyse and integrate big data from different sources, including dynamic streaming data and static data from databases.

Towards this end, government data can be used to analyse and develop applications and tools that can ensure benefits to society. In recent years, electronic health services have been increasingly used by patients, healthcare providers, healthcare professionals, etc.

Healthcare consumers and providers have been using a variety of such services via different technologies such as desktops, mobile devices, cell phones, smartphones, tablets, etc. For example, the eHealth service is used in Australia to store and transmit users' health information in one secure and trusted environment. However, security is still a big challenge and a central research issue in the delivery of electronic health services, for example in emergency situations.

In addition to the security issue, privacy is also a concern that should not be compromised, especially when there is a need to ensure security. The main aim of this project is to enable online right-time data analysis and statistical functions to generate the different reports that are required for collaborative decision making.

This collaborative DSS will be built on an underlying integrated data repository which captures the different data sources relevant to the different organisations in the collaborative environment. Within the DSS, some measurements will be relevant to an individual organisation.

The main focus of the collaborative decision support system is the availability of heterogeneous, consolidated data at the right time and in the right place. With the increasing popularity of large heterogeneous data repositories and corporate data warehousing, there is a need to increase the efficiency of queries used for analysis. The case is even stronger in database environments that hold both spatial and temporal information. Spatio-temporal data includes all time slices pertinent to each object or entity, as in the sketch below.
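
A hedged sketch of that data shape and a naive query over it; the records, fields and query are hypothetical, and a real optimizer would replace the linear scan with a spatio-temporal index.

```python
# Each object carries several time slices; each slice stores the spatial
# properties valid in that interval.
# (object_id, t_start, t_end, (x, y)) -- position valid during [t_start, t_end)
slices = [
    ("bus_12", 0, 10, (1.0, 1.0)),
    ("bus_12", 10, 20, (5.0, 2.0)),
    ("bus_7",  0, 15, (4.0, 4.0)),
]

def query(t, xmin, xmax, ymin, ymax):
    # Naive scan: find objects inside the box at time t. A real optimizer
    # would use a spatio-temporal index (e.g. an R-tree variant) instead.
    return [
        oid for oid, t0, t1, (x, y) in slices
        if t0 <= t < t1 and xmin <= x <= xmax and ymin <= y <= ymax
    ]

print(query(12, 0, 6, 0, 3))  # -> ['bus_12']
```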

For each particular area there will be spatial information (coordinates, shape, etc.) and a time slice during which a set of values for these properties is valid. The main focus of this topic is to investigate ways to optimize the queries used to analyse such spatio-temporal data. There is a famous one-liner by Donald Rumsfeld: one of the big problems faced by designers is the "unknown unknowns".

So, what does this mean for system development and design? Can this be formalized? Do we do it already? Where does Domain Expertise come into this? Technology is changing; however, the small system you build today may still be in use 50 years from now. OK, you are the government of a country developing a new Social Welfare system [2]. You want it to survive for the next 50 years. What exactly does this mean? How would you do it?

Very expensive systems survive for decades, even if, or especially when, they are mission critical. Find some important examples. We know a lot about component-based design, software re-use and related issues.

How do we bring all this together so that systems can deal with change? In today's world, web searches are a major activity undertaken by people for industrial, research and other reasons. They involve searches across a very wide range of web pages from a wide range of sources. The searcher may download pages, extract information from pages, and, in the process, create a history of link activations.

The problem people face is what happens if the searcher has to stop and resume the process days later. The purpose of this project is to provide support for people using Google as a search engine. A major problem here is keeping track of the sites and documents visited, viewed, and downloaded.
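
A minimal sketch of the bookkeeping such support needs: a persistent journal of queries and visited pages that can be reloaded days later. The file name and record fields are assumptions for illustration.

```python
# Persistent search-session journal: record queries and visited URLs,
# reload them when the user resumes days later.
import json, os, time

JOURNAL = "search_journal.json"  # hypothetical location

def load_session():
    if os.path.exists(JOURNAL):
        with open(JOURNAL) as f:
            return json.load(f)
    return {"queries": [], "visited": []}

def record(session, query=None, url=None):
    if query:
        session["queries"].append({"q": query, "at": time.time()})
    if url:
        session["visited"].append({"url": url, "at": time.time()})
    with open(JOURNAL, "w") as f:
        json.dump(session, f, indent=2)

session = load_session()
record(session, query="thesis topics computer science")
record(session, url="https://example.edu/handbook")
print(len(session["queries"]), "queries on record")
```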

Searches may be spread over several sessions, and users need to be able to resume a search. The project requires the development of software and the design of a complete, zero-adoption-cost tool. Given the current IT situation, web sites are likely to be a major aspect of businesses and organisations for the next 50 years. However, horror stories abound. What are the categories of knowledge that web-site developers should have, if this new business aid is to be a genuinely socially useful one, rather than a public nuisance?

The goal here is the development of a degree program and the documentation of the knowledge collections needed. How often does a Google search produce stuff that seems to have no relationship to what you really wanted!? The purpose of this project is to find some simple means of improving browser precision. That is, I want to find only items which are really useful to me. To do this, we need to first explore the current query systems and document them.

Then we need to propose means of getting simpler results, and to implement a prototype. You will need to develop a knowledge of "data mining" to extract classification material from the results returned by browsers. So, I seem to spend ages linking around hyperspace to see information which would easily fit on one page. Could I build a tool which would allow a user to define a new, single page that had all the data concerned?

The digital camera industry has put a lot of work into image quality assessment, both subjective and objective. Image quality is of course a major concern in the gaming industry too; however, they face the problem of high-speed image generation, rather than simply recording images. As I said, the digital camera domain has various measures of image quality.
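
One widely used objective measure from the camera domain is peak signal-to-noise ratio (PSNR); the sketch below computes it for images modelled, purely for illustration, as flat lists of 8-bit pixel values.

```python
# Peak signal-to-noise ratio between a reference and a test image:
# PSNR = 10 * log10(MAX^2 / MSE), with MAX = 255 for 8-bit pixels.
import math

def psnr(reference, test, max_value=255.0):
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)

ref  = [52, 55, 61, 59, 70, 61, 76, 61]
test = [53, 55, 60, 59, 71, 61, 75, 60]
print(round(psnr(ref, test), 2), "dB")
```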

How do they map onto the needs of games, or don't they? If not, what should we do? OK, how much time do you waste learning to use a new software package? And how many computer systems that you know of are invisible, or nearly invisible, in the sense that they assist you but don't intrude on your non-computer work patterns? Simple examples that you may be familiar with are ABS, traction control, and automobile engine management systems.

But what other ones can you think of? Of course, this sounds like ubiquitous computing; however, we are going beyond this. Our goal is the production of systems which can be installed in a work environment, computerised or not, with almost zero learning effort, but which will make life easier. What should the design rules for a system of the ZAIA type look like?

One way of doing this would be to design and demonstrate such a tool, such as tabbase [3]. Currently, library catalog systems place a massive load on users, who must take many steps to locate an item. In practice, users identify a "Target" in one of three ways. To start, we are interested in the first case.

Exactly how can Cloud-based systems be used in component-based design? Develop design rules and show some case studies. It has been suggested by the author that one way of improving product quality is to build a wrapper around a system that blocks those cases that were found to be handled incorrectly during testing.
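
A minimal sketch of the wrapper idea, under the assumption that testing produced a blocklist of known-bad inputs; the wrapped function and the blocklist here are hypothetical stand-ins.

```python
# Wrapper that intercepts inputs the wrapped system is known to mishandle.
KNOWN_BAD = {-4, -1}  # inputs found during testing to be handled incorrectly

def legacy_sqrt(x):
    # Stand-in for the real system under wrap; returns complex for x < 0.
    return x ** 0.5

def wrapped_sqrt(x):
    if x in KNOWN_BAD:
        # Block the faulty path; a real wrapper might substitute a safe
        # fallback, log the event, or raise a controlled error instead.
        raise ValueError(f"input {x} blocked: known to be mishandled")
    return legacy_sqrt(x)

print(wrapped_sqrt(4))   # 2.0 -- passes through
try:
    wrapped_sqrt(-1)
except ValueError as e:
    print(e)
```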

This project investigates this idea, then proposes and demonstrates design rules and tools for achieving it for different classes of software product. How will developers test cloud applications? What exactly will the changes to, for example, Black Box testing be for clouds? This project requires analysis both of Cloud systems and testing, and of the way in which Clouds will work.

The thesis topic is to survey this field, define the problem, and produce a simple tool, if possible. In addition, the values of data at specific points in the execution process might be checked against expected values. However, this requires a close correlation between the design representation and the code. That is, it needs to be possible, easily and preferably automatically, to generate the "hooks" in the code that will make this possible.
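
A hedged sketch of what such generated "hooks" could look like: named checkpoints that a test script addresses using design-level names. The checkpoint mechanism and names are illustrative assumptions, not an existing harness.

```python
# Design-level checkpoints: the test script registers expected values
# against design names; generated hooks record and compare at runtime.
expectations = {}   # design name -> expected value, set by the test script
observations = []   # (name, value, passed) recorded during execution

def checkpoint(name, value):
    # The generated hook: record the value and compare it when an
    # expectation exists for this design name.
    passed = (name not in expectations) or (expectations[name] == value)
    observations.append((name, value, passed))
    return value

def compute_total(prices):          # code under test
    subtotal = checkpoint("Order.subtotal", sum(prices))
    return checkpoint("Order.total", round(subtotal * 1.1, 2))

# The "test script", written purely in design vocabulary:
expectations["Order.subtotal"] = 30
expectations["Order.total"] = 33.0
compute_total([10, 20])
for name, value, passed in observations:
    print(f"{name}: {value} {'ok' if passed else 'FAIL'}")
```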

The tester wants to be able to specify a test script in terms of the names used in the design, and have the harness execute the code, doing the required checking. Discussions of specification capture raise the issue of a suitable language for the specification of software systems.

The goal is to develop a suitable language and to discuss and formalize the issues involved. The product was initially developed at La Trobe. Its use is to analyse qualitative data, to find common ideas and threads of evidence. This project would look at its use in Requirements Engineering. The purpose of this project is to investigate what, for want of a better name, I call the linguistic aspects of re-use.

Re-use is the process of using existing components to fabricate a system. A major part of the problem, apart from the issue of the existence of re-usable components, is the problem of classifying and then retrieving the components.

Much emphasis is often placed on the classification of the components. However, experience shows that components can be hard to retrieve, since the classifications do not always match either the application domain or the possible purpose proposed for the module.

Alternatively, there may be some implied re-use possible which is not discovered. One possible explanation may be that the form of classification used, the language, is either too restrictive, or that there do not exist appropriate mappings from the classification language to the problem space in which the component could be used.

Part of this could be due to the absence of suitable "Universes of Discourse". Another explanation could be that specifications may imply their inverses, or that common functions can be deduced by expanding a specification, making use of the "Universe of Discourse" that is valid at that point in the design.
