DSP algorithm for music-less audio stream generation
In this paper we investigate the problem of separating the human voice from a mixture of voice and musical instruments. The voice may be the singing voice in a song, or speech in a news broadcast that contains background music; the final outcome of this work is a file containing only the vocals. Our approach takes stereo audio as input and processes the signal in the time-frequency domain. In our blind source separation method, the input stereo audio file is split into frames, windowed, and transformed with the discrete Fourier transform (DFT). The signal is then de-mixed by masking with time-frequency filters: the non-zero DFT coefficients estimated to belong to the vocals are selected, and the output signal containing only the vocals is reconstructed by the overlap-add (OLA) method.
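The frame/window/DFT/mask/OLA pipeline above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the abstract does not specify the mask, so the sketch assumes a simple stereo-similarity mask (keep bins whose left/right magnitudes are close, on the heuristic that vocals are centre-panned), a Hann window, and 50% overlap. The function name `separate` and the tolerance parameter `tol` are invented for the example.

```python
import cmath, math

def dft(x):
    # Naive O(N^2) discrete Fourier transform, enough for a sketch.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    # Inverse DFT; the input frames are real, so keep only the real part.
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def separate(left, right, frame=64, hop=32, tol=0.2):
    """Frame, window, DFT, mask, then resynthesise by overlap-add.
    Bins where |L| and |R| differ by more than tol (relative) are zeroed;
    the rest are kept as the estimated centre-panned (vocal) content."""
    win = [0.5 - 0.5 * math.cos(2 * math.pi * n / frame) for n in range(frame)]
    out = [0.0] * len(left)
    for start in range(0, len(left) - frame + 1, hop):
        L = dft([left[start + n] * win[n] for n in range(frame)])
        R = dft([right[start + n] * win[n] for n in range(frame)])
        # Binary time-frequency mask: 1 where the two channels agree.
        mask = [1.0 if abs(abs(l) - abs(r)) <= tol * max(abs(l), abs(r), 1e-12)
                else 0.0 for l, r in zip(L, R)]
        mono = [(l + r) / 2 * m for l, r, m in zip(L, R, mask)]
        for n, s in enumerate(idft(mono)):        # overlap-add reconstruction
            out[start + n] += s
    return out
```

With a Hann window at 50% overlap the analysis windows sum to one, so when the mask passes everything (e.g. identical channels) the interior of the signal is reconstructed exactly, which is a useful sanity check for an OLA pipeline.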
More Accurate Value Prediction using Neural methods
Data dependencies between instructions have traditionally limited the ability of processors to execute instructions in parallel. Data value predictors are used to overcome these dependencies by guessing the outcomes of instructions in a program. Because mispredictions can cause a significant performance decrease, most data value predictors include a confidence estimator that indicates whether a prediction should be used. Much research has been done recently on data value prediction as a means of overcoming these data dependencies [7,8,9,10,11,17,18,20,21]. The goal of data value prediction is to guess the outcome of an instruction before it is actually executed, allowing future instructions that depend on that outcome to be executed sooner. Data value predictors are usually designed to look for patterns in the data produced by repeated executions of static instructions; accurate prediction can be attained when the repeated outcomes of a particular instruction follow easily discernible patterns. This paper presents a global approach to confidence estimation in which the prediction accuracy of previous instructions is used to estimate the confidence of the current prediction. Data value prediction is done using perceptrons, and Support Vector Machines are used to identify which past instructions affect the accuracy of a prediction and to decide, based on their results, whether the prediction is likely to be correct.
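The idea of gating a value predictor with a confidence estimator trained on the accuracy of recent predictions can be sketched as below. This is only an illustration of the mechanism, not the paper's design: it pairs a simple stride value predictor with a perceptron over a global history of prediction outcomes (the paper also uses SVMs, which are omitted here), and the class and parameter names are invented.

```python
class StridePredictorWithConfidence:
    """Last-value + stride value predictor, gated by a perceptron
    confidence estimator over the global history of recent outcomes."""

    def __init__(self, hist_len=8, threshold=0):
        self.last = 0
        self.stride = 0
        self.history = [1] * hist_len     # +1 = recent prediction was correct
        self.weights = [0] * hist_len     # perceptron weights over that history
        self.bias = 0
        self.threshold = threshold

    def predict(self):
        guess = self.last + self.stride
        score = self.bias + sum(w * h for w, h in zip(self.weights, self.history))
        return guess, score >= self.threshold   # (value, use this prediction?)

    def update(self, actual):
        guess = self.last + self.stride
        correct = 1 if guess == actual else -1
        # Perceptron update: push the confidence score toward the outcome.
        self.bias += correct
        self.weights = [w + correct * h for w, h in zip(self.weights, self.history)]
        self.history = self.history[1:] + [correct]
        self.stride = actual - self.last
        self.last = actual
```

On a regular sequence (e.g. values increasing by a fixed stride) the predictor quickly becomes both accurate and confident; on erratic values the history fills with -1 outcomes and the score drops below the threshold, so the prediction is withheld rather than risked.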
Performance analysis of TCP and TCP-F in Ad-Hoc networks
Many applications use TCP as the transport layer to obtain reliable data transfer over wireless connections and to integrate seamlessly into the Internet. However, some of the assumptions made in the design of traditional TCP are unsuitable for an infrastructure-less network environment: TCP invokes its congestion control mechanism even when packet loss is due to link failure. TCP-F, on the other hand, is able to distinguish link failure from congestion through feedback from intermediate nodes and to take the appropriate action. This paper compares the performance of TCP-F with traditional TCP through simulation.
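The behavioural difference can be sketched as a toy sender state machine. This is a schematic of the contrast described above, not a protocol implementation: on a route-failure notification the TCP-F sender freezes its state (the "snooze" behaviour from the TCP-F literature) instead of halving its congestion window, and resumes with the old window when the route is re-established. Method and state names are invented for the example.

```python
class TcpFSender:
    """Toy sender contrasting TCP-F with classic TCP: link failure
    freezes the window instead of triggering congestion control."""

    def __init__(self, cwnd=10):
        self.cwnd = cwnd
        self.state = "ACTIVE"

    def on_packet_loss(self):
        # Classic TCP reaction: assume congestion, multiplicative decrease.
        self.cwnd = max(1, self.cwnd // 2)

    def on_route_failure_notification(self):
        # TCP-F reaction to feedback from an intermediate node:
        # freeze window and timers rather than shrinking cwnd.
        self.state = "SNOOZE"

    def on_route_reestablishment(self):
        # Route is back: resume transmission with the preserved cwnd.
        self.state = "ACTIVE"
```

The point of the sketch is that after a failure/recovery cycle the TCP-F sender still has its full window, whereas a loss interpreted as congestion would have cut it in half.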
Sensor data analysis and management in wireless sensor networks
Harvesting the benefits of a sensor-rich world presents many data analysis and management challenges, which recent advances in research and industry aim to address. Modern sensors and information technologies make it possible to collect sensor data continuously, typically as real-time, real-valued numerical streams. For example, vehicles driving around a city or a power plant generating electricity can be equipped with numerous sensors that produce data from moment to moment. Although data gathering systems are becoming relatively mature, much innovative research remains to be done on knowledge discovery from these huge data repositories. Data management techniques and analysis methods are required to process increasing volumes of historical and live streaming data sources simultaneously. Improved techniques are needed to reduce an analyst's decision response time and to enable more intelligent and immediate situation awareness. Faster analysis of disparate information sources may be achieved by providing a system that allows analysts to pose integrated queries over diverse data sources without losing data provenance. This paper proposes to develop abstractions that make it easy for users and application developers to continuously apply statistical modeling tools to streaming sensor data. Such statistical models can be used for data cleaning, prediction, interpolation, anomaly detection, and inferring hidden variables from the data, thus addressing many of the challenges in analyzing and managing sensor data. Current techniques for querying archived data and streaming data are insufficient by themselves to harmonize sensor inputs from large volumes of data: the two distinct architectures (push versus pull) have yet to be combined to meet the demands of a data-centric world, and the input of streaming data from multiple sensor types further complicates the problem.
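One of the simplest statistical models that can be applied continuously to a sensor stream, of the kind the abstract describes for cleaning and anomaly detection, is a running mean/variance with a z-score test. The sketch below uses Welford's online algorithm so no history needs to be stored; the class name and threshold are invented for the example, and this is only one illustrative model, not the paper's proposed abstraction.

```python
class StreamingAnomalyDetector:
    """Flag readings whose z-score against a running (Welford) mean and
    variance exceeds a threshold -- usable for cleaning or anomaly
    detection on a live sensor stream, one reading at a time."""

    def __init__(self, z_thresh=3.0):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0          # sum of squared deviations (Welford)
        self.z = z_thresh

    def observe(self, x):
        anomalous = False
        if self.n >= 2:
            var = self.m2 / (self.n - 1)          # sample variance so far
            if var > 0 and abs(x - self.mean) > self.z * var ** 0.5:
                anomalous = True
        # Welford update: fold the new reading into mean and variance.
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
        return anomalous
```

Because the model updates in O(1) per reading, the same loop can run over historical archives and live streams alike, which is exactly the push/pull unification the abstract calls for.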
Semantic web modeling of a high school's information system along with SPARQL queries
In the first part of this work we present the modelling of a high school information system using WebProtege; the system's ontologies and class properties are described. In the second part we give an introduction to SPARQL and examples of the queries that were made, together with the results they returned.
Shape Classification Using Shape Context and Dynamic Programming
The shape classification algorithm suggested in this paper consists of several steps. The algorithm analyzes the contours of pairs of shapes: their contours are recovered and each represented by N points. Given two points p_i and q_j from the two shapes, the cost of matching them is evaluated using the shape context, and the best matching between the point sets is obtained by dynamic programming. Dynamic programming not only recovers the best matching but also identifies occlusions, i.e. points in the two shapes that cannot be properly matched, and yields the minimum cost of matching a pair of shapes. After computing the pairwise minimum cost between the input shape and all reference shapes in the given database, we sort the costs in ascending order and select the first two shapes to check whether they belong to the input class. If they do, the shape is classified as a perfect match; otherwise it is a mismatch. The algorithm has been tested on shape databases such as Kimia-25, Kimia-99, Kimia-216, and MPEG-7, giving good shape classification performance.
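The dynamic-programming step, including the occlusion handling, can be sketched as below. The sketch takes the shape-context matching costs as a precomputed matrix `cost[i][j]` (computing shape contexts themselves is outside its scope), and models an occlusion as skipping a point for a fixed penalty; the function name and the `skip` penalty value are invented for the example.

```python
def dp_match(cost, skip=0.3):
    """Align two contours given cost[i][j], the shape-context matching
    cost between point p_i of one shape and q_j of the other.  A point
    left unmatched (an occlusion) pays a fixed skip penalty.
    Returns the minimum total matching cost."""
    n, m = len(cost), len(cost[0])
    INF = float("inf")
    # d[i][j] = best cost of aligning the first i points of shape P
    #           with the first j points of shape Q.
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i < n and j < m:   # match p_i with q_j
                d[i + 1][j + 1] = min(d[i + 1][j + 1], d[i][j] + cost[i][j])
            if i < n:             # occlude p_i (no partner in Q)
                d[i + 1][j] = min(d[i + 1][j], d[i][j] + skip)
            if j < m:             # occlude q_j (no partner in P)
                d[i][j + 1] = min(d[i][j + 1], d[i][j] + skip)
    return d[n][m]
```

Classification then amounts to computing `dp_match` between the input shape and every reference shape, sorting the resulting costs in ascending order, and inspecting the top matches as the abstract describes.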
Spectral Clustering in Data Mining with a Case Study of Customer Relationship Management
In the data mining world, lead generation is a data searching technique used to collect relevant customer information (leads); one example of this technique is contextual advertising. You may have noticed that as soon as you open Google to search for something, it displays unique advertisements or sponsored links alongside the search results. These sponsored links are typically based on the search text, the logged-in user (e.g. a Google account), location, and browser, to name a few factors. Preparing customized advertisements and sponsored links in this way is called contextual advertising, and the technique is an example of lead generation: an easy and painless way of attracting people and cultivating prospective customers from among them. The key idea of this paper is to bring out the importance of data mining in the field of CRM and to explain the benefits of the M-Clustering algorithm we propose for data mining, which proves efficient thanks to its clustering approach compared with the k-means algorithm. We also compare it with Newman's algorithm, highlighting the significance of M-Clustering in terms of training-set and historical-data handling.
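Since the abstract does not specify M-Clustering itself, the sketch below shows only the k-means baseline it is compared against, applied to 2-D customer feature points. Deterministic initialization from the first k points is an assumption made here to keep the example reproducible; real k-means normally uses random or k-means++ seeding.

```python
def kmeans(points, k, iters=20):
    """Plain k-means: assign each point to its nearest centroid, then
    move each centroid to the mean of its cluster; repeat.
    points: list of (x, y) tuples.  Returns the final centroids."""
    centroids = points[:k]          # deterministic init for the example
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Nearest centroid by squared Euclidean distance.
            i = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                                            + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        centroids = [
            (sum(x for x, _ in cl) / len(cl), sum(y for _, y in cl) / len(cl))
            if cl else centroids[i]          # keep an empty cluster's centroid
            for i, cl in enumerate(clusters)
        ]
    return centroids
```

On well-separated customer segments the centroids converge to the segment means within a few iterations, which gives a concrete yardstick for any clustering algorithm proposed as an improvement.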
Design and implementation of a testing tool for code smell rectification using the C-Mean algorithm
A code smell is a hint, or the description of a symptom, that something has gone wrong somewhere in your code. Code smells are commonly occurring patterns in source code that indicate poor programming practice or code decay. Their presence can have a severe impact on the quality of a program, making the system more complex and less understandable and causing maintainability problems. Herein, an automated tool has been developed that can rectify code smells present in source code written in Java, C#, and C++, to support the quality assurance of software. The tool also computes the complexity, total memory utilized/wasted, and maintainability index of the software. This paper discusses the approach used for the design and implementation of the testing tool for code smell rectification and validates it on three different projects.
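To make the notion of detecting "commonly occurring patterns that indicate poor practice" concrete, here is a toy detector for two classic smells (long method and long parameter list). The paper's tool targets Java, C#, and C++; this sketch works on Python source via the standard `ast` module purely to illustrate the detection idea, and the thresholds in `SMELLS` are arbitrary example values.

```python
import ast

# Example thresholds -- real tools make these configurable.
SMELLS = {"long_method": 30, "long_parameter_list": 5}

def detect_smells(source):
    """Scan Python source and report (function_name, smell) pairs for
    functions that are too long or take too many parameters."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > SMELLS["long_method"]:
                findings.append((node.name, "long_method"))
            if len(node.args.args) > SMELLS["long_parameter_list"]:
                findings.append((node.name, "long_parameter_list"))
    return findings
```

Rectification (the paper's focus) would go further and rewrite the offending code, e.g. by extracting methods or introducing parameter objects; detection like the above is only the first stage of such a tool.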
Human action recognition to understand hand signals for traffic surveillance
Gesture recognition plays a vital role in computer vision. The purpose of this survey is to provide a detailed overview and categorization of current issues and trends. The recognition of human hand gesture movement can be performed at various levels of abstraction. Many applications and algorithms are discussed here, along with an explanation of the recognition system framework, and a general overview of actions and their various applications is given. Most recognition systems use data sets such as KTH and Weizmann, though other data sets have also been used by action recognition systems. Various approaches to image representation, feature extraction, activity detection, and action recognition are discussed as well.
Iterative Software Process Based Collaboration Model for Software Stakeholders
Software engineering is well known for its significance in minimizing software development complexity, and much research has been carried out to improve software engineering practice. Nevertheless, some identified problems still lead to software development complexity, such as the lack of understandable collaboration and communication between software stakeholders during development. To address this problem, earlier research proposed a collaboration model for software stakeholders during software development; however, that model was restricted to the waterfall software process model. This study reuses part of the framework behind the waterfall-based collaboration model to develop an iterative software process based collaboration model for software stakeholders during software development. The proposed model will help reduce the lack of understandable collaboration between software stakeholders during software development.