Project details for FEAST

FEAST 1.00

by apocock - February 13, 2012, 19:00:29 CET


Overall, Features, Usability, Documentation: 5/5 stars (based on 1 vote)
Description:

FEAST provides a set of implementations of information theoretic filter feature selection algorithms, and an implementation of RELIEF for comparison purposes.

This toolbox accompanies the paper "Conditional Likelihood Maximisation: A Unifying Framework for Information Theoretic Feature Selection", JMLR 2012.

All information theoretic algorithms are implemented in C and depend upon the supplied MIToolbox (also available separately on mloss.org). These algorithms have a MATLAB interface wrapper, which also includes the two algorithms implemented directly in MATLAB (FCBF and RELIEF).

Contains implementations of: mim, mrmr, mifs, cmim, jmi, disr, cife, icap, condred, cmi, relief, fcbf, betagamma

MATLAB Example (using "data" as our feature matrix, and "labels" as the class label vector):

size(data) %% ans = [569 30], denoting 569 examples and 30 features

selectedIndices = feast('jmi',5,data,labels) %% selecting the top 5 features using the jmi algorithm

selectedIndices =

28 21 8 27 23

selectedIndices = feast('mrmr',10,data,labels) %% selecting the top 10 features using the mrmr algorithm

selectedIndices =

28 24 22 8 27 21 29 4 7 25

selectedIndices = feast('mifs',5,data,labels,0.7) %% selecting the top 5 features using the mifs algorithm with beta = 0.7

selectedIndices =

28 24 22 20 29

If you wish to use MIM in a C program, you can use the BetaGamma function with Beta = 0, Gamma = 0, as this is equivalent to MIM (but slower than the other implementation). MIToolbox is required to compile these algorithms, and these implementations supersede the example implementations given in that package (they have more robust behaviour when used with unexpected inputs).
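The same equivalence can be sketched from the MATLAB wrapper; this assumes the betagamma criterion takes beta and gamma as trailing arguments, in the same way the mifs example above passes beta (treat the exact argument order as an assumption rather than documentation):

selectedIndices = feast('betagamma',5,data,labels,0.0,0.0) %% assumed call: beta then gamma, both zero, which reduces the criterion to MIM
selectedIndices = feast('mim',5,data,labels) %% the direct MIM ranking, which should agree with the call above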

Please cite this work by using the corresponding paper BibTeX.

Changes to previous version:

Initial Announcement on mloss.org.

Supported Operating Systems: Linux, Mac OS X, Windows
Data Formats: Matlab
Tags: Matlab, Feature Selection, Feature Ranking, Mutual Information, Jmlr

Other available revisions

Version 1.1.1 (June 30, 2014, 01:30:23)
  • Bug fixes to memory management.
  • Compatibility changes for the PyFeast Python wrapper (note the C library now returns feature indices starting from 0, the Matlab wrapper still returns indices starting from 1).
  • Added C version of MIM.
  • Updated internal version of MIToolbox.

Version 1.00 (February 13, 2012, 19:00:29)
Initial Announcement on mloss.org.

Comments

Ryan Brauchler (on February 5, 2014, 22:57:35)

First off, I'm a huge fan of this toolbox, and generally, it works great.

I am implementing it to select features from an extremely high dimensional data set (~ 1,000 x 500,000). To do so, I placed this function into a loop where I break the initial matrix (which is held in a memmapfile to save memory) into N smaller bits and then feed that into FEAST and return the columns of the chosen features. The output of this loop is simply a 1xN cell array where each cell in the array is a list of the columns chosen by the algorithm. I am just using the mrmr algorithm in the toolbox.

After a few iterations of this loop, however, I always get an out of memory error from Matlab. The toolbox works great in isolation, but when you start using it multiple times in a loop, it seems there is a memory leak in the MEX file for mrmr, and I would imagine it's also in the others. Just wanted to bring the issue to your attention. I am going to focus on fixing it myself, but figured it might be faster to put it in the hands of the experts who wrote it.

Thanks!

Adam Pocock (on February 6, 2014, 02:40:32)

Hi Ryan,

Do you know if the error occurs when using FEAST from C/C++? I've had issues with Matlab before where it doesn't properly free memory allocated in MEX files (even though mxFree was called), which eventually leads to memory fragmentation and then subsequent MEX calls fail.

What version of Matlab are you using, and how much RAM is there in the machine?

Adam

Ryan Brauchler (on February 6, 2014, 04:16:54)

Hi Adam.

Thank you so much for the prompt response.

I have yet to try running the code outside Matlab, though I could attempt to do so tomorrow. It very well could be the case that it's a Matlab fragmentation error. If that's in fact the case, is there a workaround that overcomes this issue?

I'm using Matlab R2013a with 32GB of RAM in the machine. When the array is broken up into smaller pieces, each piece is less than 3MB in size, so that shouldn't end up being an issue of physical memory limitations.

Thanks,

Ryan

Adam Pocock (on February 6, 2014, 05:22:11)

Hi Ryan,

One test you could do is see if Feast has the same issue (in Matlab) if you just rerun it multiple times on the same partition of the dataset. That way Matlab shouldn't do any more allocation beyond what is necessary to store the features (though it will still fragment the RAM a bit). We tested it fairly heavily by running it repeatedly across multiple datasets in the same Matlab instance, but we didn't have anything quite as large as the one you are using.
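A minimal sketch of that test (the partition size and the iteration count below are purely illustrative):

dataPartition = data(:,1:1000); % one fixed slice of the full feature matrix
for i = 1:50
    selectedIndices = feast('mrmr',50,dataPartition,labels); % identical inputs on every iteration
end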

Drop me an email at apocock at cs.man.ac.uk and we can look at this in a little more detail.

Thanks,

Adam

YS L (on July 11, 2014, 03:27:02)

Hi Admin,

I just found this toolbox and am going to use it for my project. However, I am still running into a memory problem. I call the function like this: [selectedFeatures] = feast('mrmr',200,X,Y), where X is a (# samples) × (# features) matrix and Y is a (# samples) × 1 binary vector (-1,1). When I call the feast function, MATLAB reports the error "Out of memory".

I downloaded the newest version of the toolbox. My Matlab version is R2012a and the machine has 32 GB of RAM. The data is quite small, around 200 × 300. Is there any suggestion for fixing the problem?

Thanks a lot !

Adam Pocock (on July 11, 2014, 04:46:35)

Hi,

There are two preprocessing steps you should do before using the toolbox.

  • Ensure the data has been discretised into bins.
  • Transform the data by relabelling it (if a feature has values {1,100,1000}, convert it to {1,2,3}).

This should fix any out of memory errors for small datasets.
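A minimal Matlab sketch of both steps, reusing the X and Y from your call above (equal-width binning; the choice of 10 bins and the variable names are only illustrative):

numBins = 10;
Xd = zeros(size(X));
for f = 1:size(X,2)
    col = X(:,f);
    % step 1: discretise the feature into equal-width bins numbered 1..numBins
    Xd(:,f) = min(numBins, 1 + floor(numBins * (col - min(col)) / (max(col) - min(col) + eps)));
    % step 2: relabel so the feature only uses the values 1..k (e.g. {1,100,1000} -> {1,2,3})
    [~,~,Xd(:,f)] = unique(Xd(:,f));
end
selectedFeatures = feast('mrmr',200,Xd,Y);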

Adam

YS L (on July 11, 2014, 05:17:59)

Hi Adam,

That works! Thanks a lot

A Smith (on August 5, 2014, 03:08:58)

Adam,

I'm having some trouble using FCBF. Inside the "SU" subfunction, there seem to be two functions, h() and mi(), that are undefined. Did I not install the Toolbox correctly?

Adam Pocock (on August 5, 2014, 04:03:14)

Looks like I forgot to add a bit to my setup instructions. FCBF uses the Matlab interface to MIToolbox (unlike the rest of FEAST which compiles in the C code), so you need to add the MIToolbox folder to your Matlab path. MIToolbox contains the h.m and mi.m files which FCBF uses.
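For example (the path below is only a placeholder for wherever MIToolbox was unpacked):

addpath('/path/to/MIToolbox')  % makes h.m and mi.m visible to FCBF
savepath                       % optional: keep it on the path for future sessions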

Adam

ahmed hamdy (on September 22, 2014, 16:05:11)

First of all, let me thank you for your great work. I'm intending to use your toolbox (FEAST). I'm using Matlab 12 and I added FEAST and MIToolbox to the Matlab path. Then I tried index = feast('jmi',10,horse,Diagnosis), where horse is the feature matrix (299x26) and Diagnosis is the target concept (299x1),

but I got the following error:

Undefined function 'MIToolboxMex' for input arguments of type 'double'.

Error in feast (line 70)
selectedFeatures = FSToolboxMex(4, numToSelect, data, labels);

Please help me.

Ahmed Hamdy

Adam Pocock (on September 22, 2014, 16:19:20)

Hi Ahmed,

You need to compile MIToolbox as well as FEAST. Run CompileMIToolbox.m in the MIToolbox directory.
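That is, from the Matlab prompt (the directory name assumes the default layout of the unpacked archives):

cd MIToolbox        % or wherever the MIToolbox sources live
CompileMIToolbox    % runs mex over MIToolboxMex.c and its dependencies
cd ..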

Adam

ahmed hamdy (on September 22, 2014, 16:35:50)

Thanks a lot, Adam. When running CompileMIToolbox.m I got the following error:

Error using mex (line 206)
Unable to complete successfully.

Error in CompileMIToolbox (line 3)
mex MIToolboxMex.c MutualInformation.c Entropy.c CalculateProbability.c ArrayOperations.c

Adam Pocock (on September 22, 2014, 16:42:30)

Hi Ahmed,

Drop me an email (apocock at cs dot man dot ac dot uk). It looks like mex isn't set up correctly, so could you send me your OS (plus whether it's 32 or 64 bit), Matlab version, and C compiler version?

Adam

Anupam D (on October 7, 2014, 22:55:39)

Hi Adam, I'm trying to use the FEAST toolbox for feature selection. I have a relatively small data set right now, but I'm getting the following error in Matlab: "Out of memory". I tried using both Matlab R2014a and R2013.

My data size is 130x50 (samples x features). The feature values are floating point numbers ranging from -1 to 1000000.

Are there any restrictions on the range of values that can be used to represent a feature?

Adam Pocock (on October 8, 2014, 00:20:20)

Hi Anupam,

The short answer is yes, you should use a small range of values.

FEAST uses the discrete mutual information and will round your floating point values down to the nearest integer. It's better to do this yourself before using the tool, as then you get more control over it. After you've done the discretisation you should map the discrete values into a smaller range (e.g. if a feature has values {1,100,1000}, convert it to {1,2,3}). FEAST uses a relatively naive mapping function as it is written for maximum compatibility, so smaller values allocate less memory. The discrete mutual information doesn't take into account the value of a variable, so this transformation doesn't change the result.
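That invariance is easy to check with MIToolbox's mi.m (the file mentioned in the FCBF comment above); this sketch assumes mi simply takes the feature vector and the label vector as its two arguments:

x  = [1; 100; 1000; 100; 1];  % a feature with three distinct values
xr = [1; 2; 3; 2; 1];         % the same feature relabelled to {1,2,3}
y  = [1; 1; 2; 2; 1];         % some class labels
mi(x,y)   % gives the same value as...
mi(xr,y)  % ...this, since only the grouping of the samples matters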

You should discretise it into a small number of bins given you only have 130 samples; the mutual information behaves poorly when the number of states/bins is approximately the same as the number of samples.

Adam

Anupam D (on October 8, 2014, 00:49:41)

Thanks, Adam, for the explanation. Two more questions for you :)

  1. Could you clarify what you are referring to by discretization? The 130 samples are from 13 classes.

  2. So would normalizing the feature values to [0,1] work? I see it works (it doesn't give the 'Out of memory' error), but what kind of impact does that have on the selection algorithm?

-Anupam

Adam Pocock (on October 8, 2014, 01:00:33)

Hi Anupam,

By discretisation I mean mapping the floating point values onto a small set of integer values. For example if you have a feature in the range [1, 100], then you could map all values in the range [1,10] to 1, [11,20] -> 2 ... [91,100] -> 10.

I would not recommend normalizing the features to [0,1] as then there are only two states in the discrete MI calculation (0 and 1), which loses a lot of information. The choice of the number of bins is a trade-off between the amount of information available for learning and the estimation error in the mutual information. Small numbers of states have low amounts of information and low estimation error. Large numbers of states have high information and high estimation error. In my publications we usually used 10 bins, and that seemed to work well across a variety of datasets. If you had many more samples then you could try using more bins, but with 130 samples I would recommend no more than 10. If you want to read about our experimental setup, it's in the "Conditional Likelihood Maximisation: A Unifying..." paper in the BibTeX on this page.

Adam

Anupam D (on October 8, 2014, 01:06:35)

Great! Thanks for the quick response.

-Anupam

Guofa Li (on October 10, 2014, 05:23:47)

I'd like to say that this is a great toolbox! Using the toolbox, I selected the best features to recognize the different categories. Here is a question I'd like to ask: is there a way to know the correct categorization rate using the selected features?

Guofa Li (on October 10, 2014, 05:37:48)

I read the questions from Anupam and the answers from Adam. Here is another question. If I do not map the values into ranges (e.g. [1,10] -> 1, [11,20] -> 2 ... [91,100] -> 10), what will happen?

I ran the feast function and got the selected indices. But the values I used were all double numbers (e.g. 75.7702, 98.4044) and I got the selected indices successfully. Is there anything wrong with the results?

Adam Pocock (on October 11, 2014, 00:24:00)

Hi Guofa,

I'm not quite sure what your first question is asking. Do you want to know the classification error rate using the selected features, or some other property? If you want to find the classification error rate then you will need to train a classifier on the training dataset using the selected features. FEAST uses filter algorithms which measure the quality of a feature set in a different way than the classification error.
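A minimal sketch of that second step (a holdout split plus a hand-rolled 1-nearest-neighbour classifier, so no extra toolboxes are needed; the split ratio, the choice of jmi, and the classifier are illustrative only, and data/labels are assumed to be discretised already as discussed in the earlier comments):

n = size(data,1);
perm = randperm(n);
nTrain = round(0.7*n);
trIdx = perm(1:nTrain)';
teIdx = perm(nTrain+1:end)';
sel = feast('jmi',5,data(trIdx,:),labels(trIdx));  % select features on the training portion only
Xtr = data(trIdx,sel);
Xte = data(teIdx,sel);
pred = zeros(numel(teIdx),1);
for i = 1:numel(teIdx)
    d = sum((Xtr - repmat(Xte(i,:),size(Xtr,1),1)).^2, 2);  % squared distance to every training example
    [~,nn] = min(d);
    pred(i) = labels(trIdx(nn));
end
errorRate = mean(pred ~= labels(teIdx))  % 1 - errorRate is the correct classification rate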

The feast function will run in many cases without discretisation, though the selected features might not have any meaning. There isn't any validation in the code to check whether the mutual information values used are calculated accurately (and this is a difficult problem to detect in general). If you only have a small number of distinct floating point values (i.e. many data points all had the value 75.7702) then the result will be sound; however, in general it is better to discretise the values. If you don't remap the values into ranges then you are likely to run into the out of memory errors reported in the other comments, though the remapping doesn't affect the quality of the results.

Basically, discretise your dataset into a smallish number of bins before running FEAST. Otherwise the answers that are generated are not reliable.

Thanks,

Adam

Joshua Juen (on October 12, 2014, 06:33:33)

Hi,

I am having a problem getting the same results from the C library (and Python bindings) as I do from the Matlab version of the library. I am using as input the test data set provided by PyFeast (digit.txt). I ran jmi selecting the top 15 features through the Python test and got:

[6.0, 11.0, 14.0, 19.0, 22.0, 27.0, 30.0, 35.0, 38.0, 43.0, 46.0, 51.0, 54.0, 59.0, 62.0]

I also ran a test on the same data in the Matlab version running on Windows and got:

14, 21, 22, 27, 28, 30, 35, 37, 38, 43, 44, 45, 51, 59, 62

I was confused by the difference in output with the same input data and same method, so I wrote a program in straight C to use the C library directly (no Python bindings). When I call the C function jmi with the same input data I get:

27,19,35,11,51,59,14,22,38,54,62,46,43,30,6

This clearly matches the output from the python binding (when sorted) but not Matlab. To make sure I was not having a problem with Matlab on Windows, I also installed and compiled on Matlab running on Ubuntu. I ran with the same input data again and got:

14, 21, 22, 27, 28, 30, 35, 37, 38, 43, 44, 45, 51, 59, 62

Am I doing something wrong here? Why is the output different between my Matlab compilations and my C (and PyFeast) implementations?

Thanks Josh

Joshua Juen (on October 12, 2014, 16:28:53)

Upon further investigation, it looks like the Matlab code feeds the data into jmi in column-major order and the C code examples (and python bindings) feed them in using row-major order. That seems to be the difference.

Should jmi take the data in row-major or column-major order?

Based on the code, it looks like it should be column-major order, but I wanted to confirm before moving ahead. In that case, the sample input files in PyFeast are incorrect and I will correspond with the author. I will also fix my C implementation.

Thanks Josh

Adam Pocock (on October 12, 2014, 17:32:18)

Hi Joshua,

FEAST expects things in column-major order. I haven't looked too closely at the Python bindings; I had assumed they also worked in column-major order. One thing to be careful of is that the ordering of the feature indices has meaning. You should get exactly the same result in the same order out of FEAST no matter what interface you use.

Thanks,

Adam

Adam Pocock (on October 12, 2014, 23:56:24)

Hi Joshua,

I had a look at PyFeast, and it's not the sample data file that is incorrect; the loading code is just missing the order="F" flag when the numpy array is created (line 28). With that change you get the same results out of MATLAB and Python (once you've subtracted 1 from the MATLAB results, as MATLAB indexes from 1). I made an issue on their GitHub, and updated the documentation for FEAST on my GitHub to reflect that it requires column-major matrices.

I think I must have removed the comments that noted this in an earlier pre-release version.

Thanks,

Adam
