There are some clear distinctions between Text Search and Visual Search. One of the main ones is the expected result set.
Expected Results with Text Search
With a text search, if you enter the word Tiger as the search term, you might get 8100 responses. However, there are two problems. First, you will not get the 2000 images of tigers where the person who logged the images entered Tigers into the keywords or filename. Second, you WILL get the 8000 images of tiger lilies, tiger monarch butterflies, Tiger Woods, tiger sharks, tiger attack helicopters, tiger planes, tiger cages, tiger teeth, and so on. So you will receive 8100 images, only 100 of which you want, while missing the other 2000 that you do want. Because the ones you really want could be at the end of those 8100 images, you will have to page through them one at a time to see every one. There is no way to get closer to the type of image you want, or to filter out the unrelated ones.
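The two failure modes above can be sketched with a toy exact-keyword matcher. The catalog, filenames, and keywords here are hypothetical, and real systems are more elaborate, but the sketch shows how a search for "Tiger" misses an image keyworded only as "Tigers" while happily returning tiger lilies and Tiger Woods:

```python
# Minimal sketch of exact-keyword text search over a hypothetical catalog.
# It demonstrates both problems: missed plurals and irrelevant hits.

def keyword_search(query, catalog):
    """Return filenames whose keyword list contains the query token exactly."""
    q = query.lower()
    return [name for name, keywords in catalog
            if q in (k.lower() for k in keywords)]

catalog = [
    ("big_cat_01.jpg", ["Tigers"]),             # plural only: missed entirely
    ("flower_17.jpg",  ["tiger", "lily"]),      # irrelevant hit
    ("golf_pro.jpg",   ["Tiger", "Woods"]),     # irrelevant hit
    ("big_cat_02.jpg", ["tiger", "wildlife"]),  # the image actually wanted
]

print(keyword_search("Tiger", catalog))
# → ['flower_17.jpg', 'golf_pro.jpg', 'big_cat_02.jpg']
```

Note that `big_cat_01.jpg` never appears: without stemming, "Tiger" and "Tigers" are simply different strings.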
Expected Results with Visual Search
Searching with a true visual search engine like eVe is a bit different. You generally begin with an initial text search. You then select, in a very intuitive manner, an image that is similar to what you are looking for. From there you iteratively narrow down the images displayed to you by selecting the aspects of that image (or of a different, closer image) that you want to see more of in the next set of responses. Thus, instead of paging sequentially through the entire returned dataset, you filter closer and closer to the image you are looking for.
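The refinement loop can be illustrated with a simple query-by-example sketch. The feature vectors and filenames below are invented for illustration, and eVe's actual image analysis is far richer than two numbers per image, but the shape of the interaction is the same: each round re-ranks the collection by similarity to whichever image you selected last.

```python
# Hypothetical sketch of iterative query-by-example refinement.
# Each image is reduced to a feature vector (standing in for colour,
# shape, texture, etc.); a round ranks all images by distance to the
# currently selected example.

import math

def rank_by_similarity(example, features):
    """Sort image names by Euclidean distance to the example's vector."""
    ex = features[example]
    return sorted(features, key=lambda name: math.dist(ex, features[name]))

# Toy 2-D feature vectors (assumed values).
features = {
    "tiger_closeup.jpg": (0.9, 0.8),
    "tiger_field.jpg":   (0.8, 0.7),
    "tiger_lily.jpg":    (0.2, 0.9),
    "golf_pro.jpg":      (0.1, 0.1),
}

# Round 1: start from a rough match found via text search.
round1 = rank_by_similarity("tiger_field.jpg", features)

# Round 2: the user picks a closer image from round 1 and re-ranks.
round2 = rank_by_similarity("tiger_closeup.jpg", features)
```

Each selection pulls the ranking toward the target, so the relevant images cluster at the top instead of being scattered through thousands of results.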
You will notice that seemingly odd, unconnected images occasionally pop up in the results, closer to the top than other images that seem more similar to the target image. These are called False Positives: cases where the search engine has reached a bit outside the search criteria to ensure that it retrieves as many of the relevant images as it can.
Learn how the computer sees
Some of your expectations in searching need to be tempered by considering what the computer sees as similar to the target image, rather than exactly what you want in the next set of returned images. In a sense, with the help of the object map, you need to get a feel for how the eVe engine has analyzed the images, and how it examines them during retrieval, depending on the search criteria you have entered. Some of this is logic, using the guidelines above; some of it is intuition and practice gained simply by playing around.