Sphinx 0.9.8 reference manual

Free open-source SQL full-text search engine


Table of Contents

1. Introduction
1.1. About
1.2. Sphinx features
1.3. Where to get Sphinx
1.4. License
1.5. Author and contributors
1.6. History
2. Installation
2.1. Supported systems
2.2. Required tools
2.3. Installing Sphinx
2.4. Known installation issues
2.5. Quick Sphinx usage tour
3. Indexing
3.1. Data sources
3.2. Attributes
3.3. Indexes
3.4. Restrictions on the source data
3.5. Charsets, case folding, and translation tables
3.6. SQL data sources (MySQL, PostgreSQL)
3.7. XMLpipe data source
3.8. Live index updates
4. Searching
4.1. Matching modes
4.2. Boolean query syntax
4.3. Extended query syntax
4.4. Weighting
4.5. Sorting modes
4.6. Grouping (clustering) search results
4.7. Distributed searching
4.8. searchd query log format
5. API reference
5.1. General API functions
5.1.1. GetLastError
5.1.2. GetLastWarning
5.1.3. SetServer
5.1.4. SetArrayResult
5.2. General query settings
5.2.1. SetLimits
5.2.2. SetMaxQueryTime
5.3. Full-text search query settings
5.3.1. SetMatchMode
5.3.2. SetRankingMode
5.3.3. SetSortMode
5.3.4. SetWeights
5.3.5. SetFieldWeights
5.3.6. SetIndexWeights
5.4. Result set filtering settings
5.4.1. SetIDRange
5.4.2. SetFilter
5.4.3. SetFilterRange
5.4.4. SetFilterFloatRange
5.4.5. SetGeoAnchor
5.5. GROUP BY settings
5.5.1. SetGroupBy
5.5.2. SetGroupDistinct
5.6. Querying
5.6.1. Query
5.6.2. AddQuery
5.6.3. RunQueries
5.6.4. ResetFilters
5.6.5. ResetGroupBy
5.7. Additional functionality
5.7.1. BuildExcerpts
5.7.2. UpdateAttributes
6. MySQL storage engine (SphinxSE)
6.1. SphinxSE overview
6.2. Installing SphinxSE
6.2.1. Compiling MySQL 5.0.x with SphinxSE
6.2.2. Compiling MySQL 5.1.x with SphinxSE
6.2.3. Checking SphinxSE installation
6.3. Using SphinxSE
7. Reporting bugs
8. sphinx.conf options reference
8.1. Data source configuration options
8.1.1. type
8.1.2. strip_html
8.1.3. index_html_attrs
8.1.4. sql_host
8.1.5. sql_port
8.1.6. sql_user
8.1.7. sql_pass
8.1.8. sql_db
8.1.9. sql_sock
8.1.10. sql_query_pre
8.1.11. sql_query
8.1.12. sql_query_range
8.1.13. sql_range_step
8.1.14. sql_group_column
8.1.15. sql_date_column
8.1.16. sql_str2ordinal_column
8.1.17. sql_query_post
8.1.18. sql_query_post_index
8.1.19. sql_query_info
8.1.20. xmlpipe_command
8.2. searchd program configuration options
8.2.1. seamless_rotate
A. Sphinx revision history

1. Introduction

1.1. About

Sphinx is a full-text search engine, distributed under GPL version 2. Commercial licensing (eg. for embedded use) is also available upon request.

Generally, it's a standalone search engine, meant to provide fast, size-efficient and relevant full-text search functions to other applications. Sphinx was specially designed to integrate well with SQL databases and scripting languages.

Currently, built-in data source drivers support fetching data either via a direct connection to MySQL or PostgreSQL, or from a pipe in a custom XML format. Adding new drivers (eg. to natively support other DBMSes) is designed to be as easy as possible.

The search API is natively ported to PHP, Python, Perl, Ruby, and Java, and is also available as a pluggable MySQL storage engine. The API is very lightweight, so porting it to a new language is known to take a few hours.

As for the name, Sphinx is an acronym which is officially decoded as SQL Phrase Index. Yes, I know about CMU's Sphinx project.

1.2. Sphinx features

  • high indexing speed (up to 10 MB/sec on modern CPUs);
  • high search speed (avg query is under 0.1 sec on 2-4 GB text collections);
  • high scalability (up to 100 GB of text, up to 100 M documents on a single CPU);
  • provides good relevance ranking through combination of phrase proximity ranking and statistical (BM25) ranking;
  • provides distributed searching capabilities;
  • provides document excerpts generation;
  • provides searching from within MySQL through a pluggable storage engine;
  • supports boolean, phrase, and word proximity queries;
  • supports multiple full-text fields per document (up to 32 by default);
  • supports multiple additional attributes per document (ie. groups, timestamps, etc);
  • supports stopwords;
  • supports both single-byte encodings and UTF-8;
  • supports English stemming, Russian stemming, and Soundex for morphology;
  • supports MySQL natively (MyISAM and InnoDB tables are both supported);
  • supports PostgreSQL natively.

1.3. Where to get Sphinx

Sphinx is available through its official Web site at http://www.sphinxsearch.com/.

Currently, Sphinx distribution tarball includes the following software:

  • indexer: a utility which creates fulltext indexes;
  • search: a simple command-line (CLI) test utility which searches through fulltext indexes;
  • searchd: a daemon which enables external software (eg. Web applications) to search through fulltext indexes;
  • sphinxapi: a set of searchd client API libraries for popular Web scripting languages (PHP, Python, Perl, Ruby).

1.4. License

This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. See COPYING file for details.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

If you don't want to be bound by GNU GPL terms (for instance, if you would like to embed Sphinx in your software, but would not like to disclose its source code), please contact the author to obtain a commercial license.

1.5. Author and contributors

Author

Sphinx's initial author and current primary developer is Andrew Aksyonoff.

Contributors

People who contributed to Sphinx and their contributions (in no particular order) are:

  • Robert "coredev" Bengtsson (Sweden), initial version of PostgreSQL data source;
  • Len Kranendonk, Perl API;
  • Dmytro Shteflyuk, Ruby API.

Many other people have contributed ideas, bug reports, fixes, etc. Thank you!

1.6. History

Sphinx development was started back in 2001, because I didn't manage to find an acceptable search solution (for a database driven Web site) which would meet my requirements. Actually, each and every important aspect was a problem:

  • search quality (ie. good relevance)
    • statistical ranking methods performed rather badly, especially on large collections of small documents (forums, blogs, etc)
  • search speed
    • especially if searching for phrases which contain stopwords, as in "to be or not to be"
  • moderate disk and CPU requirements when indexing
    • important in a shared hosting environment, not to mention the indexing speed.

Despite the amount of time that has passed and the numerous improvements made in the other solutions, there's still no solution which I personally would be eager to migrate to.

Considering that, and a lot of positive feedback received from Sphinx users over the last years, the obvious decision is to continue developing Sphinx (and, eventually, to take over the world).

2. Installation

2.1. Supported systems

Most modern UNIX systems with a C++ compiler should be able to compile and run Sphinx without any modifications.

Currently known systems Sphinx has been successfully running on are:

  • Linux 2.4.x, 2.6.x (various distributions)
  • Windows 2000, XP
  • FreeBSD 4.x, 5.x, 6.x
  • NetBSD 1.6, 3.0
  • Solaris 9, 11
  • Mac OS X

CPU architectures known to work include X86, X86-64, SPARC64.

I hope Sphinx will work on other Unix platforms as well. If the platform you run Sphinx on is not in this list, please do report it.

At the moment, the Windows version of Sphinx is not intended to be used in production, but rather for testing and debugging only. The two most prominent issues are missing concurrent queries support (client queries are stacked on TCP connection level instead), and missing index data rotation support. There are successful production installations which work around these issues. However, running a high-volume search service under Windows is still not recommended.

2.2. Required tools

On UNIX, you will need the following tools to build and install Sphinx:

  • a working C++ compiler. GNU gcc is known to work.
  • a good make program. GNU make is known to work.

On Windows, you will need Microsoft Visual C/C++ Studio .NET 2003 or 2005. Other compilers/environments will probably work as well, but for the time being, you will have to build the makefile (or other environment-specific project files) manually.

2.3. Installing Sphinx

  1. Extract everything from the distribution tarball (haven't you already?) and go to the sphinx subdirectory:

    $ tar xzvf sphinx-0.9.8.tar.gz
    $ cd sphinx

  2. Run the configuration program:

    $ ./configure

    There are a number of options to configure. The complete listing may be obtained by using the --help switch. The most important ones are:

    • --prefix, which specifies where to install Sphinx;
    • --with-mysql, which specifies where to look for MySQL include and library files, if auto-detection fails;
    • --with-pgsql, which specifies where to look for PostgreSQL include and library files.

  3. Build the binaries:

    $ make

  4. Install the binaries in the directory of your choice:

    $ make install

2.4. Known installation issues

If configure fails to locate MySQL headers and/or libraries, try checking for and installing the mysql-devel package. On some systems, it is not installed by default.

If make fails with a message which looks like

/bin/sh: g++: command not found
make[1]: *** [libsphinx_a-sphinx.o] Error 127

try checking for and installing the gcc-c++ package.

If you are getting compile-time errors which look like

sphinx.cpp:67: error: invalid application of `sizeof' to
    incomplete type `Private::SizeError<false>'

this means that some compile-time type size check failed. The most probable reason is that the off_t type is less than 64 bits on your system. As a quick hack, you can edit sphinx.h and replace off_t with DWORD in the typedef for SphOffset_t, but note that this will prohibit you from using full-text indexes larger than 2 GB. Even if the hack helps, please report such issues, providing the exact error message and compiler/OS details, so I could properly fix them in the next releases.

If you keep getting any other error, or the suggestions above do not seem to help you, please don't hesitate to contact me.

2.5. Quick Sphinx usage tour

All the example commands below assume that you installed Sphinx in /usr/local/sphinx.

To use Sphinx, you will need to:

  1. Create a configuration file.

    The default configuration file name is sphinx.conf. All Sphinx programs look for this file in the current working directory by default.

    A sample configuration file, sphinx.conf.dist, which has all the options documented, is created by configure. Copy and edit that sample file to make your own configuration:

    $ cd /usr/local/sphinx/etc
    $ cp sphinx.conf.dist sphinx.conf
    $ vi sphinx.conf

    The sample configuration file is set up to index the documents table from MySQL database test; the example.sql sample data file is provided to populate that table with a few documents for testing purposes:

    $ mysql -u test < /usr/local/sphinx/etc/example.sql

  2. Run the indexer to create a full-text index from your data:

    $ cd /usr/local/sphinx/etc
    $ /usr/local/sphinx/bin/indexer

  3. Query your newly created index!

To query the index from the command line, use the search utility:

$ cd /usr/local/sphinx/etc
$ /usr/local/sphinx/bin/search test

To query the index from your PHP scripts, you need to:

  1. Run the search daemon which your script will talk to:

    $ cd /usr/local/sphinx/etc
    $ /usr/local/sphinx/bin/searchd

  2. Run the attached PHP API test script (to ensure that the daemon was successfully started and is ready to serve the queries):

    $ cd sphinx/api
    $ php test.php test

  3. Include the API (it's located in api/sphinxapi.php) into your own scripts and use it.
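
A minimal sketch of such a script might look like this (using the "test" index from the bundled sample configuration; the exact index, host, and port depend on your setup, and error handling is kept to a bare minimum):

// assumes sphinxapi.php from the distribution is in the include path
require ( "sphinxapi.php" );

$cl = new SphinxClient ();
$cl->SetServer ( "localhost", 3312 ); // default searchd host and port

$result = $cl->Query ( "my first query", "test" );
if ( $result===false )
{
	// query failed; ask the API why
	print "Query failed: " . $cl->GetLastError() . "\n";
} else if ( !empty($result["matches"]) )
{
	// print document IDs and weights of all returned matches
	foreach ( $result["matches"] as $docid=>$match )
		print "found document $docid, weight " . $match["weight"] . "\n";
}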

Happy searching!

3. Indexing

3.1. Data sources

The data to be indexed can generally come from very different sources: SQL databases, plain text files, HTML files, mailboxes, and so on. From Sphinx's point of view, the data it indexes is a set of structured documents, each of which has the same set of fields. This is biased towards SQL, where each row corresponds to a document, and each column to a field.

Depending on what source Sphinx should get the data from, different code is required to fetch the data and prepare it for indexing. This code is called a data source driver (or simply driver or data source for brevity).

At the time of this writing, there are drivers for MySQL and PostgreSQL databases, which can connect to the database using its native C/C++ API, run queries and fetch the data. There's also a driver called XMLpipe, which runs a specified command and reads the data from its stdout. See Section 3.7, “XMLpipe data source” for the format description.

There can be as many sources per index as necessary. They will be sequentially processed in the very same order which was specified in the index definition. All the documents coming from those sources will be merged as if they were coming from a single source.

3.2. Attributes

It is often needed to do some additional processing of full-text search results depending not only on matching document ID and weight, but on a number of other per-document values as well. For instance, one might need to sort news search results by date and then relevance, or search through products within specified price range, or limit blog search to posts made by selected users, or group results by month.

To do that efficiently, Sphinx allows you to attach a number of additional attributes to each document, and stores their values when indexing. These values may then be used to filter, sort, or group full-text matches when searching.

A good example would be a forum posts table. Assume that 'title' and 'content' fields need to be full-text searchable, but it is also needed to optionally limit searching to some author or sub-forum (ie. specific values of 'author_id' or 'forum_id'), or to sort matches by 'post_date', or to group matching posts by month of the 'post_date' and calculate per-group match counts.

This can be achieved by specifying all the mentioned columns (excluding 'title' and 'content' which are full-text fields) as attributes and then using API calls to set up filtering, sorting, and grouping. Here is an example.

Example sphinx.conf part:

...
sql_query = SELECT id, title, content, \
	author_id, forum_id, post_date FROM my_forum_posts
sql_group_column = author_id
sql_group_column = forum_id
sql_date_column = post_date
...

Example application code (in PHP):

// only search posts by author whose ID is 123
$cl->SetFilter ( "author_id", array ( 123 ) );

// only search posts in sub-forums 1, 3 and 7
$cl->SetFilter ( "forum_id", array ( 1,3,7 ) );

// sort found posts by posting date in descending order
$cl->SetSortMode ( SPH_SORT_ATTR_DESC, "post_date" );

Attributes are named. Attribute names are case insensitive.

Attributes are not full-text indexed; they are stored in the index as is.

Currently supported attribute types are:

  • 32-bit unsigned integer,
  • UNIX timestamp.

Attribute values are currently internally stored as fixed-size 4-byte values. A set of all per-document attribute values is called docinfo. Docinfos can either be

  • stored separately ("extern" storage in .spa file), or
  • attached to each occurrence of document ID in full-text index data ("inline" storage in .spd file).

Externally stored docinfo is kept in RAM when searching. Thus "inline" may be the only viable option for huge (50-100+ million documents) datasets because of limited RAM size. However, for smaller datasets "extern" storage makes both indexing and searching much more efficient.

Additional search-time memory requirements for extern storage are (1+number_of_attrs)*number_of_docs*4 bytes, ie. 10 million docs with 2 groups and 1 timestamp will take (1+2+1)*10M*4 = 160 MB of RAM. This is PER DAEMON, ie. searchd will alloc 160 MB on startup, read the data and keep it shared between queries; the children will NOT allocate additional copies of this data.

3.3. Indexes

To be able to answer full-text search queries fast, Sphinx needs to build a special data structure optimized for such queries from your text data. This structure is called index; and the process of building index from text is called indexing.

Different index types are well suited for different tasks. For example, a disk-based tree-based index would be easy to update (ie. insert new documents to existing index), but rather slow to search. Therefore, Sphinx architecture allows for different index types to be implemented easily.

The only index type which is implemented in Sphinx at the moment is designed for maximum indexing and searching speed. This comes at a cost of updates being really slow; theoretically, it might be slower to update this type of index than to reindex it from scratch. However, this very frequently could be worked around with multiple indexes, see Section 3.8, “Live index updates” for details.

It is planned to implement more index types, including the type which would be updateable in real time.

There can be as many indexes per configuration file as necessary. The indexer utility can reindex either all of them (if the --all option is specified), or a certain explicitly specified subset. The searchd utility will serve all the specified indexes, and the clients can specify what indexes to search at run time.

3.4. Restrictions on the source data

There are a few different restrictions imposed on the source data which is going to be indexed by Sphinx, of which the single most important one is:

ALL DOCUMENT IDS MUST BE UNIQUE UNSIGNED NON-ZERO INTEGER NUMBERS (32-BIT OR 64-BIT, DEPENDING ON BUILD TIME SETTINGS).

If this requirement is not met, different bad things can happen. For instance, Sphinx can crash with an internal assertion while indexing; or produce strange results when searching due to conflicting IDs. Also, a 1000-pound gorilla might eventually come out of your display and start throwing barrels at you. You've been warned.

3.5. Charsets, case folding, and translation tables

When building an index, Sphinx fetches documents from the specified sources, splits the text into words, and does case folding so that "Abc", "ABC" and "abc" would be treated as the same word (or, to be pedantic, term).

To do that properly, Sphinx needs to know

  • what encoding is the source text in;
  • what characters are letters and what are not;
  • what letters should be folded to what letters.

This should be configured on a per-index basis using charset_type and charset_table options. With charset_type, one would specify whether the document encoding is single-byte (SBCS) or UTF-8. charset_table would then be used to specify the table which maps letter characters to their case folded versions. The characters which are not in the table are considered to be non-letters and will be treated as word separators when indexing or searching through this index.

Note that while default tables do not include space character (ASCII code 0x20, Unicode U+0020) as a letter, it's in fact perfectly legal to do so. This can be useful, for instance, for indexing tag clouds, so that space-separated word sets would index as a single search query term.

Default tables currently include English and Russian characters. Please do submit your tables for other languages!

3.6. SQL data sources (MySQL, PostgreSQL)

With all the SQL drivers, indexing generally works as follows: a connection to the database is established, the pre-queries (sql_query_pre) are run, the main query (sql_query) is run and the rows it returns are indexed, the post-query (sql_query_post) is run and the connection is closed; then, once the sorting phase is complete, the connection is re-established and the post-index query (sql_query_post_index) is run.

Most options, such as database user/host/password, are straightforward. However, there are a few subtle things, which are discussed in more detail here.

Ranged queries

The main query, which needs to fetch all the documents, can impose a read lock on the whole table and stall the concurrent queries (eg. INSERTs to a MyISAM table), waste a lot of memory for the result set, etc. To avoid this, Sphinx supports so-called ranged queries. With ranged queries, Sphinx first fetches min and max document IDs from the table, and then substitutes different ID intervals into the main query text and runs the modified query to fetch another chunk of documents. Here's an example.

Example 1. Ranged query usage example

# in sphinx.conf

sql_query_range	= SELECT MIN(id),MAX(id) FROM documents
sql_range_step = 1000
sql_query = SELECT * FROM documents WHERE id>=$start AND id<=$end

If the table contains document IDs from 1 to, say, 2345, then sql_query would be run three times:

  1. with $start replaced with 1 and $end replaced with 1000;
  2. with $start replaced with 1001 and $end replaced with 2000;
  3. with $start replaced with 2001 and $end replaced with 2345.

Obviously, that's not much of a difference for a 2,000-row table, but when it comes to indexing a 10-million-row MyISAM table, ranged queries might be of some help.

sql_query_post vs. sql_query_post_index

The difference between the post-query and the post-index query is that the post-query is run immediately after Sphinx has received all the documents, but further indexing may still fail for some other reason. On the contrary, by the time the post-index query gets executed, it is guaranteed that the indexing was successful. The database connection is dropped and re-established because the sorting phase can be very lengthy and would just time out otherwise.

3.7. XMLpipe data source

The XMLpipe data source is designed to enable users to plug data into Sphinx without having to implement new data source drivers themselves.

To use XMLpipe, configure the data source in your configuration file as follows:

source example_xmlpipe_source
{
    type = xmlpipe
    xmlpipe_command = perl /www/mysite.com/bin/sphinxpipe.pl
}

The indexer will run the command specified in xmlpipe_command, and then read, parse and index the data it prints to stdout.

The XMLpipe driver expects the data to be in a special XML format. Here's an example document stream, consisting of two documents:

Example 2. XMLpipe document stream

<document>
<id>123</id>
<group>45</group>
<timestamp>1132223498</timestamp>
<title>test title</title>
<body>
this is my document body
</body>
</document>

<document>
<id>124</id>
<group>46</group>
<timestamp>1132223498</timestamp>
<title>another test</title>
<body>
this is another document
</body>
</document>


At the moment, the driver is using a custom manually written parser which is pretty fast but really strict; so almost all the fields must be present, formatted exactly as in this example, and occur exactly in this order. The only optional field is timestamp; it's set to 1 if it's missing.
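
As an illustration only (this script is not part of the distribution), here is a sketch of what an xmlpipe_command program could look like in PHP. It prints one hardcoded document to stdout in the format shown above; a real script would instead fetch rows from your own data store:

// hypothetical XMLpipe generator sketch (eg. saved as genxml.php and
// hooked up via: xmlpipe_command = php /path/to/genxml.php)
$docs = array (
	array ( "id"=>123, "group"=>45, "timestamp"=>1132223498,
		"title"=>"test title", "body"=>"this is my document body" ),
);

foreach ( $docs as $doc )
{
	// note that the parser is strict: field order must match the example above
	print "<document>\n";
	print "<id>" . $doc["id"] . "</id>\n";
	print "<group>" . $doc["group"] . "</group>\n";
	print "<timestamp>" . $doc["timestamp"] . "</timestamp>\n";
	print "<title>" . $doc["title"] . "</title>\n";
	print "<body>\n" . $doc["body"] . "\n</body>\n";
	print "</document>\n\n";
}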

3.8. Live index updates

There's a frequent situation when the total dataset is too big to be reindexed from scratch often, but the amount of new records is rather small. Example: a forum with 1,000,000 archived posts, but only 1,000 new posts per day.

In this case, "live" (almost real time) index updates could be implemented using so called "main+delta" scheme.

The idea is to set up two sources and two indexes, with one "main" index for the data which only changes rarely (if ever), and one "delta" for the new documents. In the example above, the 1,000,000 archived posts would go to the main index, and the newly inserted 1,000 posts/day would go to the delta index. The delta index could then be reindexed very frequently, and the documents made available to search in a matter of minutes.

Specifying which documents should go to which index, and reindexing the main index, could also be made fully automatic. One option would be to make a counter table which tracks the ID that splits the documents, and update it whenever the main index is reindexed.

Example 3. Fully automated live updates

# in MySQL
CREATE TABLE sph_counter
(
    counter_id INTEGER PRIMARY KEY NOT NULL,
    max_doc_id INTEGER NOT NULL
);

# in sphinx.conf
source main
{
    # ...
    sql_query_pre = REPLACE INTO sph_counter SELECT 1, MAX(id) FROM documents
    sql_query = SELECT id, title, body FROM documents \
        WHERE id<=( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 )
}

source delta : main
{
    sql_query_pre =
    sql_query = SELECT id, title, body FROM documents \
        WHERE id>( SELECT max_doc_id FROM sph_counter WHERE counter_id=1 )
}

index main
{
    source = main
    path = /path/to/main
    # ... all the other settings
}

# note how all other settings are copied from main,
# but source and path are overridden (they MUST be)
index delta : main
{
    source = delta
    path = /path/to/delta
}


4. Searching

4.1. Matching modes

There are the following matching modes available:

  • SPH_MATCH_ALL, matches all query words (default mode);
  • SPH_MATCH_ANY, matches any of the query words;
  • SPH_MATCH_PHRASE, matches query as a phrase, requiring perfect match;
  • SPH_MATCH_BOOLEAN, matches query as a boolean expression (see Section 4.2, “Boolean query syntax”);
  • SPH_MATCH_EXTENDED, matches query as an expression in Sphinx internal query language (see Section 4.3, “Extended query syntax”).

4.2. Boolean query syntax

Boolean queries allow the following special operators to be used:

  • explicit operator AND:
    hello & world
  • operator OR:
    hello | world
  • operator NOT:
    hello -world
    hello !world
    
  • grouping:
    ( hello world )

Here's an example query which uses all these operators:

Example 4. Boolean query example

( cat -dog ) | ( cat -mouse )


There is always an implicit AND operator, so the "hello world" query actually means "hello & world".

OR operator precedence is higher than AND, so "looking for cat | dog | mouse" means "looking for ( cat | dog | mouse )" and not "(looking for cat) | dog | mouse".

Queries like "-dog", which implicitly include all documents from the collection, cannot be evaluated. This is both for technical and performance reasons. Technically, Sphinx does not always keep a list of all IDs. Performance-wise, when the collection is huge (ie. 10-100M documents), evaluating such queries could take a very long time.

4.3. Extended query syntax

Extended queries allow the following special operators to be used:

  • operator OR:
    hello | world
  • operator NOT:
    hello -world
    hello !world
    
  • field search operator:
    @title hello @body world
  • phrase search operator:
    "hello world"
  • proximity search operator:
    "hello world"~10

Here's an example query which uses all these operators:

Example 5. Extended query example

"hello world" @title "example program"~5 @body python -(php|perl)


There is always an implicit AND operator, so "hello world" means that both "hello" and "world" must be present in the matching document.

OR operator precedence is higher than AND, so "looking for cat | dog | mouse" means "looking for ( cat | dog | mouse )" and not "(looking for cat) | dog | mouse".

Proximity distance is specified in words, adjusted for word count, and applies to all words within quotes. For instance, the "cat dog mouse"~5 query means that there must be a span of less than 8 words which contains all 3 words, ie. the "CAT aaa bbb ccc DOG eee fff MOUSE" document will not match this query, because this span is exactly 8 words long.

Nested brackets, as in queries like

aaa | ( bbb ccc | ( ddd eee ) )

are not allowed yet, but this will be fixed.

Negation (ie. operator NOT) is only allowed on top level and not within brackets (ie. groups). This isn't going to change, because supporting nested negations would make phrase ranking implementation way too complicated.

4.4. Weighting

Specific weighting function (currently) depends on the search mode.

There are two major parts which are used in the weighting functions:

  1. phrase rank,
  2. statistical rank.

Phrase rank is based on the length of the longest common subsequence (LCS) of search words between the document body and the query phrase. So if there's a perfect phrase match in some document then its phrase rank would be the highest possible, and equal to the query word count.

Statistical rank is based on the classic BM25 function which only takes word frequencies into account. If the word is rare in the whole database (ie. low frequency over the document collection) or mentioned a lot in a specific document (ie. high frequency over the matching document), it receives more weight. The final BM25 weight is a floating point number between 0 and 1.

In all modes, per-field weighted phrase ranks are computed as a product of the LCS multiplied by the per-field weight specified by the user. Per-field weights are integers, default to 1, and cannot be set lower than 1.

In SPH_MATCH_BOOLEAN mode, no weighting is performed at all, every match weight is set to 1.

In SPH_MATCH_ALL and SPH_MATCH_PHRASE modes, final weight is a sum of weighted phrase ranks.

In SPH_MATCH_ANY mode, the idea is essentially the same, but it also adds a count of matching words in each field. Before that, weighted phrase ranks are additionally multiplied by a value big enough to guarantee that a higher phrase rank in any field will make the match ranked higher, even if its field weight is low.

In SPH_MATCH_EXTENDED mode, final weight is a sum of weighted phrase ranks and BM25 weight, multiplied by 1000 and rounded to integer.

This is going to be changed, so that MATCH_ALL and MATCH_ANY modes use BM25 weights as well. This would improve search results in those match spans where phrase ranks are equal; this is especially useful for 1-word queries.

The key idea (in all modes, besides boolean) is that better subphrase matches are ranked higher, and perfect matches are pulled to the top. Author's experience is that this phrase proximity based ranking provides noticeably better search quality than any statistical scheme alone (such as BM25, which is commonly used in other search engines).

4.5. Sorting modes

There are the following result sorting modes available:

  • SPH_SORT_RELEVANCE, sorts by relevance in descending order (best matches first);
  • SPH_SORT_ATTR_DESC, sorts by attribute in descending order (bigger attribute values first);
  • SPH_SORT_ATTR_ASC, sorts by attribute in ascending order (smaller attribute values first);
  • SPH_SORT_TIME_SEGMENTS, sorts by time segments (last hour/day/week/month) in descending order, and then by relevance in descending order;
  • SPH_SORT_EXTENDED, sorts by SQL-like expression.

SPH_SORT_ATTR_ASC, SPH_SORT_ATTR_DESC and SPH_SORT_TIME_SEGMENTS modes require an attribute to sort by to be specified.

SPH_SORT_TIME_SEGMENTS mode

In SPH_SORT_TIME_SEGMENTS mode, attribute values are split into so-called time segments, and then sorted by time segment first, and by relevance second.

The segments are calculated according to the current timestamp at the time when the search is performed, so the results would change over time. The segments are as follows:

  • last hour,
  • last day,
  • last week,
  • last month,
  • last 3 months,
  • everything else.

These segments are hardcoded, but it is trivial to change them if necessary.

This mode was added to support searching through blogs, news headlines, etc. When using time segments, recent records would be ranked higher because of the segment, but within the same segment, more relevant records would be ranked higher - unlike sorting by just the timestamp attribute, which would not take relevance into account at all.

SPH_SORT_EXTENDED mode

In SPH_SORT_EXTENDED mode, you would specify an SQL-like sort expression to sort by:

@relevance DESC, price ASC, @id DESC

Both internal attributes (their names start with @) and externally specified user attributes (their names are used as is) can be used. In the example above, @relevance and @id are internal attributes and price is user-specified.

Known internal attributes are:

  • @id (match ID)
  • @rank (match weight)
  • @weight (match weight)
  • @relevance (match weight)

@rank, @weight and @relevance are just aliases; there's no actual difference between them.
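
Through the API, this mode is set with SetSortMode() (see Section 5.3.3, “SetSortMode”). For instance, a one-line PHP sketch matching the example expression above ("price" is assumed to be a user-specified attribute):

$cl->SetSortMode ( SPH_SORT_EXTENDED, "@relevance DESC, price ASC, @id DESC" );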

4.6. Grouping (clustering) search results

Sometimes it could be useful to group (or in other terms, cluster) search results and/or count per-group match counts - for instance, to draw a nice graph of how many matching blog posts there were per month; or to group Web search results by site; or to group matching forum posts by author; etc.

In theory, this could be performed by doing only the full-text search in Sphinx and then using found IDs to group on SQL server side. However, in practice doing this with a big result set (10K-10M matches) would typically kill performance.

To avoid that, Sphinx offers so-called grouping mode. It is enabled with SetGroupBy() API call. When grouping, all matches are assigned to different groups based on group-by value. This value is computed from specified attribute using one of the following built-in functions:

  • SPH_GROUPBY_DAY, extracts year, month and day in YYYYMMDD format from timestamp;
  • SPH_GROUPBY_WEEK, extracts year and first day of the week number (counting from year start) in YYYYNNN format from timestamp;
  • SPH_GROUPBY_MONTH, extracts month in YYYYMM format from timestamp;
  • SPH_GROUPBY_YEAR, extracts year in YYYY format from timestamp;
  • SPH_GROUPBY_ATTR, uses attribute value itself for grouping.

The final search result set then contains one best match per group. The grouping function value and per-group match count are returned along with the matches as "virtual" attributes named @group and @count respectively.

The result set is sorted by group-by sorting clause, with the syntax similar to SPH_SORT_EXTENDED sorting clause syntax. In addition to @id and @weight, group-by sorting clause may also include:

  • @group (groupby function value),
  • @count (amount of matches in group).

The default mode is to sort by groupby value in descending order, ie. by "@group desc".

On completion, the total_found result parameter would contain the total amount of matching groups over the whole index.

WARNING: grouping is done in fixed memory and thus its results are only approximate; so there might be more groups reported in total_found than actually present. @count might also be underestimated. To reduce inaccuracy, one should raise max_matches. If max_matches allows storing all the found groups, the results will be 100% correct.

For example, if sorting by relevance and grouping by "published" attribute with SPH_GROUPBY_DAY function, then the result set will contain

  • one most relevant match per each day when there were any matches published,
  • with day number and per-day match count attached,
  • sorted by day number in descending order (ie. recent days first).

4.7. Distributed searching

To scale well, Sphinx has distributed searching capabilities. Distributed searching is useful to improve query latency (ie. search time) and throughput (ie. max queries/sec) in multi-server, multi-CPU or multi-core environments. This is essential for applications which need to search through huge amounts of data (ie. billions of records and terabytes of text).

The key idea is to horizontally partition (HP) the searched data across search nodes and then process it in parallel.

Partitioning is done manually. You would

  • set up several instances of Sphinx programs (indexer and searchd) on different servers;
  • make the instances index (and search) different parts of data;
  • configure a special distributed index on some of the searchd instances;
  • and query this index.

This index only contains references to other local and remote indexes - so it cannot be directly reindexed; you should instead reindex the indexes that it references.

When searchd receives a query against distributed index, it does the following:

  1. connects to configured remote agents;
  2. issues the query;
  3. sequentially searches configured local indexes (while the remote agents are searching);
  4. retrieves remote agents' search results;
  5. merges all the results together, removing the duplicates;
  6. sends the merged results to the client.

From the application's point of view, there are no differences between a usual and a distributed index at all.

Any searchd instance could serve both as a master (which aggregates the results) and a slave (which only does local searching) at the same time. This has a number of uses:

  1. every machine in a cluster could serve as a master which searches the whole cluster, and search requests could be balanced between masters to achieve a kind of HA (high availability) in case any of the nodes fails;
  2. if running within a single multi-CPU or multi-core machine, there would be only 1 searchd instance querying itself as an agent and thus utilizing all CPUs/cores.

It is scheduled to implement better HA support which would allow specifying which agents mirror each other, doing health checks, keeping track of alive agents, load-balancing requests, etc.

4.8. searchd query log format

searchd logs all successfully executed search queries into the query log file. Here's an example:

[Fri Jun 29 21:17:58 2007] 0.004 sec [all/0/rel 35254 (0,20)] [lj] test
[Fri Jun 29 21:20:34 2007] 0.024 sec [all/0/rel 19886 (0,20) @channel_id] [lj] test

This log format is as follows:

[query-date] query-time [match-mode/filters-count/sort-mode
    total-matches (offset,limit) @groupby-attr] [index-name] query

Match mode can take one of the following values:

  • "all" for SPH_MATCH_ALL mode;
  • "any" for SPH_MATCH_ANY mode;
  • "phr" for SPH_MATCH_PHRASE mode;
  • "bool" for SPH_MATCH_BOOLEAN mode;
  • "ext" for SPH_MATCH_EXTENDED mode.

Sort mode can take one of the following values:

  • "rel" for SPH_SORT_RELEVANCE mode;
  • "attr-" for SPH_SORT_ATTR_DESC mode;
  • "attr+" for SPH_SORT_ATTR_ASC mode;
  • "tsegs" for SPH_SORT_TIME_SEGMENTS mode;
  • "ext" for SPH_SORT_EXTENDED mode.

5. API reference

There are a number of native searchd client API implementations for Sphinx. As of the time of this writing, we officially support our own PHP, Python, and Java implementations. There are also third-party free, open-source API implementations for Perl, Ruby, and C++.

The reference API implementation is in PHP, because (we believe) Sphinx is more widely used with PHP than with any other language. This reference documentation is in turn based on the reference PHP API, and all code samples in this section will be given in PHP.

However, all other APIs provide the same methods and implement the very same network protocol. Therefore the documentation does apply to them as well. There might be minor differences as to the method naming conventions or specific data structures used. But the provided functionality must not differ across languages.

5.1. General API functions

5.1.1. GetLastError

Prototype: function GetLastError()

Returns the last error message, as a string, in human readable format. If there were no errors during the previous API call, an empty string is returned.

You should call it when any other function (such as Section 5.6.1, “Query”) fails (typically, the failing function returns false). The returned string will contain the error description.

The error message is not reset by this call; so you can safely call it several times if needed.
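
For example, a typical error-checking pattern might look like this (the index name here is just a placeholder):

$result = $cl->Query ( "test query", "test1" );
if ( $result===false )
{
	// the query failed; report why
	print "Query failed: " . $cl->GetLastError() . "\n";
}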

5.1.2. GetLastWarning

Prototype: function GetLastWarning ()

Returns the last warning message, as a string, in human readable format. If there were no warnings during the previous API call, an empty string is returned.

You should call it to verify whether your request (such as Section 5.6.1, “Query”) was completed but with warnings. For instance, a search query against a distributed index might complete successfully even if several remote agents timed out. In that case, a warning message would be produced.

The warning message is not reset by this call; so you can safely call it several times if needed.
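
For example, here's a sketch of checking for warnings after a successful query (the "dist1" index name is just a placeholder for some distributed index):

$result = $cl->Query ( "test query", "dist1" );
if ( $result!==false && $cl->GetLastWarning() )
{
	// the query completed, but eg. some remote agents may have timed out
	print "WARNING: " . $cl->GetLastWarning() . "\n";
}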

5.1.3. SetServer

Prototype: function SetServer ( $host, $port )

Sets searchd host name and TCP port. All subsequent requests will use the new host and port settings. Default host and port are "localhost" and 3312, respectively.
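
For example, to point the client at a searchd instance running on another box (the host name here is just a placeholder):

$cl = new SphinxClient ();
$cl->SetServer ( "searchbox.example.com", 3312 );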

5.1.4. SetArrayResult

Prototype: function SetArrayResult ( $arrayresult )

PHP specific. Controls matches format in the search results set (whether matches should be returned as an array or a hash).

The $arrayresult argument must be boolean. If $arrayresult is false (the default mode), matches will be returned in PHP hash format with document IDs as keys, and other information (weight, attributes) as values. If $arrayresult is true, matches will be returned as a plain array with complete per-match information including document ID.

Introduced along with GROUP BY support on MVA attributes. Group-by-MVA result sets may contain duplicate document IDs. Thus they need to be returned as plain arrays, because hashes will only keep one entry per document ID.
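
Here's a sketch of how iteration over the matches differs between the two formats (error handling omitted for brevity; assuming the per-match "id" key in array mode):

// default (hash) format: the document ID is the key
$cl->SetArrayResult ( false );
$result = $cl->Query ( "test query" );
foreach ( $result["matches"] as $docid=>$match )
	print "$docid: weight=" . $match["weight"] . "\n";

// array format: the document ID is stored inside each match
$cl->SetArrayResult ( true );
$result = $cl->Query ( "test query" );
foreach ( $result["matches"] as $match )
	print $match["id"] . ": weight=" . $match["weight"] . "\n";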

5.2. General query settings

5.2.1. SetLimits

Prototype: function SetLimits ( $offset, $limit, $max_matches=0, $cutoff=0 )

Sets offset into server-side result set ($offset) and amount of matches to return to client starting from that offset ($limit). Can additionally control maximum server-side result set size for current query ($max_matches) and the threshold amount of matches to stop searching at ($cutoff). All parameters must be non-negative integers.

The first two parameters to SetLimits() are identical in behaviour to the MySQL LIMIT clause. They instruct searchd to return at most $limit matches starting from match number $offset. The default offset and limit settings are 0 and 20, that is, to return the first 20 matches.

The max_matches setting controls how many matches searchd will keep in RAM while searching. All matching documents will be normally processed, ranked, filtered, and sorted even if max_matches is set to 1. But only the best N documents are stored in memory at any given moment for performance and RAM usage reasons, and this setting controls that N. Note that there are two places where the max_matches limit is enforced. The per-query limit is controlled by this API call, but there is also a per-server limit controlled by the max_matches setting in the config file. To prevent RAM usage abuse, the server will not allow setting the per-query limit higher than the per-server limit.

You can't retrieve more than max_matches matches to the client application. The default limit is set to 1000. Normally, you should not need to go over this limit. One thousand records is enough to present to the end user. And if you're thinking about pulling the results to the application for further sorting or filtering, that would be much more efficient if performed on the Sphinx side.

The $cutoff setting is intended for advanced performance control. It tells searchd to forcibly stop the search query once $cutoff matches have been found and processed.
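
For example, a sketch of fetching the second page of 20 results (the index name is just a placeholder):

// page 2: skip the first 20 matches, return the next 20
$cl->SetLimits ( 20, 20 );
$result = $cl->Query ( "test query", "test1" );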

5.2.2. SetMaxQueryTime

Prototype: function SetMaxQueryTime ( $max_query_time )

Sets the maximum search query time, in milliseconds. The parameter must be a non-negative integer. The default value is 0 which means "do not limit".

Similar to $cutoff setting from Section 5.2.1, “SetLimits”, but limits elapsed query time instead of processed matches count. Local search queries will be stopped once that much time has elapsed. Note that if you're performing a search which queries several local indexes, this limit applies to each index separately.

5.3. Full-text search query settings

5.3.1. SetMatchMode

Prototype: function SetMatchMode ( $mode )

Sets full-text query matching mode, as described in Section 4.1, “Matching modes”. Parameter must be a constant specifying one of the known modes.

WARNING: (PHP specific) you must not enclose the matching mode constant name in quotes; that syntax specifies a string and is incorrect:

$cl->SetMatchMode ( "SPH_MATCH_ANY" ); // INCORRECT! will not work as expected
$cl->SetMatchMode ( SPH_MATCH_ANY ); // correct, works OK

5.3.2. SetRankingMode

Prototype: function SetRankingMode ( $ranker )

Sets ranking mode. Only available in SPH_MATCH_EXTENDED2 matching mode at the time of this writing. Parameter must be a constant specifying one of the known modes.

By default, Sphinx computes two factors which contribute to the final match weight. The major part is query phrase proximity to document text. The minor part is so-called BM25 statistical function, which varies from 0 to 1 depending on the keyword frequency within document (more occurrences yield higher weight) and within the whole index (more rare keywords yield higher weight).

However, in some cases you'd want to compute weight differently - or maybe avoid computing it at all for performance reasons because you're sorting the result set by something else anyway. This can be accomplished by setting the appropriate ranking mode.

Currently implemented modes are:

  • SPH_RANK_PROXIMITY_BM25, default ranking mode which uses and combines both phrase proximity and BM25 ranking.
  • SPH_RANK_BM25, statistical ranking mode which uses BM25 ranking only (similar to most other full-text engines). This mode is faster but may result in worse quality on queries which contain more than 1 keyword.
  • SPH_RANK_NONE, disabled ranking mode. This mode is the fastest. It is essentially equivalent to boolean searching. A weight of 1 is assigned to all matches.
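
For example, here's a sketch of a "filtering only" query which skips ranking entirely and simply sorts by an attribute (the attribute and index names are just placeholders):

$cl->SetMatchMode ( SPH_MATCH_EXTENDED2 ); // ranking modes require extended2 matching
$cl->SetRankingMode ( SPH_RANK_NONE );     // do not compute match weights at all
$cl->SetSortMode ( SPH_SORT_ATTR_DESC, "post_date" );
$result = $cl->Query ( "test query", "test1" );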

5.3.3. SetSortMode

Prototype: function SetSortMode ( $mode, $sortby="" )

Sets the matches sorting mode, as described in Section 4.5, “Sorting modes”. Parameter must be a constant specifying one of the known modes.

WARNING: (PHP specific) you must not enclose the sorting mode constant name in quotes; that syntax specifies a string and is incorrect:

$cl->SetSortMode ( "SPH_SORT_ATTR_DESC" ); // INCORRECT! will not work as expected
$cl->SetSortMode ( SPH_SORT_ATTR_ASC ); // correct, works OK

5.3.4. SetWeights

Prototype: function SetWeights ( $weights )

Binds per-field weights in the order of appearance in the index. DEPRECATED, use Section 5.3.5, “SetFieldWeights” instead.

5.3.5. SetFieldWeights

Prototype: function SetFieldWeights ( $weights )

Binds per-field weights by name. Parameter must be a hash (associative array) mapping string field names to integer weights.

Match ranking can be affected by per-field weights. For instance, see Section 4.4, “Weighting” for an explanation of how phrase proximity ranking is affected. This call lets you specify what non-default weights to assign to different full-text fields.

The weights must be positive 32-bit integers. The final weight will be a 32-bit integer too. Default weight value is 1. Unknown field names will be silently ignored.

There is no enforced limit on the maximum weight value at the moment. However, beware that if you set it too high you can start hitting 32-bit wraparound issues. For instance, if you set a weight of 10,000,000 and search in extended mode, then the maximum possible weight will be equal to 10 million (your weight) times 1 thousand (internal BM25 scaling factor, see Section 4.4, “Weighting”) times 1 or more (phrase proximity rank). The result is at least 10 billion, which does not fit in 32 bits and will wrap around, producing unexpected results.
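
For example (field names here match the forum posts example from Section 3.2, “Attributes”):

// title matches are considered 5 times more important than content matches
$cl->SetFieldWeights ( array ( "title"=>5, "content"=>1 ) );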

5.3.6. SetIndexWeights

Prototype: function SetIndexWeights ( $weights )

Sets per-index weights, and enables weighted summing of match weights across different indexes. Parameter must be a hash (associative array) mapping string index names to integer weights. The default is an empty array, which means to disable weighted summing.

When a match with the same document ID is found in several different local indexes, by default Sphinx simply chooses the match from the index specified last in the query. This is to support searching through partially overlapping index partitions.

However in some cases the indexes are not just partitions, and you might want to sum the weights across the indexes instead of picking one. SetIndexWeights() lets you do that. With summing enabled, the final match weight in the result set will be computed as a sum of match weights coming from the given indexes multiplied by the respective per-index weights specified in this call. Ie. if the document 123 is found in index A with the weight of 2, and also in index B with the weight of 3, and you called SetIndexWeights ( array ( "A"=>100, "B"=>10 ) ), the final weight returned to the client will be 2*100+3*10 = 230.
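
The scenario from the previous paragraph would look as follows in PHP:

// document found in both A and B: final weight = 2*100 + 3*10 = 230
$cl->SetIndexWeights ( array ( "A"=>100, "B"=>10 ) );
$result = $cl->Query ( "test query", "A B" );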

5.4. Result set filtering settings

5.4.1. SetIDRange

Prototype: function SetIDRange ( $min, $max )

Sets an accepted range of document IDs. Parameters must be integers. Defaults are 0 and 0; that combination means to not limit by range.

After this call, only those records that have document ID between $min and $max (including IDs exactly equal to $min or $max) will be matched.

5.4.2. SetFilter

Prototype: function SetFilter ( $attribute, $values, $exclude=false )

Adds a new integer values set filter.

On this call, an additional filter is added to the existing list of filters. $attribute must be a string with the attribute name. $values must be a plain array containing integer values. $exclude must be a boolean value; it controls whether to accept the matching documents (default mode, when $exclude is false) or reject them.

Only those documents where $attribute column value stored in the index matches any of the values from $values array will be matched (or rejected, if $exclude is true).

5.4.3. SetFilterRange

Prototype: function SetFilterRange ( $attribute, $min, $max, $exclude=false )

Adds a new integer range filter.

On this call, an additional filter is added to the existing list of filters. $attribute must be a string with the attribute name. $min and $max must be integers that define the acceptable attribute values range (including the boundaries). $exclude must be a boolean value; it controls whether to accept the matching documents (default mode, when $exclude is false) or reject them.

Only those documents where $attribute column value stored in the index is between $min and $max (including values that are exactly equal to $min or $max) will be matched (or rejected, if $exclude is true).
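
For example, a sketch of limiting matches to posts made within a given timestamp range (the attribute name and values are just placeholders):

// only match posts with post_date between these two UNIX timestamps, inclusive
$cl->SetFilterRange ( "post_date", 1167609600, 1199145600 );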

5.4.4. SetFilterFloatRange

Prototype: function SetFilterFloatRange ( $attribute, $min, $max, $exclude=false )

Adds a new float range filter.

On this call, an additional filter is added to the existing list of filters. $attribute must be a string with the attribute name. $min and $max must be floats that define the acceptable attribute values range (including the boundaries). $exclude must be a boolean value; it controls whether to accept the matching documents (default mode, when $exclude is false) or reject them.

Only those documents where $attribute column value stored in the index is between $min and $max (including values that are exactly equal to $min or $max) will be matched (or rejected, if $exclude is true).

5.4.5. SetGeoAnchor

Prototype: function SetGeoAnchor ( $attrlat, $attrlong, $lat, $long )

Sets the anchor point for geosphere distance (geodistance) calculations, and enables them.

$attrlat and $attrlong must be strings that contain the names of latitude and longitude attributes, respectively. $lat and $long are floats that specify anchor point latitude and longitude, in radians.

Once an anchor point is set, you can use the magic "@geodist" attribute name in your filters and/or sorting expressions. Sphinx will compute the geosphere distance between the given anchor point and a point specified by the latitude and longitude attributes from each full-text match, and attach this value to the resulting match. The latitude and longitude values both in SetGeoAnchor and the index attribute data are expected to be in radians. The result will be returned in meters, so a geodistance value of 1000.0 means 1 km. 1 mile is approximately 1609.344 meters.
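
For example, here's a sketch of finding matches within 10 km of a given point (attribute names and coordinates are just placeholders; note the conversion from degrees to radians):

// anchor at the given point; deg2rad() converts degrees to the expected radians
$cl->SetGeoAnchor ( "lat", "long", deg2rad ( 55.75 ), deg2rad ( 37.62 ) );

// only accept matches within 10 km (10000 meters) of the anchor point
$cl->SetFilterFloatRange ( "@geodist", 0.0, 10000.0 );

// closest matches first
$cl->SetSortMode ( SPH_SORT_EXTENDED, "@geodist ASC" );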

5.5. GROUP BY settings

5.5.1. SetGroupBy

Prototype: function SetGroupBy ( $attribute, $func, $groupsort="@group desc" )

Sets grouping attribute, function, and groups sorting mode; and enables grouping (as described in Section 4.6, “Grouping (clustering) search results ”).

$attribute is a string that contains group-by attribute name. $func is a constant that chooses a function applied to the attribute value in order to compute group-by key. $groupsort is a clause that controls how the groups will be sorted. Its syntax is similar to that described in Section 4.5, “SPH_SORT_EXTENDED mode”.

The grouping feature is very similar in nature to the GROUP BY clause in SQL. Results produced by this function call are going to be the same as produced by the following pseudo code:

SELECT ... GROUP BY $func($attribute) ORDER BY $groupsort

Note that it's $groupsort that affects the order of matches in the final result set. The sorting mode (Section 5.3.3, “SetSortMode”) affects the ordering of matches within a group, ie. what match will be selected as the best one from the group. So you can, for instance, order the groups by match count and select the most relevant match within each group at the same time.
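
For example, here's a sketch of the blog-posts-per-day scenario from Section 4.6, “Grouping (clustering) search results” (the attribute and index names are just placeholders):

// group matches by day of the "published" timestamp attribute,
// and order the groups so that the days with the most matches come first
$cl->SetGroupBy ( "published", SPH_GROUPBY_DAY, "@count desc" );
$result = $cl->Query ( "test query", "test1" );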

5.5.2. SetGroupDistinct

Prototype: function SetGroupDistinct ( $attribute )

Sets attribute name for per-group distinct values count calculations. Only available for grouping queries.

$attribute is a string that contains the attribute name. For each group, all values of this attribute will be stored (as RAM limits permit), then the amount of distinct values will be calculated and returned to the client. This feature is similar to COUNT(DISTINCT) clause in standard SQL; so these Sphinx calls:

$cl->SetGroupBy ( "category", SPH_GROUPBY_ATTR, "@count desc" );
$cl->SetGroupDistinct ( "vendor" );

can be expressed using the following SQL clauses:

SELECT id, weight, all-attributes,
	COUNT(DISTINCT vendor) AS @distinct,
	COUNT(*) AS @count
FROM products
GROUP BY category
ORDER BY @count DESC

In the sample pseudo code shown just above, the SetGroupDistinct() call corresponds to the COUNT(DISTINCT vendor) clause only. GROUP BY, ORDER BY, and COUNT(*) clauses are all an equivalent of the SetGroupBy() settings. Both queries will return one matching row for each category. In addition to indexed attributes, matches will also contain the total per-category match count, and the count of distinct vendor IDs within each category.

5.6. Querying

5.6.1. Query

Prototype: function Query ( $query, $index="*" )

Connects to searchd server, runs given search query with current settings, obtains and returns the result set.

$query is a query string. $index is an index name (or names) string. Returns false and sets GetLastError() message on general error. Returns search result set on success.

The default value for $index is "*" which means to query all local indexes. Characters allowed in index names include Latin letters (a-z), numbers (0-9), minus sign (-), and underscore (_); everything else is considered a separator. Therefore, all of the following sample calls are valid and will search the same two indexes:

$cl->Query ( "test query", "main delta" );
$cl->Query ( "test query", "main;delta" );
$cl->Query ( "test query", "main, delta" );

Index specification order matters. If documents with identical IDs are found in two or more indexes, weight and attribute values from the very last matching index will be used for sorting and returning to the client (unless explicitly overridden with Section 5.3.6, “SetIndexWeights”). Therefore, in the example above, matches from the "delta" index will always win over matches from "main".

On success, Query() returns a result set that contains some of the found matches (as requested by Section 5.2.1, “SetLimits”) and additional general per-query statistics. The result set is a hash (PHP specific; other languages might utilize other structures instead of hash) with the following keys and values:

"matches":
Hash which maps found document IDs to another small hash containing document weight and attribute values (or an array of the similar small hashes if Section 5.1.4, “SetArrayResult” was enabled).
"total":
Total amount of matches retrieved on server (ie. to the server side result set) by this query. You can retrieve up to this amount of matches from server for this query text with current query settings.
"total_found":
Total amount of matching documents in the index (that were found and processed on the server).
"words":
Hash which maps query keywords (case-folded, stemmed, and otherwise processed) to a small hash with per-keyword statistics ("docs", "hits").
"error":
Query error message reported by searchd (string, human readable). Empty if there were no errors.
"warning":
Query warning message reported by searchd (string, human readable). Empty if there were no warnings.
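
For example, here's a sketch of running a query and walking the result set (the index name is just a placeholder):

$result = $cl->Query ( "test query", "test1" );
if ( $result===false )
{
	print "Query failed: " . $cl->GetLastError() . "\n";
} else
{
	if ( $result["warning"] )
		print "WARNING: " . $result["warning"] . "\n";

	print $result["total"] . " of " . $result["total_found"] . " matches retrieved:\n";
	if ( !empty($result["matches"]) )
		foreach ( $result["matches"] as $docid=>$match )
			print "$docid: weight=" . $match["weight"] . "\n";
}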

5.6.2. AddQuery

Prototype: function AddQuery ( $query, $index="*" )

Adds an additional query with current settings to the multi-query batch. $query is a query string. $index is an index name (or names) string. Returns an index into the results array returned from Section 5.6.3, “RunQueries”.

Batch queries (or multi-queries) enable searchd to perform internal optimizations if possible. They also reduce network connection overheads and search process creation overheads in all cases. They do not result in any additional overheads compared to simple queries. Thus, if you run several different queries from your web page, you should always consider using multi-queries.

For instance, running the same full-text query but with different sorting or group-by settings will enable searchd to perform expensive full-text search and ranking operation only once, but compute multiple group-by results from its output.

This can be a big saver when you need to display not just plain search results but also some per-category counts, such as the number of products grouped by vendor. Without multi-query, you would have to run several queries which perform essentially the same search and retrieve the same matches, but create result sets differently. With multi-query, you simply pass all these queries in a single batch and Sphinx optimizes the redundant full-text search internally.

AddQuery() internally saves full current settings state along with the query, and you can safely change them afterwards for subsequent AddQuery() calls. Already added queries will not be affected; there's actually no way to change them at all. Here's an example:

$cl->SetSortMode ( SPH_SORT_RELEVANCE );
$cl->AddQuery ( "hello world", "documents" );

$cl->SetSortMode ( SPH_SORT_ATTR_DESC, "price" );
$cl->AddQuery ( "ipod", "products" );

$cl->AddQuery ( "harry potter", "books" );

$results = $cl->RunQueries ();

With the code above, the 1st query will search for "hello world" in the "documents" index and sort results by relevance, the 2nd query will search for "ipod" in the "products" index and sort results by price, and the 3rd query will search for "harry potter" in the "books" index while still sorting by price. Note that the 2nd SetSortMode() call does not affect the first query (because it's already added) but does affect both subsequent queries.

AddQuery() does not modify the current state. That is, all current sorting, filtering, and grouping settings will not be affected by this call; so subsequent queries can easily reuse current query settings.

AddQuery() returns an index into an array of results that will be returned from the RunQueries() call. It is simply a sequentially increasing 0-based integer, ie. the first call will return 0, the second will return 1, and so on. Just a small helper so you won't have to track the indexes manually if you need them.
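
For instance, the returned indexes can be saved and later used to pick the proper result set out of the array returned by RunQueries(); this is just a sketch reusing the index names from the example above:

$first = $cl->AddQuery ( "hello world", "documents" );
$second = $cl->AddQuery ( "ipod", "products" );

$results = $cl->RunQueries ();
$helloResults = $results[$first];    // result set for the "hello world" query
$ipodResults = $results[$second];    // result set for the "ipod" query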

5.6.3. RunQueries

Prototype: function RunQueries ()

Connects to searchd, runs the batch of all queries added using AddQuery(), obtains and returns the result sets. Returns false and sets the GetLastError() message on general error (such as a network I/O failure). Returns a plain array of result sets on success.

Each result set in the returned array is exactly the same as the result set returned from Query().

Note that the batch query request itself almost always succeeds - unless there's a network error, blocking index rotation in progress, or another general failure which prevents the whole request from being processed.

However, individual queries within the batch might very well fail. In this case, their respective result sets will contain a non-empty "error" message, but no matches or query statistics. In the extreme case, all queries within the batch could fail. There will still be no general error reported, because the API was able to successfully connect to searchd, submit the batch, and receive the results - but every result set will have a specific error message.
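
A minimal sketch of per-query error checking for a batch could therefore look like this (assuming the queries were added with AddQuery() beforehand):

$results = $cl->RunQueries ();
if ( $results===false )
	die ( "batch failed: " . $cl->GetLastError() . "\n" );

foreach ( $results as $i=>$res )
{
	if ( $res["error"] )
	{
		print "query $i failed: " . $res["error"] . "\n";
		continue;
	}
	print "query $i returned " . $res["total_found"] . " matches\n";
}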

5.6.4. ResetFilters

Prototype: function ResetFilters ()

Clears all currently set filters.

This call is normally only required when using multi-queries. You might want to set different filters for different queries in the batch. To do that, you should call ResetFilters() and add new filters using the respective calls.
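
For example, a sketch of a batch that searches the same index with two different filters might look as follows (the attribute and index names are illustrative):

$cl->SetFilter ( "group_id", array(1) );
$cl->AddQuery ( "test", "products" );    // only group 1

$cl->ResetFilters ();
$cl->SetFilter ( "group_id", array(2) );
$cl->AddQuery ( "test", "products" );    // only group 2

$results = $cl->RunQueries ();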

5.6.5. ResetGroupBy

Prototype: function ResetGroupBy ()

Clears all currently set group-by settings, and disables group-by.

This call is normally only required when using multi-queries. You can change individual group-by settings using SetGroupBy() and SetGroupDistinct() calls, but you can not disable group-by using those calls. ResetGroupBy() fully resets previous group-by settings and disables group-by mode in the current state, so that subsequent AddQuery() calls can perform non-grouping searches.
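
For example, here is a sketch of a batch that runs both a grouped and a plain version of the same query (the attribute and index names are illustrative):

$cl->SetGroupBy ( "vendor", SPH_GROUPBY_ATTR );
$cl->AddQuery ( "ipod", "products" );    // per-vendor counts

$cl->ResetGroupBy ();
$cl->AddQuery ( "ipod", "products" );    // plain, non-grouped matches

$results = $cl->RunQueries ();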

5.7. Additional functionality

5.7.1. BuildExcerpts

Prototype: function BuildExcerpts ( $docs, $index, $words, $opts=array() )

Excerpts (snippets) builder function. Connects to searchd, asks it to generate excerpts (snippets) from given documents, and returns the results.

$docs is a plain array of strings that carry the documents' contents. $index is an index name string. Different settings (such as charset, morphology, wordforms) from given index will be used. $words is a string that contains the keywords to highlight. They will be processed with respect to index settings. For instance, if English stemming is enabled in the index, "shoes" will be highlighted even if keyword is "shoe". $opts is a hash which contains additional optional highlighting parameters:

"before_match":
A string to insert before a keyword match. Default is "<b>".
"after_match":
A string to insert after a keyword match. Default is "</b>".
"chunk_separator":
A string to insert between snippet chunks (passages). Default is " ... ".
"limit":
Maximum snippet size, in symbols (codepoints). Integer, default is 256.
"around":
How many words to pick around each matching keyword block. Integer, default is 5.
"exact_phrase":
Whether to highlight exact query phrase matches only instead of individual keywords. Boolean, default is false.
"single_passage":
Whether to extract the single best passage only. Boolean, default is false.

Returns false on failure. Returns a plain array of strings with excerpts (snippets) on success.
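
Here is a short usage sketch; the index name "test1" and the highlighting markup are illustrative, and the option names follow the list above:

$docs = array
(
	"this is my test text to be highlighted",
	"this is another test text to be highlighted"
);
$opts = array
(
	"before_match"  => "<span class=\"match\">",
	"after_match"   => "</span>",
	"around"        => 3,
	"limit"         => 200
);

$res = $cl->BuildExcerpts ( $docs, "test1", "test text", $opts );
if ( $res===false )
	die ( "excerpts failed: " . $cl->GetLastError() . "\n" );

foreach ( $res as $n=>$snippet )
	print "n=$n, snippet=$snippet\n";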

5.7.2. UpdateAttributes

Prototype: function UpdateAttributes ( $index, $attrs, $values )

Instantly updates given attribute values in given documents. Returns number of actually updated documents (0 or more) on success, or -1 on failure.

$index is a name of the index (or indexes) to be updated. $attrs is a plain array with string attribute names, listing attributes that are updated. $values is a hash where key is document ID, and value is a plain array of new attribute values.

$index can be either a single index name or a list, like in Query(). Unlike Query(), wildcard is not allowed and all the indexes to update must be specified explicitly. The list of indexes can include distributed index names. Updates on distributed indexes will be pushed to all agents.

The updates only work with the docinfo=extern storage strategy. They are very fast because they work fully in RAM, but they can also be made persistent: updates are saved to disk on a clean searchd shutdown initiated by a SIGTERM signal.

Usage example:

$cl->UpdateAttributes ( "test1", array("group_id"), array(1=>array(456)) );
$cl->UpdateAttributes ( "products", array ( "price", "amount_in_stock" ),
	array ( 1001=>array(123,5), 1002=>array(37,11), 1003=>array(25,129) ) );

The first sample statement will update document 1 in index "test1", setting "group_id" to 456. The second one will update documents 1001, 1002 and 1003 in index "products". For document 1001, the new price will be set to 123 and the new amount in stock to 5; for document 1002, the new price will be 37 and the new amount will be 11; etc.
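
Since -1 signals a failure, it makes sense to check the return value; here is a sketch, assuming GetLastError() carries the failure reason as with other calls:

$updated = $cl->UpdateAttributes ( "test1", array("group_id"), array(1=>array(456)) );
if ( $updated<0 )
	print "update failed: " . $cl->GetLastError() . "\n";
else
	print "$updated document(s) updated\n";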

6. MySQL storage engine (SphinxSE)

6.1. SphinxSE overview

SphinxSE is a MySQL storage engine which can be compiled into MySQL server 5.x using its pluggable architecture. It is not available for the MySQL 4.x series. It requires MySQL 5.0.22 or higher in the 5.0.x series, or MySQL 5.1.12 or higher in the 5.1.x series.

Despite the name, SphinxSE does not actually store any data itself. It is a built-in client which allows the MySQL server to talk to searchd, run search queries, and obtain search results. All indexing and searching happen outside MySQL.

Obvious SphinxSE applications include:

  • easier porting of MySQL FTS applications to Sphinx;
  • allowing Sphinx use with programming languages for which native APIs are not available yet;
  • optimizations when additional Sphinx result set processing on MySQL side is required (eg. JOINs with original document tables, additional MySQL-side filtering, etc).

6.2. Installing SphinxSE

You will need to obtain a copy of the MySQL sources, prepare them, and then recompile the MySQL binary. MySQL sources (mysql-5.x.yy.tar.gz) can be obtained from the dev.mysql.com Web site.

For some MySQL versions, there are delta tarballs with already prepared source versions available from the Sphinx Web site. After unpacking one of those over the original sources, MySQL will be ready to be configured and built with Sphinx support.

If such a tarball is not available, or does not work for you for any reason, you will have to prepare the sources manually. You will need the GNU Autotools framework (autoconf, automake and libtool) installed to do that.

6.2.1. Compiling MySQL 5.0.x with SphinxSE

Skip steps 1-3 if using an already prepared delta tarball.

  1. copy the sphinx.5.0.yy.diff patch file into the MySQL sources directory and run

    patch -p1 < sphinx.5.0.yy.diff
    

    If there's no .diff file for the exact version you need to build, try applying the .diff with the closest version numbers. It is important that the patch applies with no rejects.

  2. in MySQL sources directory, run
    sh BUILD/autorun.sh
    
  3. in the MySQL sources directory, create a sql/sphinx directory and copy all files from the mysqlse directory in the Sphinx sources there. Example:
    cp -R /root/builds/sphinx-0.9.7/mysqlse /root/builds/mysql-5.0.24/sql/sphinx
    
  4. configure MySQL and enable Sphinx engine:
    ./configure --with-sphinx-storage-engine
    
  5. build and install MySQL:
    make
    make install
    

6.2.2. Compiling MySQL 5.1.x with SphinxSE

Skip steps 1-2 if using an already prepared delta tarball.

  1. in the MySQL sources directory, create a storage/sphinx directory and copy all files from the mysqlse directory in the Sphinx sources there. Example:
    cp -R /root/builds/sphinx-0.9.7/mysqlse /root/builds/mysql-5.1.14/storage/sphinx
    
  2. in MySQL sources directory, run
    sh BUILD/autorun.sh
    
  3. configure MySQL and enable Sphinx engine:
    ./configure --with-plugins=sphinx
    
  4. build and install MySQL:
    make
    make install
    

6.2.3. Checking SphinxSE installation

To check whether SphinxSE has been successfully compiled into MySQL, launch the newly built server, run the mysql client and issue a SHOW ENGINES query. You should see a list of all available engines. Sphinx should be present and the "Support" column should contain "YES":
     
mysql> show engines;
+------------+----------+----------------------------------------------------------------+
| Engine     | Support  | Comment                                                        |
+------------+----------+----------------------------------------------------------------+
| MyISAM     | DEFAULT  | Default engine as of MySQL 3.23 with great performance         |
  ...
| SPHINX     | YES      | Sphinx storage engine                                          |
  ...
+------------+----------+----------------------------------------------------------------+
13 rows in set (0.00 sec)    

6.3. Using SphinxSE

To search via SphinxSE, you will need to create a special ENGINE=SPHINX "search table", and then SELECT from it with the full-text query put into the WHERE clause for the query column.

Let's begin with an example create statement and search query:

CREATE TABLE t1
(
    id          INTEGER NOT NULL,
    weight      INTEGER NOT NULL,
    query       VARCHAR(3072) NOT NULL,
    group_id    INTEGER,
    INDEX(query)
) ENGINE=SPHINX CONNECTION="sphinx://localhost:3312/test";

SELECT * FROM t1 WHERE query='test it;mode=any';

The first 3 columns of the search table must be INTEGER, INTEGER and VARCHAR, which will be mapped to document ID, match weight and search query respectively. The query column must be indexed; all the others must be kept unindexed. Column names are ignored, so you can use arbitrary ones.

Additional columns must be either INTEGER or TIMESTAMP. They will be bound to the attributes provided in the Sphinx result set by name, so their names must match the attribute names specified in sphinx.conf. If there's no such attribute name in the Sphinx search results, the column will have NULL values.

Special "virtual" attributes names can also be bound to SphinxSE columns. _sph_ needs to be used instead of @ for that. For instance, to obtain @group and @count virtual attributes, use _sph_group and _sph_count column names.

The CONNECTION string parameter can be used to specify the default searchd host, port and indexes for queries issued using this table. If no connection string is specified in CREATE TABLE, the index name "*" (ie. search all indexes) and localhost:3312 are assumed. The connection string syntax is as follows:

CONNECTION="sphinx://HOST:PORT/INDEXNAME"

You can change the default connection string later:

ALTER TABLE t1 CONNECTION="sphinx://NEWHOST:NEWPORT/NEWINDEXNAME";

You can also override all these parameters per-query.

As seen in the example, both the query text and the search options should be put into the WHERE clause on the search query column (ie. 3rd column); the options are separated by semicolons, and option names are separated from values by an equals sign. Any number of options can be specified. Available options are:

  • query - query text;
  • mode - matching mode. Must be one of "all", "any", "phrase", "boolean", or "extended". Default is "all";
  • sort - match sorting mode. Must be one of "relevance", "attr_desc", "attr_asc", "time_segments", or "extended". In all modes besides "relevance" attribute name (or sorting clause for "extended") is also required after a colon:
    ... WHERE query='test;sort=attr_asc:group_id';
    ... WHERE query='test;sort=extended:@weight desc, group_id asc';
    
  • offset - offset into result set, default is 0;
  • limit - amount of matches to retrieve from result set, default is 20;
  • index - names of the indexes to search:
    ... WHERE query='test;index=test1;';
    ... WHERE query='test;index=test1,test2,test3;';
    
  • minid, maxid - min and max document ID to match;
  • weights - comma-separated list of weights to be assigned to Sphinx full-text fields:
    ... WHERE query='test;weights=1,2,3;';
    
  • filter, !filter - comma-separated attribute name and a set of values to match:
    # only include groups 1, 5 and 19
    ... WHERE query='test;filter=group_id,1,5,19;';
    
    # exclude groups 3 and 11
    ... WHERE query='test;!filter=group_id,3,11;';
    
  • range, !range - comma-separated attribute name, min and max value to match:
    # include groups from 3 to 7, inclusive
    ... WHERE query='test;range=group_id,3,7;';
    
    # exclude groups from 5 to 25
    ... WHERE query='test;!range=group_id,5,25;';
    
  • maxmatches - per-query max matches value:
    ... WHERE query='test;maxmatches=2000;';
    
  • groupby - group-by function and attribute:
    ... WHERE query='test;groupby=day:published_ts;';
    ... WHERE query='test;groupby=attr:group_id;';
    
  • groupsort - group-by sorting clause:
    ... WHERE query='test;groupsort=@count desc;';
    
  • indexweights - comma-separated list of index names and weights to use when searching through several indexes:
    ... WHERE query='test;indexweights=idx_exact,2,idx_stemmed,1;';
    

One very important note: it is much more efficient to let Sphinx perform the sorting, filtering and slicing of the result set than to raise the max matches count and use WHERE, ORDER BY and LIMIT clauses on the MySQL side. This is for two reasons. First, Sphinx does a number of optimizations and performs better than MySQL on these tasks. Second, less data would need to be packed by searchd, transferred and unpacked by SphinxSE.

Additional query info besides the result set can be retrieved with the SHOW ENGINE SPHINX STATUS statement:

mysql> SHOW ENGINE SPHINX STATUS;
+--------+-------+-------------------------------------------------+
| Type   | Name  | Status                                          |
+--------+-------+-------------------------------------------------+
| SPHINX | stats | total: 25, total found: 25, time: 126, words: 2 | 
| SPHINX | words | sphinx:591:1256 soft:11076:15945                | 
+--------+-------+-------------------------------------------------+
2 rows in set (0.00 sec)

You can perform JOINs between a SphinxSE search table and tables using other engines. Here's an example with the "documents" table from example.sql:

mysql> SELECT content, date_added FROM test.documents docs
-> JOIN t1 ON (docs.id=t1.id) 
-> WHERE query="one document;mode=any";
+-------------------------------------+---------------------+
| content                             | date_added          |
+-------------------------------------+---------------------+
| this is my test document number two | 2006-06-17 14:04:28 | 
| this is my test document number one | 2006-06-17 14:04:28 | 
+-------------------------------------+---------------------+
2 rows in set (0.00 sec)

mysql> SHOW ENGINE SPHINX STATUS;
+--------+-------+---------------------------------------------+
| Type   | Name  | Status                                      |
+--------+-------+---------------------------------------------+
| SPHINX | stats | total: 2, total found: 2, time: 0, words: 2 | 
| SPHINX | words | one:1:2 document:2:2                        | 
+--------+-------+---------------------------------------------+
2 rows in set (0.00 sec)

7. Reporting bugs

Unfortunately, Sphinx is not yet 100% bug free (even though I'm working hard towards that), so you might occasionally run into some issues.

Reporting as much as possible about each bug is very important - because to fix it, I need to be able either to reproduce and debug the bug, or to deduce what's causing it from the information that you provide. So here are some instructions on how to do that.

Build-time issues

If Sphinx fails to build for some reason, please do the following:

  1. check that headers and libraries for your DBMS are properly installed (for instance, check that mysql-devel package is present);
  2. report Sphinx version and config file (be sure to remove the passwords!), MySQL (or PostgreSQL) configuration info, gcc version, OS version and CPU type (ie. x86, x86-64, PowerPC, etc):
    mysql_config
    gcc --version
    uname -a
    
  3. report the error message produced by configure or gcc (it is enough to include the error message itself only, not the whole build log).

Run-time issues

If Sphinx builds and runs, but there are any problems running it, please do the following:

  1. describe the bug (ie. both the expected behavior and actual behavior) and all the steps necessary to reproduce it;
  2. include Sphinx version and config file (be sure to remove the passwords!), MySQL (or PostgreSQL) version, gcc version, OS version and CPU type (ie. x86, x86-64, PowerPC, etc):
    mysql --version
    gcc --version
    uname -a
    
  3. build, install and run debug versions of all Sphinx programs (this is to enable a lot of additional internal checks, so-called assertions):
    make distclean
    ./configure --with-debug
    make install
    killall -TERM searchd
    
  4. reindex to check if any assertions are triggered (in this case, it's likely that the index is corrupted and causing problems);
  5. if the bug does not reproduce with debug versions, revert to non-debug and mention it in your report;
  6. if the bug could be easily reproduced with a small (1-100 record) part of your database, please provide a gzipped dump of that part;
  7. if the problem is related to searchd, include relevant entries from searchd.log and query.log in your bug report;
  8. if the problem is related to searchd, try running it in console mode and check if it dies with an assertion:
    ./searchd --console
    
  9. if any program dies with an assertion, provide the assertion message.

Debugging assertions, crashes and hangups

If any program dies with an assertion, crashes without an assertion or hangs up, you would additionally need to generate a core dump and examine it.

  1. enable core dumps. On most Linux systems, this is done using ulimit:
    ulimit -c 32768
    
  2. run the program and try to reproduce the bug;
  3. if the program crashes (either with or without an assertion), find the core file in the current directory (the program should typically print a "Segmentation fault (core dumped)" message);
  4. if the program hangs, use kill -SEGV from another console to force it to exit and dump core:
    kill -SEGV HANGED-PROCESS-ID
    
  5. use gdb to examine the core file and obtain a backtrace:
    gdb ./CRASHED-PROGRAM-FILE-NAME CORE-DUMP-FILE-NAME
    (gdb) bt
    (gdb) quit
    

Note that HANGED-PROCESS-ID, CRASHED-PROGRAM-FILE-NAME and CORE-DUMP-FILE-NAME must all be replaced with specific numbers and file names. For example, a debugging session for a hung searchd would look like this:

# kill -SEGV 12345
# ls *core*
core.12345
# gdb ./searchd core.12345
(gdb) bt
...
(gdb) quit

Note that ulimit is not server-wide and only affects the current shell session. This means that you will not have to restore any server-wide limits - but if you relogin, you will have to set ulimit again.

Core dumps should be placed in the current working directory (and Sphinx programs do not change it), so this is where you would look for them.

Please do not immediately remove the core file because there could be additional helpful information which could be retrieved from it. You do not need to send me this file (as the debug info there is closely tied to your system) but I might need to ask you a few additional questions about it.

8. sphinx.conf options reference

8.1. Data source configuration options

8.1.1. type

Data source type. Available types are mysql, pgsql and xmlpipe.

This option is mandatory.

Example:
type = mysql

8.1.2. strip_html

Whether to strip HTML formatting from incoming full-text data. 0 means that stripping should be disabled; 1 that it should be enabled.

Stripping currently works with the mysql and pgsql sources, and is not yet implemented for xmlpipe. It should work with properly formed HTML (such as well-formed XHTML) but MAY bug on malformed HTML (such as with stray <'s or unclosed >'s).

This option is optional. Default value is 0 (do not strip HTML). This option only applies to mysql and pgsql source types.

Example:
strip_html = 0

8.1.3. index_html_attrs

Specifies which HTML attributes' contents should still be indexed when stripping HTML. The format is a per-tag enumeration of indexable attributes, as shown in the example below.

This option is optional. Default value is empty (do not index anything). This option only applies to mysql and pgsql source types.

Example:
index_html_attrs = img=alt,title; a=title;

8.1.4. sql_host

SQL server host to connect to.

This option is mandatory. This option only applies to mysql and pgsql source types.

Example:
sql_host = localhost

8.1.5. sql_port

SQL server IP port to connect to.

This option is optional. Default value is 3306 for mysql source type and 5432 for pgsql type. This option only applies to mysql and pgsql source types.

Example:
sql_port = 3306

8.1.6. sql_user

SQL user to use on sql_host.

This option is mandatory. This option only applies to mysql and pgsql source types.

Example:
sql_user = test

8.1.7. sql_pass

SQL user password to use on sql_host.

This option is mandatory. This option only applies to mysql and pgsql source types.

Example:
sql_pass = mysecretpassword

8.1.8. sql_db

SQL database (in MySQL terms) to use after connecting, and to perform further queries in.

This option is mandatory. This option only applies to mysql and pgsql source types.

Example:
sql_db = test

8.1.9. sql_sock

UNIX socket name to connect to local MySQL server.

On Linux, it would typically be /var/lib/mysql/mysql.sock. On FreeBSD, it would typically be /tmp/mysql.sock.

This option is optional. This option only applies to mysql source type.

Example:
sql_sock = /tmp/mysql.sock

8.1.10. sql_query_pre

Pre-fetch query, or pre-query.

There might be multiple pre-queries specified. They are executed before the main fetch query in exactly the same order they were specified in config file. Pre-query results are ignored.

Pre-queries are useful to set up the encoding, mark records which are going to be indexed, update internal counters, etc.

This option is optional. This option only applies to mysql and pgsql source types.

Example:
sql_query_pre = SET CHARACTER_SET_RESULTS=utf8
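
When several pre-queries are needed, for example to set the encoding and to mark the records about to be indexed, they are simply listed one after another; the table and column names below are illustrative:

sql_query_pre = SET NAMES utf8
sql_query_pre = UPDATE documents SET to_index=1 WHERE to_index=0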

8.1.11. sql_query

Main document fetch query.

There can be only one main query. This is the query which is used to retrieve documents from SQL server.

You can specify up to 32 fields (formally, up to SPH_MAX_FIELDS from sphinx.h). All of the fields which are not the document ID or attributes will be full-text indexed.

Document ID MUST be the very first field, and it MUST BE UNIQUE UNSIGNED NON-ZERO 32-BIT INTEGER NUMBER.

This option is mandatory. This option only applies to mysql and pgsql source types.

Example:
sql_query = \
	SELECT id, group_id, UNIX_TIMESTAMP(date_added) AS date_added, \
		title, content \
	FROM documents

8.1.12. sql_query_range

Query which fetches min/max document IDs range to be used in ranged query (see Section 3.6, “Ranged queries”).

This option is optional. This option only applies to mysql and pgsql source types.

Example:
sql_query_range = SELECT MIN(id),MAX(id) FROM documents

8.1.13. sql_range_step

How many records to index per ranged query step (see Section 3.6, “Ranged queries”).

This option is optional. Default value is 1024. This option only applies to mysql and pgsql source types.

Example:
sql_range_step = 1000

8.1.14. sql_group_column

Integer attribute column declaration. Specified column should be present among those fetched by Section 8.1.11, “sql_query”.

There might be multiple attributes specified.

This option is optional. This option only applies to mysql and pgsql source types.

Example:
sql_group_column = group_id    # declare 1st attribute
sql_group_column = author_id   # declare 2nd attribute

8.1.15. sql_date_column

UNIX timestamp attribute column declaration. Specified column should be present among those fetched by Section 8.1.11, “sql_query”.

There might be multiple attributes specified.

This option is optional. This option only applies to mysql and pgsql source types.

Example:
sql_date_column = added_ts

8.1.16. sql_str2ordinal_column

Ordinal string number attribute column declaration. Specified column should be present among those fetched by Section 8.1.11, “sql_query”.

When indexing such attributes, string values are fetched from the database, stored, sorted and then replaced by their ordinal numbers (integers) in the sorted strings array. These integers can then be used when searching to sort by string values lexicographically.

WARNING, all such string values are going to be stored in RAM while indexing!

WARNING, "C" locale will be used when sorting!

There might be multiple attributes specified.

This option is optional. This option only applies to mysql and pgsql source types.

Example:
sql_str2ordinal_column = author_name

8.1.17. sql_query_post

Post-fetch query, executed immediately after the main fetch query (Section 8.1.11, “sql_query”) ends. If this query produces errors, they are reported as warnings, but indexing is NOT terminated. Its result set is ignored.

Note that indexing is NOT completed at the point when post-query gets executed, and further indexing might fail.

This option is optional. This option only applies to mysql and pgsql source types.

Example:
sql_query_post = DROP TABLE my_tmp_table

8.1.18. sql_query_post_index

Post-index query, executed when indexing is successfully completed. If this query produces errors, they are reported as warnings, but indexing is NOT terminated. Its result set is ignored.

In this query, you can use the $maxid macro, which expands to the maximum document ID actually fetched from the database during indexing.

This option is optional. This option only applies to mysql and pgsql source types.

Example:
sql_query_post_index = REPLACE INTO counters ( id, val ) \
    VALUES ( 'max_indexed_id', $maxid )

8.1.19. sql_query_info

Document info query. Only used by the CLI search utility to fetch and display document information, and only intended for debugging purposes.

This query fetches the info to be displayed by the CLI search utility for a given document ID. Therefore, it must contain the $id macro.

This option is optional. This option only applies to mysql and pgsql source types.

Example:
sql_query_info = SELECT * FROM documents WHERE id=$id

8.1.20. xmlpipe_command

Command which will be executed in xmlpipe mode to obtain documents. See Section 3.7, “XMLpipe data source” for output format description.

This option is mandatory. This option only applies to xmlpipe source type.

Example:
xmlpipe_command = cat /home/sphinx/test.xml

8.2. searchd program configuration options

8.2.1. seamless_rotate

This feature prevents short periods of searchd being inaccessible while rotating indexes with huge attribute and/or dictionary files (.spa and .spi respectively).

Normally, rotating an index works as follows:

  1. new queries are temporarily rejected (with "retry" error code);
  2. searchd waits for all currently running queries to finish;
  3. old index is deallocated and its files are renamed;
  4. new index files are renamed and required RAM is allocated;
  5. new index attribute and dictionary data is preloaded to RAM;
  6. searchd resumes serving queries from new index.

However, if there's a lot of attribute or dictionary data, then the preloading step could take noticeable time - up to several minutes when preloading 1-5+ GB files.

With seamless rotate enabled, rotating works as follows:

  1. new index RAM storage is allocated;
  2. new index attribute and dictionary data is asynchronously preloaded to RAM;
  3. on success, old index is deallocated and both indexes' files are renamed;
  4. on failure, new index is deallocated;
  5. at any given moment, queries are served either from old or new index copy.

Seamless rotate comes at the cost of higher peak memory usage during the rotation (because both old and new copies of the .spa and .spi data need to be in RAM while the new copy is being preloaded). Average usage stays the same.

This option is optional. Default value is 1, which means to enable seamless rotate.

Example:
seamless_rotate = 1

A. Sphinx revision history

A.1. Version 0.9.7, 02 apr 2007

  • added support for sql_str2ordinal_column
  • added support for up to 5 sort-by attrs (in extended sorting mode)
  • added support for separate groups sorting clause (in group-by mode)
  • added support for on-the-fly attribute updates (PRE-ALPHA; will change heavily; use for preliminary testing ONLY)
  • added support for zero/NULL attributes
  • added support for 0.9.7 features to SphinxSE
  • added support for n-grams (alpha, 1-grams only for now)
  • added support for warnings reported to client
  • added support for exclude-filters
  • added support for prefix and infix indexing (see max_prefix_len, max_infix_len)
  • added @* syntax to reset current field to query language
  • added removal of duplicate entries in query index order
  • added PHP API workarounds for PHP signed/unsigned braindamage
  • added locks to avoid two concurrent indexers working on same index
  • added check for existing attributes vs. docinfo=none case
  • improved groupby code a lot (better precision, and up to 25x faster in extreme cases)
  • improved error handling and reporting
  • improved handling of broken indexes (reports error instead of hanging/crashing)
  • improved mmap() limits for attributes and wordlists (now able to map over 4 GB on x64 and over 2 GB on x32 where possible)
  • improved malloc() pressure in head daemon (search time should not degrade with time any more)
  • improved test.php command line options
  • improved error reporting (distributed query, broken index etc issues now reported to client)
  • changed default network packet size to be 8M, added extra checks
  • fixed division by zero in BM25 on 1-document collections (in extended matching mode)
  • fixed .spl files getting unlinked
  • fixed crash in schema compatibility test
  • fixed UTF-8 Russian stemmer
  • fixed requested matches count when querying distributed agents
  • fixed signed vs. unsigned issues everywhere (ranged queries, CLI search output, and obtaining docid)
  • fixed potential crashes vs. negative query offsets
  • fixed 0-match docs vs. extended mode vs. stats
  • fixed group/timestamp filters being ignored if querying from older clients
  • fixed docs to mention pgsql source type
  • fixed issues with explicit '&' in extended matching mode
  • fixed wrong assertion in SBCS encoder
  • fixed crashes with no-attribute indexes after rotate

A.2. Version 0.9.7-RC2, 15 dec 2006

  • added support for extended matching mode (query language)
  • added support for extended sorting mode (sorting clauses)
  • added support for SBCS excerpts
  • added mmap()ing for attributes and wordlist (improves search time, speeds up fork() greatly)
  • fixed attribute name handling to be case insensitive
  • fixed default compiler options to simplify post-mortem debugging (added -g, removed -fomit-frame-pointer)
  • fixed rare memory leak
  • fixed "hello hello" queries in "match phrase" mode
  • fixed issue with excerpts, texts and overlong queries
  • fixed logging multiple index name (no longer tokenized)
  • fixed trailing stopword not flushed from tokenizer
  • fixed boolean evaluation
  • fixed pidfile being wrongly unlink()ed on bind() failure
  • fixed --with-mysql-includes/libs (they conflicted with well-known paths)
  • fixes for 64-bit platforms

A.3. Version 0.9.7-RC1, 26 oct 2006

  • added alpha index merging code
  • added an option to decrease max_matches per-query
  • added an option to specify IP address for searchd to listen on
  • added support for unlimited amount of configured sources and indexes
  • added support for group-by queries
  • added support for /2 range modifier in charset_table
  • added support for arbitrary amount of document attributes
  • added logging filter count and index name
  • added --with-debug option to configure to compile in debug mode
  • added -DNDEBUG when compiling in default mode
  • improved search time (added doclist size hints, in-memory wordlist cache, and used VLB coding everywhere)
  • improved (refactored) SQL driver code (adding new drivers should be very easy now)
  • improved excerpts generation
  • fixed issue with empty sources and ranged queries
  • fixed querying purely remote distributed indexes
  • fixed suffix length check in English stemmer in some cases
  • fixed UTF-8 decoder for codes over U+20000 (for CJK)
  • fixed UTF-8 encoder for 3-byte sequences (for CJK)
  • fixed overshort (less than min_word_len) words prepended to next field
  • fixed source connection order (indexer does not connect to all sources at once now)
  • fixed line numbering in config parser
  • fixed some issues with index rotation

A.4. Version 0.9.6, 24 jul 2006

  • added support for empty indexes
  • added support for multiple sql_query_pre/post/post_index
  • fixed timestamp ranges filter in "match any" mode
  • fixed configure issues with --without-mysql and --with-pgsql options
  • fixed building on Solaris 9

A.5. Version 0.9.6-RC1, 26 jun 2006

  • added boolean queries support (experimental, beta version)
  • added simple file-based query cache (experimental, beta version)
  • added storage engine for MySQL 5.0 and 5.1 (experimental, beta version)
  • added GNU style configure script
  • added new searchd protocol (all binary, and should be backwards compatible)
  • added distributed searching support to searchd
  • added PostgreSQL driver
  • added excerpts generation
  • added min_word_len option to index
  • added max_matches option to searchd, removed hardcoded MAX_MATCHES limit
  • added initial documentation, and a working example.sql
  • added support for multiple sources per index
  • added soundex support
  • added group ID ranges support
  • added --stdin command-line option to search utility
  • added --noprogress option to indexer
  • added --index option to search
  • fixed UTF-8 decoder (3-byte codepoints did not work)
  • fixed PHP API to handle big result sets faster
  • fixed config parser to handle empty values properly
  • fixed redundant time(NULL) calls in time-segments mode