A customer showed me the following error message, which appeared when Symfony tried to create the database:
General error: 1709 Index column size too large. The maximum column size is 767 bytes.
The cause is an unusual MySQL setting in Strato's managed hosting packages, e.g. STRATO PowerWeb. Unfortunately Strato will not change this setting, but you can change the charset in Symfony's doctrine.yaml (config/packages/doctrine.yaml); Symfony then works fine on a Strato server.
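A minimal sketch of the relevant doctrine.yaml section. The assumption here is that Doctrine's default charset utf8mb4 (4 bytes per character, so a VARCHAR(255) index needs 1020 bytes) is what exceeds the 767-byte limit; plain utf8 (3 bytes per character) stays below it:

```yaml
# config/packages/doctrine.yaml
doctrine:
    dbal:
        # utf8mb4 indexes on VARCHAR(255) exceed Strato's 767-byte limit;
        # plain utf8 keeps them under it.
        charset: utf8
        default_table_options:
            charset: utf8
            collate: utf8_unicode_ci
```

Note that plain utf8 in MySQL cannot store 4-byte characters such as emoji; that is the trade-off for staying within the index limit.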
To install Elasticsearch for testing on a single Amazon EC2 instance on AWS, you can do the following:
Boot an EC2 instance that is not too small in terms of RAM, at least an m4.large with 8 GB RAM and 2 vCPUs; Elasticsearch is already demanding on memory, and Logstash is also very resource-hungry. As the operating system I chose Ubuntu 16 (AMI ami-1e339e71).
Then you can assign an Elastic IP to the instance, so that you can easily replace the instance later and still keep the same IP.
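The two steps above can be sketched with the AWS CLI (a sketch only, assuming a configured CLI and a VPC; the key pair and security group IDs are placeholders):

```shell
#!/bin/sh
# Launch an m4.large Ubuntu 16 instance with the AMI from the text.
INSTANCE_ID=$(aws ec2 run-instances \
    --image-id ami-1e339e71 \
    --instance-type m4.large \
    --key-name my-key \
    --security-group-ids sg-xxxxxxxx \
    --query 'Instances[0].InstanceId' \
    --output text)

# Allocate an Elastic IP and bind it to the instance, so the instance
# can be replaced later while the public IP stays the same.
ALLOC_ID=$(aws ec2 allocate-address --domain vpc \
    --query 'AllocationId' --output text)
aws ec2 associate-address --instance-id "$INSTANCE_ID" --allocation-id "$ALLOC_ID"
```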
If you want to build autocomplete with Elasticsearch with real-time "search-as-you-type" functionality, Elasticsearch offers a very fast completion suggester.
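As a sketch (index and field names are made up; request bodies in Elasticsearch 7.x format), a completion field and a suggest query look roughly like this:

```json
PUT /media
{
  "mappings": {
    "properties": {
      "suggest": { "type": "completion" }
    }
  }
}

POST /media/_search
{
  "suggest": {
    "title_suggest": {
      "prefix": "michael jack",
      "completion": { "field": "suggest" }
    }
  }
}
```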
The problem is that it only finds results where the query matches the beginning of the string: a search for "Jackson" does not find the entry "Michael Jackson", and a search for "Thriller Michael Jackson" does not find "Michael Jackson Thriller".
The sobering answer: this simply cannot be done with the completion suggester, because its algorithm only supports prefix matching.
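The prefix-only behaviour can be illustrated with a tiny Python sketch (a toy model of the matching rule, not Elasticsearch itself):

```python
def completion_suggest(entries, query):
    """Toy model of the completion suggester: it only returns entries
    whose full text starts with the typed prefix."""
    q = query.lower()
    return [e for e in entries if e.lower().startswith(q)]

entries = ["Michael Jackson", "Michael Jackson Thriller"]

print(completion_suggest(entries, "Michael Jack"))  # prefix matches: both found
print(completion_suggest(entries, "Jackson"))       # not a prefix: no result
```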
To get matching results anyway, I generate the autocomplete data with the same query that produces the search results. This performs somewhat worse, but the autocomplete suggestions then agree with the search results shown after submitting, and do not irritate the user.
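A sketch of such a shared query (index and field names are hypothetical): a match query with operator "and" matches the terms in any order, so a search for "thriller michael jackson" also finds "Michael Jackson Thriller":

```json
POST /media/_search
{
  "size": 5,
  "query": {
    "match": {
      "title": {
        "query": "thriller michael jackson",
        "operator": "and"
      }
    }
  }
}
```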
Elasticsearch does not make this easy to understand, so it is helpful to know the following terms.
An analyzer computes the data for the index in advance and stores the result once, when the data is updated. From this set of tokens, the search can then determine its results.
An analyzer consists of 3 parts, which are applied in this order:
1. Character filters
   - html_strip: strips HTML tags and decodes HTML entities such as &amp;
   - mapping: replaces all occurrences of one string with another
   - pattern_replace: replaces each regex match with a replacement string
2. Tokenizer
   - A tokenizer splits the string into tokens, i.e. individual words and phrases
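Character filters and the tokenizer are wired together in a custom analyzer in the index settings; a minimal sketch (index, filter and analyzer names are hypothetical):

```json
PUT /media
{
  "settings": {
    "analysis": {
      "char_filter": {
        "and_mapping": {
          "type": "mapping",
          "mappings": ["& => and"]
        }
      },
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "char_filter": ["html_strip", "and_mapping"],
          "tokenizer": "standard"
        }
      }
    }
  }
}
```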