I finally switched to a Solr index for fast lookups, since my table will contain more than 4 million entries that must be searched quickly and without consuming a lot of memory. Here is my solution, in case someone runs into the same problem.
import java.util.ArrayList;
import java.util.List;
import org.apache.solr.client.solrj.SolrServer;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.SolrDocumentList;
import org.apache.solr.common.params.ModifiableSolrParams;

public List<Synonym> completeSynonym(String query) {
    List<Synonym> filteredSynonyms = new ArrayList<Synonym>();
    // Skip very short inputs before even contacting Solr
    if (query == null || query.length() <= 1) {
        return filteredSynonyms;
    }
    String sUrl = "http://......solr/synonym_core";
    SolrServer solr = new HttpSolrServer(sUrl);
    ModifiableSolrParams parameters = new ModifiableSolrParams();
    parameters.set("q", "*:*");                           // match everything...
    parameters.set("fq", "synonym:\"" + query + "\"~0");  // ...then filter on the synonym field
    parameters.set("fl", "id,synonym");                   // return only the id and synonym fields
    try {
        QueryResponse response = solr.query(parameters);
        SolrDocumentList dl = response.getResults();
        for (SolrDocument doc : dl) {
            Synonym s = new Synonym();
            s.setSynonym_id((int) doc.getFieldValue("id"));
            s.setSynonymName(doc.getFieldValue("synonym").toString());
            filteredSynonyms.add(s);
        }
    } catch (SolrServerException e) {
        e.printStackTrace();
    }
    return filteredSynonyms;
}
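For completeness, here is roughly how such a method can be wired to a PrimeFaces autocomplete on the page. This is a sketch: the bean name `synonymBean`, the `selectedSynonym` property, and the `synonymConverter` (needed because the items are `Synonym` objects, not strings) are illustrative assumptions, not part of the original answer.

```xml
<!-- minQueryLength="2" mirrors the length check in completeSynonym,
     so no request is sent for single-character input -->
<p:autoComplete value="#{synonymBean.selectedSynonym}"
                completeMethod="#{synonymBean.completeSynonym}"
                var="syn"
                itemLabel="#{syn.synonymName}"
                itemValue="#{syn}"
                converter="synonymConverter"
                minQueryLength="2" />
```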
Solved: PrimeFaces autocomplete from a huge database not acting fast