I'm trying to develop a system that converts a string into a unique integer value. For example, the word "account" might map to the numerical value 0891, and no other word could possibly be converted to 0891 by the same conversion process. It does not, however, need to be reversible from the generated integer back to the string.
At the same time, the mapping must respect the word's alphabetical structure: words such as "accuracy" and "announcement" must get a number greater than 0891, and words such as "a", "abacus" and "abbreviation" must get a number less than 0891.
The purpose of this application is to serve as something like an index or primary key. The reason I'm not using an incrementing index is partly security, and partly that such an index depends on the number of items in the set.
(e.g.)
[0] A, [1] B, [2] C, [3] D, [4] E, [5] F
Each of the above letters has a corresponding index; E has the index 4.
However, if the data is suddenly increased or decreased and then sorted:
[0] A, [1] AA, [2] AAB, [3] C, [4] D, [5] DA, [6] DZ, [7] E, [8] F
E now has the index of 7
Each word must have a unique, independent integer equivalent, with the corresponding weights.
I need to know whether there exists an algorithm that can do the above.
Any help will be appreciated.
Yes, but mostly no.
Yes, as in Stochastically's answer: by setting up a base-26 encoding (or base 128 for all of ASCII), you could theoretically hash each string uniquely.
On the other hand, this is impractical. Not only would the numbers get too big for most languages' built-in integer types, but this would also likely be an incredibly expensive process. Furthermore, if strings are allowed to be infinitely long, a form of Cantor's diagonal argument applies and "breaks" this algorithm as well: the set of infinite strings is uncountable, so no one-to-one mapping from it into the (countably infinite) integers can exist.
For simplicity, I'll assume `a` to `z` are the only characters allowed in words. Let's assign numbers to strings of up to length 2:
Now, by just looking at that, you should be able to appreciate that, to determine the offset of any given shorter-length string, you'd need to know the maximum length allowed. Let's assume we know this number.
For algorithmic simplicity, we would prefer to start at 27 (feel free to try to figure it out starting from 0; you'll need some special cases):
So, essentially, the left-most character contributes a value of `27*(1-26)` (for a-z), and the next character to the right, if one exists, contributes `1-26` (for a-z) to the value of the string.
This can be generalized: the left-most character contributes `(1-26)*27^(len-1)`, the next `(1-26)*27^(len-2)`, and so on, down to `(1-26)*27^0` (where `len` is the maximum allowed length).
.Which leads me to some Java code:
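The Java code itself didn't survive in this copy of the answer, but the scheme described above can be sketched as follows. This is a minimal reconstruction, not the answer's original code; `MAX_LEN = 13` is an assumed maximum length (matching the "length 13" mentioned below), and `BigInteger` is used because the values grow quickly:

```java
import java.math.BigInteger;

public class WordCode {
    // Maximum word length the code must accommodate (an assumed parameter).
    static final int MAX_LEN = 13;

    // Encodes a lowercase a-z word so that dictionary order is preserved:
    // the character at position i contributes (1..26) * 27^(MAX_LEN - 1 - i).
    static BigInteger encode(String word) {
        BigInteger base = BigInteger.valueOf(27);
        BigInteger result = BigInteger.ZERO;
        for (int i = 0; i < word.length(); i++) {
            int letter = word.charAt(i) - 'a' + 1; // a=1 ... z=26; 0 means "no letter here"
            result = result.add(BigInteger.valueOf(letter)
                    .multiply(base.pow(MAX_LEN - 1 - i)));
        }
        return result;
    }

    public static void main(String[] args) {
        // Alphabetically earlier words get strictly smaller codes:
        System.out.println(encode("a"));
        System.out.println(encode("abacus"));
        System.out.println(encode("account"));
        System.out.println(encode("accuracy"));
    }
}
```

Because unused trailing positions contribute 0, which is less than the 1 contributed by `a`, `"a"` sorts before `"aa"`, `"ab"`, and so on.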
Yes, those are some reasonably big numbers for strings of just up to length 13, but without sequentially assigning numbers to the words of an actual dictionary you can't do any better (except that you can start at 0, which is, relatively speaking, a small difference), since there are that many possible letter sequences.
Assign a unique prime value to each letter of the alphabet, in increasing order (the order is not strictly necessary).
Please note: since a product of primes has a unique factorization (it can only be produced by multiplying those particular primes), it will give you a unique value for each word.
Algorithm:
`prime` — an array storing the prime value corresponding to each letter.
Each character's prime is raised to the power (length - 1) to weight the place at which the character occurs, to maintain a dictionary order.
This algorithm will produce values large enough to overflow the range of your integer type.
Also: words with smaller lengths may give lower values than some words with larger lengths, which may affect your dictionary order; but I'm not sure why you want a dictionary order, as uniqueness will be maintained here.
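The approach above can be sketched roughly as below. This is a guess at the details, since the answer's code block is missing: in particular, the exponent scheme (here, length minus position, so the leftmost letter gets the largest exponent) is an assumption, and as noted, the resulting values do not follow dictionary order:

```java
import java.math.BigInteger;

public class PrimeCode {
    // First 26 primes, one per letter a-z (the "prime" array described above).
    static final int[] PRIMES = {
        2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43,
        47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101
    };

    // Multiplies each letter's prime, raised to a position-dependent power,
    // so that anagrams such as "ab" and "ba" still get distinct values.
    static BigInteger encode(String word) {
        BigInteger result = BigInteger.ONE;
        int len = word.length();
        for (int i = 0; i < len; i++) {
            BigInteger p = BigInteger.valueOf(PRIMES[word.charAt(i) - 'a']);
            result = result.multiply(p.pow(len - i)); // leftmost letter: largest exponent
        }
        return result;
    }
}
```

For example, `"ab"` encodes as 2^2 * 3^1 = 12, while `"ba"` encodes as 3^2 * 2^1 = 18.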
This is not possible with the constraints you have given, unless you impose a maximum length.
Assume that `k("a")` and `k("b")` are the codes of these two strings. With your constraints, you are looking for a unique integer number that falls in between these two values, with `k("a") < k("a....a") < k("b")`. As there is an infinite number of strings of the form `"a....a"` (and `"akjhdsfkjhs"`) that would need to fit in between the two codes, such a general, unique, order-preserving, fixed-length code cannot exist for strings of arbitrary length: you would need as many integers as there are strings, and since strings are not bounded in length, this cannot work. Drop either generality (don't allow inserting new strings), uniqueness (allow collisions, e.g. use the first four letters as the code), the unbounded length (limit it to e.g. 3 characters), or the order-preserving property.
If you don't have any limit on the number of bytes these integers can occupy, then the underlying (e.g. ASCII) byte codes of the characters give you an integer representation. Equivalently, assign A=0, B=1, up to Z=25, and then the word itself is the integer written in base 26.
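A minimal sketch of that base-26 reading (the class name `Base26` is mine):

```java
import java.math.BigInteger;

public class Base26 {
    // Reads an upper-case A-Z word as a base-26 numeral with A=0, B=1, ..., Z=25.
    static BigInteger encode(String word) {
        BigInteger base = BigInteger.valueOf(26);
        BigInteger result = BigInteger.ZERO;
        for (char c : word.toCharArray()) {
            result = result.multiply(base).add(BigInteger.valueOf(c - 'A'));
        }
        return result;
    }
}
```

Note the caveat of A=0: a leading A acts like a leading zero, so `"A"` and `"AA"` both encode to 0. The a=1..26 weighting from the earlier answer avoids this.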
You can do this:
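The code block from this answer was not preserved. As a placeholder, here is one plausible sketch in the spirit of the preceding answer, reading the string's ASCII bytes as a single big-endian non-negative integer (the class name `ByteCode` and the exact approach are my assumptions, not the answer's original code):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;

public class ByteCode {
    // Interprets the string's ASCII bytes as one big-endian non-negative integer.
    static BigInteger encode(String s) {
        return new BigInteger(1, s.getBytes(StandardCharsets.US_ASCII));
    }
}
```

For example, `"a"` (byte 97) encodes to 97, and `"ab"` encodes to 97*256 + 98 = 24930.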
Enjoy!