First post; rather than asking a question, I'm proposing an answer to one I couldn't find answered anywhere, in case it helps someone else.
The issue was saving file uploads locally and finding a clean way to handle duplicate file names.
Given a filename such as filename.ext, this will produce the first filename of the form filename-\d+.ext that doesn't already exist:
$file = "upload.jpg";

// Keep bumping the "-N" counter until a name is found that doesn't exist yet.
// Assumes the name has an extension (something after a dot).
while (is_file($file))
{
    if (preg_match('/-(\d+)(\.[^.]*)$/', $file, $matches))
    {
        // Already has a counter: "upload-3.jpg" -> "upload-4.jpg"
        $file = preg_replace('/-\d+(\.[^.]*)$/', '-' . ($matches[1] + 1) . '${1}', $file);
    }
    else
    {
        // First collision: insert "-1" before the extension, "upload.jpg" -> "upload-1.jpg"
        $file = preg_replace('/(\.[^.]*)$/', '-1${1}', $file);
    }
}

echo $file;
Hope that helps someone.
This algorithm doesn't scale. Uploading n files with the same name makes each upload do O(n) work, so the total running time is O(n²), including O(n²) filesystem accesses. That's not pretty for a server app, and it can't really be fixed within this approach, because every candidate name has to be checked against the filesystem.
Better solutions:
Store files under generated names and use a database to map them back to human-readable names, if necessary (see the sketch below).
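A minimal sketch of that idea, assuming a PDO connection and a hypothetical uploads table with stored_name and original_name columns; the table, column, and form field names are illustrative, not from the original answer:

<?php
// Store the upload under a generated, collision-free name and record the
// human-readable name in the database. Table/column names are assumptions.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$originalName = $_FILES['upload']['name'];
$extension    = pathinfo($originalName, PATHINFO_EXTENSION);

// Random stored name: no need to probe the filesystem for duplicates.
$storedName = bin2hex(random_bytes(16)) . ($extension !== '' ? '.' . $extension : '');

if (move_uploaded_file($_FILES['upload']['tmp_name'], __DIR__ . '/uploads/' . $storedName))
{
    $stmt = $pdo->prepare('INSERT INTO uploads (stored_name, original_name) VALUES (?, ?)');
    $stmt->execute([$storedName, $originalName]);
}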
The simplest solution is to just attach a timestamp of the form YYYYMMDDHHMMSS; you won't get conflicts throughout your whole life ;) and the time complexity is minimal. Another option is to skip the name check entirely and store the file under the timestamped name directly: if you're uploading "1.jpg", save it as 1(timestamp).jpg, so you don't even need to iterate through the filesystem. Hope it helps.
For example, in PHP:
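A rough sketch of that idea; the uploads/ directory and the 'upload' form field name are assumptions, not from the original answer:

<?php
// Append a timestamp to the uploaded file's name so no directory scan is needed.
$originalName = $_FILES['upload']['name'];                    // e.g. "1.jpg"
$base         = pathinfo($originalName, PATHINFO_FILENAME);   // "1"
$extension    = pathinfo($originalName, PATHINFO_EXTENSION);  // "jpg"

// date('YmdHis') gives YYYYMMDDHHMMSS, e.g. "1-20240131094215.jpg".
// Two uploads of the same name within one second would still collide,
// so a random suffix can be appended if that matters.
$storedName = $base . '-' . date('YmdHis') . ($extension !== '' ? '.' . $extension : '');

move_uploaded_file($_FILES['upload']['tmp_name'], __DIR__ . '/uploads/' . $storedName);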
I've made my own solution. Here it is: