I have a CSV file, and I want to do a search-and-replace on each line to turn it into a SQL statement. I have come up with this Perl one-liner (wrapped in a bash script):
#!/bin/bash
perl -pi -e "s/(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*),(.*)/REPLACE INTO student (ID, SIS_ID, STUDENT_NUM, USER_ID, OTHER_USER_ID) VALUES (REPLACE(uuid(), '-', ''), '\$24', '\$26', '\$2', '\$27');/g" $1
However, on a one-line file this takes about 15 seconds to run. As you can imagine, on a file of thousands of lines it takes hours.
Is there another way I can write the above that would speed up the search and replace? I'm not married to Perl. I'm using Mac 10.9.5, bash shell.
Try replacing each `(.*)` with `([^,]+)`. A greedy `(.*)` happily runs past commas, so with 28 of them the regex engine has to backtrack through an enormous number of ways to divide the line between the groups before it finds a split where every literal comma lines up. `[^,]+` can never cross a comma, so each group is pinned down immediately and the match completes quickly.
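A sketch of the revised one-liner, using `printf` and brace expansion to build the 28-group pattern instead of writing it out longhand (the generated pattern is equivalent to the handwritten one). Two caveats: `[^,]+` requires every field to be non-empty, so use `[^,]*` if your CSV can have empty fields; and I've excluded `\n` from the class so the last group doesn't swallow the newline that `-p` carries along, matching the behavior of `(.*)`.

```shell
#!/bin/bash
# Build "([^,\n]+)," repeated 27 times, then one final "([^,\n]+)".
# %.0s consumes each brace-expansion argument while printing nothing.
pat=$(printf '([^,\n]+),%.0s' {1..27})'([^,\n]+)'

# Same replacement as before; only the capture groups changed.
perl -pi -e "s/$pat/REPLACE INTO student (ID, SIS_ID, STUDENT_NUM, USER_ID, OTHER_USER_ID) VALUES (REPLACE(uuid(), '-', ''), '\$24', '\$26', '\$2', '\$27');/" "$1"
```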
Rather than search and replace, I'd probably parse the CSV file and construct the SQL statements line by line. It doesn't look like you're doing anything that needs the text-matching capabilities of a regular expression.
Edit: see my answer on your other, very similar, potentially duplicate post for the "correct" way to do this.
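A sketch of that line-by-line approach, assuming the same simple comma-only CSV as the question (no quoted fields containing commas; for real-world CSV you'd want something like the `Text::CSV` module). The column numbers carried over from the regex become zero-based array indices, and the output goes to stdout rather than editing the file in place:

```shell
#!/bin/bash
# Split each line on commas instead of matching 28 regex groups.
# split's cost is linear in the line length, with no backtracking at all.
perl -ne '
  chomp;
  my @f = split /,/, $_, -1;   # -1 keeps trailing empty fields
  # $24, $26, $2, $27 in the original regex are $f[23], $f[25], $f[1], $f[26]
  # \x27 is a single quote, kept out of the shell-quoted program text
  printf "REPLACE INTO student (ID, SIS_ID, STUDENT_NUM, USER_ID, OTHER_USER_ID) "
       . "VALUES (REPLACE(uuid(), \x27-\x27, \x27\x27), \x27%s\x27, \x27%s\x27, \x27%s\x27, \x27%s\x27);\n",
    $f[23], $f[25], $f[1], $f[26];
' "$1"
```

Run it as `./tosql.sh students.csv > students.sql` (script and file names are placeholders). This should finish in well under a second for thousands of lines.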