As part of a larger personal project I'm working on, I'm attempting to separate out inline dates from a variety of text sources.
For example, I have a large list of strings (usually English sentences or statements) that come in a variety of forms:
Central design committee session Tuesday 10/22 6:30 pm
Th 9/19 LAB: Serial encoding (Section 2.2)
There will be another one on December 15th for those who are unable to make it today.
Workbook 3 (Minimum Wage): due Wednesday 9/18 11:59pm
He will be flying in Sept. 15th.
While these dates are in-line with natural text, none of them are in specifically natural language forms themselves (e.g., there's no "The meeting will be two weeks from tomorrow"—it's all explicit).
As someone who doesn't have too much experience with this kind of processing, what would be the best place to begin? I've looked into things like the dateutil.parser
module and parsedatetime, but those seem to be for after you've isolated the date.
Because of this, is there any good way to split a string into the date and the extraneous text, e.g.
input: Th 9/19 LAB: Serial encoding (Section 2.2)
output: ['Th 9/19', 'LAB: Serial encoding (Section 2.2)']
or something similar? It seems like this sort of processing is done by applications like Gmail and Apple Mail, but is it possible to implement in Python?
You can use the dateutil module's parse method with the fuzzy option. If you can identify the segments that actually contain the date information, parsing them can be fairly simple with parsedatetime. There are a few things to consider, though: your dates don't have years, and you should pick a locale.
It doesn't always work perfectly when you have extraneous text.
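A minimal sketch of the fuzzy option (the sample sentence is taken from the question; fuzzy_with_tokens=True additionally returns the non-date leftovers):

```python
from dateutil import parser

text = ("There will be another one on December 15th "
        "for those who are unable to make it today.")

# fuzzy_with_tokens=True skips tokens that aren't part of a date and
# returns those skipped fragments alongside the parsed datetime
dt, leftovers = parser.parse(text, fuzzy_with_tokens=True)
print(dt.month, dt.day)  # 12 15 (year defaults to the current year)
```

Note that the leftovers come back as raw string fragments, not a cleanly reassembled sentence, so some post-processing is still needed.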
Honestly, this seems like the kind of problem that would be simple enough to parse for particular formats and pick the most likely out of each sentence. Beyond that, it would be a decent machine learning problem.
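To illustrate the "parse for particular formats" idea: a few hand-written regex patterns cover the explicit formats in the question's examples. This is only a sketch (the patterns and the split_date helper are mine, tuned to the sample formats above, not a general solution):

```python
import re

# Patterns for: optional weekday, then either m/d digits or a month
# name with an ordinal day, then an optional clock time.
DATE_RE = re.compile(
    r"""
    (?:(?:mon|tue?s?|wed(?:nes)?|thu?r?s?|fri|sat(?:ur)?|sun)(?:day)?\s+)?  # optional weekday
    (?:
        \d{1,2}/\d{1,2}                                                     # 10/22, 9/19
        |
        (?:jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)[a-z]*\.?
        \s+\d{1,2}(?:st|nd|rd|th)?                                          # Sept. 15th
    )
    (?:\s+\d{1,2}:\d{2}\s*[ap]m)?                                           # optional time
    """,
    re.IGNORECASE | re.VERBOSE,
)

def split_date(text):
    """Return (date_text, remaining_text); date_text is None if no match."""
    m = DATE_RE.search(text)
    if not m:
        return None, text
    rest = (text[:m.start()] + text[m.end():]).strip()
    return m.group().strip(), rest
```

For example, split_date("Th 9/19 LAB: Serial encoding (Section 2.2)") returns the ('Th 9/19', 'LAB: Serial encoding (Section 2.2)') split the question asks for; the extracted date piece can then be handed to dateutil or parsedatetime.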
Hi, I'm not sure whether the approach below counts as machine learning, but you may try it:
extract all tokens using whitespace as the separator;
process them with rule sets, e.g. lists of weekday names and patterns for the components that form a time, and mark the matches: '%d:%dpm', '%d am', '%d/%d', '%d/ %d' etc. may mean a time. Note that there may be compositions: "12 / 31" is a 3-gram ('12', '/', '31') that should be merged into a single token of interest, "12/31";
"look" at the tokens around the marked ones, like "9:45pm" — e.g. ('Th', '9/19', '9:45pm') is a 3-gram formed from "interesting" tokens — and apply rules to it that may determine its meaning;
do more specific analysis: for example, given 31/12, the fact that 31 > 12 means the format is d/m (or vice versa), but 12/12 can only be disambiguated from the surrounding text or from outside context.
Cheers
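The token-merging step above ("12 / 31" as a 3-gram that should become one token) can be sketched like this (the helper name is my own):

```python
def merge_date_trigrams(tokens):
    """Collapse ('12', '/', '31')-style trigrams into one '12/31' token."""
    merged = []
    i = 0
    while i < len(tokens):
        if (i + 2 < len(tokens)
                and tokens[i].isdigit()
                and tokens[i + 1] == "/"
                and tokens[i + 2].isdigit()):
            merged.append(tokens[i] + "/" + tokens[i + 2])
            i += 3
        else:
            merged.append(tokens[i])
            i += 1
    return merged

print(merge_date_trigrams("due 12 / 31 at noon".split()))
# ['due', '12/31', 'at', 'noon']
```

The same pattern extends to other compositions (e.g. a digit token followed by 'pm') before the marking rules run.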
I was also looking for a solution to this and couldn't find one, so a friend and I built a tool to do it. I thought I would come back and share in case others found it helpful.
datefinder -- find and extract dates inside text
I am surprised that there is no mention of SUTime or dateparser's search_dates method.
Although I have tried other modules like dateutil, datefinder, and natty (I couldn't get duckling to work with Python), these two seem to give the most promising results.
The results from SUTime are more reliable. However, SUTime fails in some basic scenarios: for one of my test texts it gives no result at all, and for another it gives only the month and year. These cases are handled quite well by the search_dates method, which is more aggressive and will return all possible dates related to any words in the input text.
I haven't yet found a way to make search_dates parse the text strictly for dates. If I find one, it will be my first choice over SUTime, and I will update this answer accordingly.