It’s a long shot, but I think if you took DeepPanel (see github) and set up a training dataset of PDF tables instead of comic book panels, it would generate the same kind of masks/heatmaps it generates for panels, but for tables. That gives you an image showing only where the “table lines” are, with the text and other random stuff stripped out, so you can process just the table lines.
Then from there, you could scan the image vertically first, averaging the pixels of each horizontal line of the heatmap to detect where the “lines” are, and cut the table into rows. Then once you have the rows, you do the same horizontally on each row to get the columns/cells.
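That vertical scan is basically a projection profile. Here’s a minimal sketch of how I’d do it, assuming the heatmap has been thresholded into a 2D NumPy array with 1 where the model predicts a table line (the function name `split_rows` and the `thresh`/`min_gap` parameters are my own, not anything from DeepPanel):

```python
import numpy as np

def split_rows(mask, thresh=0.5, min_gap=2):
    """Split a binary line mask (2D array, 1 = table line) into row bands.

    Averages each horizontal scanline; scanlines whose mean exceeds
    `thresh` are treated as table rules, and the content between
    consecutive rules becomes one (y0, y1) row band.
    """
    row_means = mask.mean(axis=1)          # average each horizontal scanline
    is_line = row_means > thresh           # True where a table rule sits
    bands, start = [], None
    for y, line in enumerate(is_line):
        if line and start is not None:     # a rule closes the current band
            if y - start >= min_gap:       # ignore slivers between rules
                bands.append((start, y))
            start = None
        elif not line and start is None:   # first content scanline after a rule
            start = y
    if start is not None and len(is_line) - start >= min_gap:
        bands.append((start, len(is_line)))  # table may not end with a rule
    return bands
```

Running the same function on the transpose of each row band (`split_rows(mask[y0:y1].T)`) then gives you the column bands, i.e. the cells.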
I do this for comic book panels and it works very well, I see no reason why it wouldn’t work for PDF tables.
It’s a lot of work but I’m fairly certain it’d work.
Then once you have the cells, it’s just a matter of OCR (you could even try llava for that; I suspect it might work).
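To make that last step concrete, here’s a small helper of my own (not part of DeepPanel) that slices the cells out of the original page image once you have the row and column bands from the two scans; each crop can then be handed to whatever OCR engine you pick (pytesseract, llava, etc.):

```python
import numpy as np

def extract_cells(image, row_bands, col_bands_per_row):
    """Crop each detected cell out of the page image.

    `row_bands` is a list of (y0, y1) pairs from the vertical scan;
    `col_bands_per_row` holds, for each row band, the (x0, x1) pairs
    found by re-scanning that band horizontally. Returns a list of
    rows, each a list of 2D crops ready for OCR.
    """
    table = []
    for (y0, y1), cols in zip(row_bands, col_bands_per_row):
        table.append([image[y0:y1, x0:x1] for (x0, x1) in cols])
    return table
```

From there, `pytesseract.image_to_string()` on each crop (or a vision model prompt per cell) reconstructs the table text cell by cell.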
Tell me if you need help with this/more details about how I did it for comic books/how I would do it for PDF tables.
I’d really like a version of llava that can process comic/manga pages (read the text, say which character is saying what and doing what, and in what order; pretty much turn the manga into a novel or something like that).
Anyone know of any project that is going in that direction/working on that?