> What sort of problems are well-suited to graph databases?
If you have a graph, then a graph database, with built-in graph algorithms, will be able to run operations on your graph without pulling all the data out to a client. I'm not an expert in PostgreSQL, but I don't think it has any graph algorithms?
My team opted for postgres over a graph db in one of our projects to model a network of devices, primarily due to the pgRouting extension, which implements many graph algorithms inside postgres.
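For illustration, a shortest-path query with pgRouting's pgr_dijkstra looks roughly like this. This is a sketch only; the edges table and its id/source/target/cost columns are hypothetical names, not from our project:

```sql
-- Sketch: shortest path with pgRouting's pgr_dijkstra.
-- The edges(id, source, target, cost) table is an illustrative schema.
SELECT seq, node, edge, cost
FROM pgr_dijkstra(
    'SELECT id, source, target, cost FROM edges',
    1,     -- start vertex id (hypothetical)
    5,     -- end vertex id (hypothetical)
    true   -- treat the graph as directed
);
```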
I've implemented graphs and trees in a relational DB quite a few times. For graphs you can just store an adjacency list. I think most of the large DBs now have tree-traversal extensions to SQL (I know Oracle does). If you need to manipulate the graph on the server, then PL/SQL (or equivalent) can do the job.
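As a minimal sketch of the adjacency-list approach (all table and column names here are illustrative):

```sql
-- Sketch: a graph stored as an adjacency list in plain SQL.
CREATE TABLE nodes (
    id   BIGINT PRIMARY KEY,
    name TEXT NOT NULL
);

CREATE TABLE edges (
    source BIGINT NOT NULL REFERENCES nodes (id),
    target BIGINT NOT NULL REFERENCES nodes (id),
    cost   DOUBLE PRECISION NOT NULL DEFAULT 1,
    PRIMARY KEY (source, target)
);

-- The primary key already indexes (source, target); this covers
-- lookups in the other direction.
CREATE INDEX edges_target_idx ON edges (target);

-- One-hop neighbours of node 42:
SELECT n.*
FROM edges e
JOIN nodes n ON n.id = e.target
WHERE e.source = 42;
```

Neighbour lookups are then ordinary joins, and the two indexes keep traversal in either direction fast.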
The only time I could imagine you'd need a specialist graph database would be if you had a very large number of nodes and some time- or space-intensive algorithm to run over them. Even then you could store the data in a relational DB and use a low-level interface (e.g. C) to write the specialist algorithm.
Yes, you need a NoSQL graph database only for applications that require eventual consistency - and that's a hard thing to get right in any case; an application can do without it if it doesn't have to scale to a very large user base.
In any case, putting the graph nodes and links into SQL tables is a much easier option.
PG has recursive common table expressions, which allow you to traverse a tree or graph represented as an adjacency list in a single query. (No doubt there are other important things that graph DBs do.)
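For instance, finding every node reachable from a start node can be done in one recursive query. A sketch, reusing the illustrative edges table from above:

```sql
-- Sketch: all nodes reachable from node 1 via a recursive CTE.
WITH RECURSIVE reachable(id) AS (
    SELECT CAST(1 AS BIGINT)   -- start node (hypothetical id)
    UNION                      -- UNION (not UNION ALL) drops duplicate rows,
                               -- so the recursion terminates even on cycles
    SELECT e.target
    FROM edges e
    JOIN reachable r ON r.id = e.source
)
SELECT id FROM reachable;
```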
Are the built-in graph algorithms useful these days? I tried Neo4j a few years back and didn't find anything similar to e.g. networkx, which is the level of graph/network algorithms I use regularly. Neo4j made life much harder for me at the time, and I had to migrate that project back to postgres.
How does one know if it's time to migrate to a graph database?
Right now I have a large (6.5GB) database that I have to sometimes work with, and the queries are getting more complex as more features are added to the project.
The data is stored in MySQL, but getting anything interesting out of it requires multiple queries and then sifting through the results afterwards. It all feels cumbersome.
It depends on the structure of the DB. 6.5GB is large if it's attributes; if it's documents, not so much, and if it's an index of documents, that's a different problem again. Access patterns matter too: a 6.5GB database storing OLTP data is large, a history of document modifications is probably small, and historical data for a data-cube sort of app is pretty small, I'd say tiny; I've worked on ones that get into the TBs.