# GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Model
This repository is the official PyTorch implementation of GraphRNN, a graph generative model using an auto-regressive model.
[Jiaxuan You](https://cs.stanford.edu/~jiaxuan/)\*, [Rex Ying](https://cs.stanford.edu/people/rexy/)\*, [Xiang Ren](http://www-bcf.usc.edu/~xiangren/), [William L. Hamilton](https://stanford.edu/~wleif/), [Jure Leskovec](https://cs.stanford.edu/people/jure/index.html), [GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Model](https://arxiv.org/abs/1802.08773) (ICML 2018)
## Installation
Install PyTorch following the instructions on the [official website](https://pytorch.org/). The code has been tested with PyTorch 0.2.0 and 0.4.0.
```bash
conda install pytorch torchvision cuda90 -c pytorch
```
Then install the other dependencies.
```bash
pip install -r requirements.txt
```
## Test run
```bash
python main.py
```
## Code description
For the GraphRNN model:
`main.py` is the main executable file, and specific arguments are set in `args.py`.
`train.py` includes the training iterations and calls `model.py` and `data.py`.
`create_graphs.py` is where we prepare target graph datasets.
For baseline models:
* B-A and E-R models are implemented in `baselines/baseline_simple.py`.
* The [Kronecker graph model](https://cs.stanford.edu/~jure/pubs/kronecker-jmlr10.pdf) is implemented in the SNAP software, which can be found at https://github.com/snap-stanford/snap/tree/master/examples/krongen (for generating Kronecker graphs) and https://github.com/snap-stanford/snap/tree/master/examples/kronfit (for fitting the model parameters).
* MMSB is implemented using the [Edward](http://edwardlib.org/) library, and is located in `baselines`.
* We implemented the DeepGMG model in `main_DeepGMG.py`, following the description in their [paper](https://arxiv.org/abs/1803.03324).
* We implemented the GraphVAE model in `baselines/graphvae`, following the description in their [paper](https://arxiv.org/abs/1802.03480).
Parameter setting:
To adjust the hyperparameters and input arguments of the model, modify the fields of `args.py` accordingly.
For example, `args.cuda` controls which GPU is used to train the model, and `args.graph_type` specifies which dataset is used to train the generative model. See the documentation in `args.py` for more detailed descriptions of all fields.
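As a rough illustration, the same fields could be overridden from a small driver script instead of editing `args.py` by hand. Only `args.cuda` and `args.graph_type` are taken from this README; the `Args` class name and the `'grid'` dataset name are assumptions, so consult `args.py` and `create_graphs.py` for the authoritative options.

```python
# Illustrative sketch only: override a couple of fields programmatically.
# `args.cuda` and `args.graph_type` are the fields mentioned in this README;
# the `Args` class name and the 'grid' dataset name are assumptions -- check
# args.py and create_graphs.py for the real names and supported values.
from args import Args

args = Args()
args.cuda = 0              # index of the GPU used for training
args.graph_type = 'grid'   # which dataset the generative model is trained on
print(args.cuda, args.graph_type)
```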
## Outputs
There are several different types of outputs, each saved into a different directory under a common path prefix, which is set by `args.dir_input`. Suppose this field is set to `./`:
* `./graphs` contains the pickle files of the training, test, and generated graphs. Each pickle file contains a list of networkx objects (see the loading sketch after this list).
* `./eval_results` contains the evaluated MMD scores in txt format.
* `./model_save` stores the model checkpoints.
* `./nll` saves the log-likelihoods of generated graphs as sequences.
* `./figures` is used to save visualizations (see the Visualization of graphs section).
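For instance, a saved graph pickle can be inspected directly. This is a minimal sketch; the file name below is a placeholder, since the actual names under `./graphs` depend on the model, dataset, and training epoch.

```python
# Minimal sketch of inspecting a saved graph pickle. The file name is a
# placeholder; actual file names under ./graphs encode the model, dataset,
# and epoch.
import pickle
import networkx as nx

with open('./graphs/generated_graphs_example.dat', 'rb') as f:  # placeholder name
    graph_list = pickle.load(f)  # expected: a list of networkx graphs

print(len(graph_list), 'graphs;',
      'first graph has', graph_list[0].number_of_nodes(), 'nodes')
```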
## Evaluation
The evaluation is done in `evaluate.py`, where the user can choose which settings to evaluate.
To evaluate how close the generated graphs are to the ground truth set, we use MMD (maximum mean discrepancy) to calculate the divergence between two _sets of distributions_ derived from the ground truth and generated graphs.
Two types of distributions are compared: the degree distribution and the clustering coefficient distribution. Both are implemented in `eval/stats.py`, using the Python multiprocessing module. One can easily extend the evaluation to compute MMD for other graph statistics.
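The following is a minimal, self-contained sketch of the degree-distribution MMD just described. The Gaussian kernel over L2 distances between normalized degree histograms is an illustrative simplification, not the exact kernel used in `eval/stats.py`.

```python
# Illustrative MMD between two sets of graphs based on normalized degree
# histograms. The real kernels and statistics live in eval/stats.py; the
# Gaussian kernel over L2 distance used here is an assumption made purely
# for illustration.
import numpy as np
import networkx as nx

def degree_histogram(g, max_degree=50):
    """Normalized degree histogram of a graph, truncated at max_degree."""
    degrees = [min(d, max_degree) for d in dict(g.degree()).values()]
    hist = np.bincount(degrees, minlength=max_degree + 1).astype(float)
    return hist / max(hist.sum(), 1.0)

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def mmd_squared(graphs_a, graphs_b, sigma=1.0):
    """(Biased) MMD^2 estimate between two sets of degree histograms."""
    xs = [degree_histogram(g) for g in graphs_a]
    ys = [degree_histogram(g) for g in graphs_b]
    k_xx = np.mean([gaussian_kernel(a, b, sigma) for a in xs for b in xs])
    k_yy = np.mean([gaussian_kernel(a, b, sigma) for a in ys for b in ys])
    k_xy = np.mean([gaussian_kernel(a, b, sigma) for a in xs for b in ys])
    return k_xx + k_yy - 2.0 * k_xy

# Sanity check: a set of sparse random graphs should be closer to itself
# than to a set of much denser random graphs.
sparse = [nx.gnp_random_graph(50, 0.1) for _ in range(16)]
dense = [nx.gnp_random_graph(50, 0.3) for _ in range(16)]
print(mmd_squared(sparse, sparse), mmd_squared(sparse, dense))
```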
We also compute the orbit counts for each graph, represented as a high-dimensional data point, and compute the MMD between the two _sets of sampled points_ using ORCA (see http://www.biolab.si/supp/orca/orca.html), located at `eval/orca`.
One first needs to compile ORCA by running
```bash
g++ -O2 -std=c++11 -o orca orca.cpp
```
in the directory `eval/orca` (the binary already included in the repository works on Ubuntu).
To evaluate, run
```bash
python evaluate.py
```
Arguments specific to evaluation are specified in the class `evaluate.Args_evaluate`. Note that the field `Args_evaluate.dataset_name_all` must only contain datasets that have already been trained, by setting `args.graph_type` to each of those datasets and running `python main.py`.
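As a hedged sketch, restricting the evaluation to a single trained dataset might look like the following. Only the `dataset_name_all` field is taken from this README; `'grid'` is a placeholder dataset name, and the constructor usage should be checked against `evaluate.py`.

```python
# Sketch only: limit evaluation to datasets that have already been trained.
# `Args_evaluate.dataset_name_all` is the field named in this README; 'grid'
# is a placeholder -- use whichever args.graph_type values you trained with.
from evaluate import Args_evaluate

eval_args = Args_evaluate()
eval_args.dataset_name_all = ['grid']
print(eval_args.dataset_name_all)
```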
## Visualization of graphs
The training, test, and generated graphs are saved in `graphs/`.
One can visualize the generated graphs using the function `utils.load_graph_list`, which loads the list of graphs from the pickle file, and `utils.draw_graph_list`, which plots the graphs using networkx.
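A minimal sketch of doing this by hand with plain networkx and matplotlib, bypassing the repo utilities (whose exact signatures may differ); the pickle file name is a placeholder.

```python
# Minimal sketch: load a saved graph list and plot the first few graphs with
# plain networkx/matplotlib. The file name is a placeholder; the repo's own
# utils.load_graph_list / utils.draw_graph_list helpers cover the same steps.
import pickle
import networkx as nx
import matplotlib.pyplot as plt

with open('graphs/generated_graphs_example.dat', 'rb') as f:  # placeholder name
    graph_list = pickle.load(f)

fig, axes = plt.subplots(1, 4, figsize=(16, 4))
for ax, g in zip(axes, graph_list[:4]):
    nx.draw(g, ax=ax, node_size=20, with_labels=False)
plt.savefig('figures/generated_examples.png')
```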
## Misc
Jesse Bettencourt and Harris Chan have made a great [slide deck](https://duvenaud.github.io/learn-discrete/slides/graphrnn.pdf) introducing GraphRNN in Prof. David Duvenaud’s seminar course [Learning Discrete Latent Structure](https://duvenaud.github.io/learn-discrete/).