Here we describe tools for constructing statistical models from relational data. Our goal is to learn structured probabilistic models that capture statistical correlations both among the properties of an entity and among the properties of related entities. These models can then be used for a variety of tasks, including knowledge discovery, exploratory data analysis, data summarization, and anomaly detection. Unfortunately, most statistical learning methods work only with "flat" data representations. Thus, to apply these methods, we are forced to convert the data into a flat form, thereby not only losing its compact representation and structure but also potentially introducing statistical skew. These drawbacks severely limit the ability of current statistical methods to model relational databases. We describe two complementary approaches: one suited to making probabilistic statements about individuals, and the other suited to making statements about frequencies in relational data. We describe algorithms for learning and inference in both models, and present experimental results.
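As a minimal illustration of the statistical skew mentioned above (a toy sketch, not drawn from the paper's data or models): flattening relational data by joining tables duplicates the attributes of entities that have many related records, so simple statistics computed over the flat table no longer match the per-entity statistics. The table names and values below are hypothetical.

```python
# Toy relational data: a persons table and a purchases table that
# references persons by id (all names and numbers are illustrative).
persons = {1: {"income": 30}, 2: {"income": 90}}
purchases = [  # (purchase_id, person_id)
    (101, 1), (102, 1), (103, 1),  # person 1 made three purchases
    (104, 2),                      # person 2 made one purchase
]

# Correct per-entity statistic: mean income over distinct persons.
true_mean = sum(p["income"] for p in persons.values()) / len(persons)

# Naive flattening: join purchases with persons, one row per purchase.
# Person 1's income now appears three times in the flat table.
flat_incomes = [persons[pid]["income"] for _, pid in purchases]
flat_mean = sum(flat_incomes) / len(flat_incomes)

print(true_mean)  # 60.0
print(flat_mean)  # 45.0 -- skewed toward the frequent purchaser
```

The flat-table mean is pulled toward entities with many related rows, which is one reason models learned directly on joined data can misestimate entity-level distributions.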