Welcome to the documentation for Offenes Parlament

An open-data framework for the public data of the Austrian Parliament

Most of this documentation is technical in nature and sketches out implementation details, design patterns, and installation/running instructions.

Installation/Rollout Instructions

Installation Notes

This section aims to briefly sketch out the process of installing the app for development.

Installation instructions with Vagrant

The suggested method uses Vagrant, a virtualization tool aimed at developers.

Setup
  1. Clone the GitHub repository (duh)
  2. Navigate into the project dir cd OffenesParlament
  3. Set up and run the Vagrant VM with vagrant up. All requirements will be installed automatically inside the VM, which may take a few minutes the first time.
  4. The script might ask you for your password, as it will add offenesparlament.vm (pointing to this VM) to your hosts file. It also automatically creates a Django superuser admin with password admin.
  5. Log in to the running VM with vagrant ssh
  6. For the initial scraping instructions see below
  7. Run the server inside the VM (0.0.0.0 lets the server respond to requests from outside the VM, i.e. your physical machine, where you probably run your browser)
cd offenesparlament
python manage.py runserver 0.0.0.0:8000
  8. If you work on client files that have to be compiled (CSS, JS), you have to run grunt as well. At the moment we have the tasks dev and reloading. dev watches the sources and regenerates files when they change (remember that sources also change when you do a git pull, and generated client files aren't committed to git). reloading does the same and additionally uses Browsersync to reload your browser when files change.
cd /vagrant
grunt dev
  9. To exit and shut down the VM, run
exit
vagrant halt

Resetting the database

In case you need to reset the database (delete all migrations, flush the db content, recreate all objects etc.), run these commands in the django project folder ‘offenesparlament’:

bin/clear_db.sh

Creating a Model-Diagram

It’s possible to view the current database model residing in the op_scraper app by calling:

bin/graph_models.sh

A png-image will be generated as ignore/models.png.

Initial scraping

There are currently five available scrapers, which should initially be run in this order:

  1. llp (legislative periods)
  2. persons (for instance Rudolf Anschober <http://www.parlament.gv.at/WWER/PAD_00024/index.shtml>)
  3. administration (for instance, ‘Faymann II’ and all the Persons having a mandate for that administration)
  4. pre_laws (for instance Buchhaltungsagenturgesetz, Änderung (513/ME))
  5. laws_initiatives (for instance ÖBIB-Gesetz 2015 (458 d.B.))

To run a scraper, use the following command:

python manage.py scrape crawl <scraper_name>

for instance:

python manage.py scrape crawl persons

The laws_initiatives scraper also has an additional parameter to define which legislative period to scan; per default, it scrapes the periods from XX to XXV. This can be overridden like this:

python manage.py scrape crawl -a llp=21 laws_initiatives

to only scrape that period. Careful though: scraping of periods before the 20th legislative period is not possible as of yet (since there are no machine-readable documents available).

ElasticSearch and Re-Indexing

For now, reindexing (or updating the index, for that matter), is only done manually. To have all data indexed, just run:

python manage.py rebuild_index

for a full rebuild (wipes the indices first), or:

python manage.py update_index

to perform a simple update. For this to succeed, make sure ElasticSearch is up and running.

Technical Documentation

General Information

This section outlines the general setup of OffenesParlament.at.

Structure - What is Where?

OffenesParlament.at is a (set of) Django applications. It’s roughly divided into two parts: the Scraping part, which takes care of data aggregation and parsing of the website of the Austrian Parliament, and the Presentation part, which presents the collected data in the form of a searchable web application.

The Scraping part can be found in the subfolder offenesparlament/op_scraper, whereas the Presentation part is largely gathered in the folder offenesparlament/offenesparlament.

The search engine parts transcend each of those projects, with the search views being located in /offenesparlament/offenesparlament/search_views.py, but the search_indexes being located in /offenesparlament/op_scraper/search_indexes.py.

In the future, the Email-Subscription service will also be located in both parts, given that we will have to offer subscription logic in the webapp itself (with a set of views to facilitate that), but also trigger the sending of those emails via the scraper upon changes to the database.

Frontend Code

All sources for the frontend code (JS and CSS) are located in client/ and split between scripts and styles. We use CoffeeScript and the React framework for client code. To generate JS and CSS from the sources we use grunt (Gruntfile.coffee is in the root dir).

All generated files are put in offenesparlament/offenesparlament/static/ by grunt.

Scraping: Scrapy Spiders and Extractors

This section describes the scraping setup and processes.

Structure

The scraper is located in the subfolder /offenesparlament/op_scraper. It contains the Django models (cf. /offenesparlament/op_scraper/models.py), some admin views and admin dashboard adaptations, as well as the Austrian Parliament scraper itself, situated at /offenesparlament/op_scraper/scraper/parlament.

A scrapy scraper consists of a set of spiders, each a single process capable of scanning a website, parsing its pages and injecting the extracted data into a database. Currently, the following spiders exist:

  • laws_initiatives: Scrapes the Laws and Government Initiatives as found on this page
  • pre_laws: Scrapes laws that are still in the pre-parliamentary process, as shown on this list
  • llp: Scrapes the list of available legislative periods
  • persons: Scans ‘Parlamentarier’ as found here
  • administrations: A secondary spider that also scans Persons, this time focused on their mandates as part of specific administrations, as shown in here
  • statements: Scrapes Debates and DebateStatements. Requires llp and persons for lookup.
  • inquiries: Scrapes Anfragen (inquiries) and Beantwortungen (responses). Requires persons.

Each of those spiders inherits from BaseSpider (cf. /offenesparlament/op_scraper/scraper/parlament/spiders/__init__.py), which offers some generic methods to be used by different spiders.

Besides the spiders themselves, which handle getting the response from the subsite of parlament.gv.at and creating the django objects based on the scraped data, the Extractors (to be found at /offenesparlament/op_scraper/scraper/parlament/resources/extractors) do the actual heavy lifting of translating the raw html data into meaningful, structured data (mostly in the form of dictionaries and lists) by using XPATH expressions.

Spiders

Spiders are the managing part of the scraping process. At the bare minimum, a spider consists of a constructor (the __init__ method), which is responsible for populating the self.start_urls list with all the web addresses to be scanned, as well as a parse method, which gets called with the response from each of the entries in the self.start_urls list. Furthermore, each spider must have a member variable name set, which identifies it for the command line calls.

The following is a simple example or code skeleton of a spider:

# -*- coding: utf-8 -*-
from parlament.settings import BASE_HOST
from parlament.spiders import BaseSpider
from parlament.resources.extractors.example import EXAMPLE_EXTRACTOR
from ansicolor import green

from op_scraper.models import ExampleObject


class ExampleSpider(BaseSpider):
    BASE_URL = "{}/{}".format(BASE_HOST, "WWER/PARL/")

    name = "example"

    def __init__(self, **kw):
        super(ExampleSpider, self).__init__(**kw)

        self.start_urls = [self.BASE_URL]

    def parse(self, response):

        data_sets = EXAMPLE_EXTRACTOR.xt(response)

        for data_set in data_sets:
            item, created = ExampleObject.objects.update_or_create(
                name=data_set['name'],
                defaults=data_set
            )
            item.save()

            if created:
                self.logger.info(u"Created ExampleObject {}".format(
                    green(u'[{}]'.format(data_set['name']))))
            else:
                self.logger.info(u"Updated Legislative Period {}".format(
                    green(u"[{}]".format(data_set['name']))
                ))

Not all database/django objects can be fully extracted from a single page. For instance, the Person objects need to be discovered through one of the above-mentioned lists, but their details can only be extracted from a secondary person detail page. To accommodate this, scrapy's callback functions can be used, as in this person spider skeleton:

def parse(self, response):

    # Parse person list
    # [...]

    callback_requests = []
    for p in person_list:
        # Create Detail Page request
        req = scrapy.Request(person_detail_page_url,
                             callback=self.parse_person_detail)
        req.meta['person'] = {
            'reversed_name': p['reversed_name'],
            'source_link': p['source_link'],
            'parl_id': parl_id
        }
        callback_requests.append(req)

    return callback_requests

def parse_person_detail(self, response):

    person = response.meta['person']

    # Parse Person detail page
    # [...]

In the above example, the spider will start making secondary requests to retrieve the detail pages and call parse_person_detail with the responses. As shown above, the request for the secondary page contains a member variable meta that can be used to transfer already created data to the secondary response, so parsing can continue with the same person and provide some continuity.

Saving/Updating the models

Currently, the spiders do not need to take care of versioning the changes they scrape; since a page has to be requested and scraped anyway to determine whether anything changed, the spiders simply update existing objects or create new ones where necessary. Since the OffenesParlament.at app also employs django-reversion to version changes to the database, changes to objects can be traced via revisions rather than during the scraping process itself, although this is not implemented yet because the email-subscription service hasn't been realized yet.
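
As an illustration of how such versioning ties in with the update_or_create pattern shown above, here is a minimal sketch (assuming ExampleObject is registered with django-reversion; this is not the actual project code):

import reversion

from op_scraper.models import ExampleObject


def save_scraped(data_set):
    # Wrap the update in a revision so django-reversion records a new
    # version of the object whenever a scrape changes it.
    with reversion.create_revision():
        item, created = ExampleObject.objects.update_or_create(
            name=data_set['name'],
            defaults=data_set
        )
        reversion.set_comment(u"Updated by scraper run")
    return item, created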

Keyword parameters

To specify additional (optional) keyword parameters for the spiders, the __init__ method accepts a kw parameter, which contains a dictionary of keys and values supplied from the commandline. For instance, the laws_initiatives spider accepts an additional llp parameter:

python manage.py scrape crawl -a llp=21 laws_initiatives

In the spider itself, this parameter can be extracted like this:

def __init__(self, **kw):
    super(LawsInitiativesSpider, self).__init__(**kw)
    if 'llp' in kw:
        try:
            self.LLP = [int(kw['llp'])]
        except:
            pass
    # [...]
Extractors

Extractors take over the heavy lifting - by translating the raw html source code they are handed into organized data, ready for insertion into the database.

The simplest extractor just inherits from parlament.resources.extractors.SingleExtractor, which provides an xt method and utilizes a simple class variable containing the XPath expression to extract, expecting it to evaluate to exactly one result. For instance, the title of a law detail page might be extracted by the following class:

from parlament.resources.extractors import SingleExtractor

class LAW:
    class TITLE(SingleExtractor):
        XPATH = '//*[@id="inhalt"]/text()'

Similarly, to simply extract a list of items based on an XPath expression, the following code could be used (assuming MultiExtractor lives in the same module as SingleExtractor):

from parlament.resources.extractors import MultiExtractor

class LAW:
    class KEYWORDS(MultiExtractor):
        XPATH = '//*[@id="schlagwortBox"]/ul//li/a/text()'

In reality, many of the extractors overwrite the xt method to implement more complex extractions.
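
As a purely hypothetical sketch of what such an override can look like (the class name, XPath expressions and field names below are illustrative, not taken from the actual codebase, and whether xt is a class or instance method may differ in the real base class):

from parlament.resources.extractors import MultiExtractor


class LAW:

    class DOCUMENTS(MultiExtractor):
        XPATH = '//*[@id="content"]//li[@class="dokument"]'

        @classmethod
        def xt(cls, response):
            # Post-process the raw selector results into a list of dicts
            # instead of returning plain strings.
            documents = []
            for raw in response.xpath(cls.XPATH):
                title = raw.xpath('.//a/text()').extract()
                link = raw.xpath('.//a/@href').extract()
                if title and link:
                    documents.append({
                        'title': title[0].strip(),
                        'link': link[0],
                    })
            return documents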

Search Provider: Haystack, Elasticsearch

This section describes the search and indexing implementation.

Basics

The current application relies on Django Haystack, a high-level framework brokering between Django and a search backend. This search backend is currently ElasticSearch, but it could be swapped for Apache Solr, should the need arise.
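
For reference, a typical Haystack/Elasticsearch settings block looks like the following sketch (the URL and index name are examples, not necessarily the project's actual configuration):

HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',
        'INDEX_NAME': 'offenesparlament',
    },
}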

Re-Indexing

For now, reindexing (or updating the index, for that matter), is only done manually. To have all data indexed, just run:

python manage.py rebuild_index

for a full rebuild (wipes the indices first), or:

python manage.py update_index

to perform a simple update. For this to succeed, make sure ElasticSearch is up and running.

SearchViews

Searching is split between different contexts, represented by different Django views (cf. offenesparlament/search_views.py):

  1. Main Search (all indices), at /search
  2. Persons, at personen/search
  3. Laws, at gesetze/search
  4. Debates, at debatten/search

Each view determines the available facets - for instance, the Person view returns, among others, faceting information for the person's party in its results.

The views all inherit from JsonSearchView, an adaptation of Haystack’s SearchView that, instead of rendering a template, returns JSON data to be processed by the frontend.
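
The following is a rough sketch of that idea (not the actual implementation): Haystack's SearchView builds the result set, and the overridden create_response serializes it as JSON instead of rendering a template.

import json

from django.http import HttpResponse
from haystack.views import SearchView


class JsonSearchView(SearchView):

    def create_response(self):
        # Serialize the stored fields of each result instead of rendering
        # an HTML template; default=str handles dates and similar values.
        results = [result.get_stored_fields() for result in self.results]
        payload = {
            'result': results,
            'facets': self.results.facet_counts(),
        }
        return HttpResponse(json.dumps(payload, default=str),
                            content_type='application/json')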

Each accepts a query parameter, q, and a list of facet filters, named like the facets available for that view:

Main Search
  • No Facets
Persons
  • party: A person’s party, for instance, SPÖ
  • birthplace: A person's birthplace
  • deathplace: A person's deathplace
  • occupation: A person's occupation
  • llps: The legislative period(s) a person was/is active during
  • ts: The timestamp the entry was last updated (from the parliament site)
Laws
  • category: A law’s category
  • keywords: A law’s assigned keywords
  • llp: The legislative period of a law
  • ts: The timestamp the entry was last updated (from the parliament site)
Debates
  • llp: The legislative period a debate fell into
  • debate_type: either NR or BR (Nationalrat/Bundesrat)
  • date: The date the debate happened

Each facet filter requires that every resulting entry contains the given term, but it does not specify an exact search; for instance, filtering on fields that may contain multiple entries, like a person's active legislative periods, will return all persons that have the period in question in their list, not just persons whose list contains only that period.

The query parameter searches in the index’s text field - an aggregate field containing most of the other fields to allow more specific searches.

All parameters have to be supplied as GET-Parameters. A typical request might look like this:

http://offenesparlament.vm:8000/personen/search?q=Franz&llps=XXIV&party=SP%C3%96

and would return the following JSON data:

{
   "facets":{
      "fields":{
         "party":[
            [
               "SP\u00d6",
               2
            ]
         ],
         "birthplace":[
            [
               " Wien",
               1
            ],
            [
               " Wels",
               1
            ]
         ],
         "llps":[
            [
               "XXIV",
               2
            ],
            [
               "XXIII",
               2
            ],
            [
               "XXV",
               1
            ],
            [
               "XXII",
               1
            ],
            [
               "XXI",
               1
            ],
            [
               "XX",
               1
            ]
         ],
         "deathplace":[
            [
               "",
               2
            ]
         ],
         "occupation":[
            [
               " Kaufmann",
               1
            ],
            [
               " Elektromechaniker",
               1
            ]
         ]
      },
      "dates":{

      },
      "queries":{

      }
   },
   "result":[
      {
         "birthplace":" Wien",
         "party_exact":"SP\u00d6",
         "llps_exact":[
            "XXIV",
            "XXIII",
            "XXII",
            "XXI",
            "XX"
         ],
         "text":"PAD_03599\nFranz Riepl\nRiepl Franz\n Wien\n\n Elektromechaniker",
         "birthdate":"1949-03-23T00:00:00",
         "llps":[
            "XXIV",
            "XXIII",
            "XXII",
            "XXI",
            "XX"
         ],
         "deathdate":null,
         "deathplace":"",
         "full_name":"Franz Riepl",
         "occupation_exact":" Elektromechaniker",
         "party":"SP\u00d6",
         "deathplace_exact":"",
         "birthplace_exact":" Wien",
         "reversed_name":"Riepl Franz",
         "source_link":"http://www.parlament.gv.at/WWER/PAD_03599/index.shtml",
         "occupation":" Elektromechaniker"
      },
      {
         "birthplace":" Wels",
         "party_exact":"SP\u00d6",
         "llps_exact":[
            "XXIV",
            "XXIII",
            "XXV"
         ],
         "text":"PAD_35495\nFranz Kirchgatterer\nKirchgatterer Franz\n Wels\n\n Kaufmann",
         "birthdate":"1953-09-24T00:00:00",
         "llps":[
            "XXIV",
            "XXIII",
            "XXV"
         ],
         "deathdate":null,
         "deathplace":"",
         "full_name":"Franz Kirchgatterer",
         "occupation_exact":" Kaufmann",
         "party":"SP\u00d6",
         "deathplace_exact":"",
         "birthplace_exact":" Wels",
         "reversed_name":"Kirchgatterer Franz",
         "source_link":"http://www.parlament.gv.at/WWER/PAD_35495/index.shtml",
         "occupation":" Kaufmann"
      }
   ]
}
Paging

In addition to the query arguments for filtering and faceting, the search views also automatically limit the results to allow for smooth paging. Two parameters govern this behaviour: offset and limit.

Offset returns search results from the given integer on - so, for a search that produced 100 results, an offset value of ‘20’ would only return results 20 to 100. If no offset value is given, the view assumes ‘0’ and returns results starting with the first one.

Limit restricts the number of results per page; with the above-mentioned example and a limit value of ‘50’, the query would only return results 20 through 70. If no limit is given, the view assumes a default of 50 results. This can be changed in the offenesparlament/constants.py file.
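
For example, a request like the following (hypothetical query values) would apply both parameters to a person search:

http://offenesparlament.vm:8000/personen/search?q=Franz&offset=20&limit=50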

Fieldsets

Given the amount of data in the index (particularly the debate statements), returning the entirety of an object including all of its fields is not performant enough for long lists of results. To combat that issue, the concept of predefined fieldsets has been introduced. Each index class now contains a FIELDSETS dictionary which defines the available fieldsets. The debate class, for instance, contains the following fieldsets:

FIELDSETS = {
    'all': ['text', 'date', 'title', 'debate_type', 'protocol_url', 'detail_url', 'nr', 'llp', 'statements'],
    'list': ['text', 'date', 'title', 'debate_type', 'protocol_url', 'detail_url', 'nr', 'llp'],
}

The dictionary key describes the fieldset, and the value consists of a list of all fields that should be returned when requesting that fieldset.

By default, the search view only returns the ‘list’ fieldset; if a search request must return all available data, the fieldset parameter allows querying for a specific fieldset:

http://offenesparlament.vm:8000/personen/search?parl_id=PAD_65677&fieldset=all

Indices

WARNING: Currently, only three separate indices exist: one for the Laws, one for the Persons and one for the Debates. These are subject to heavy development in the future and will still change a lot, so this documentation will remain mostly blank for now.

The indices are defined in op_scraper/search_indexes.py. Each index contains a text field, which aggregates the objects’ data into a single, text-based field, which Haystack uses as the default search field. The exact makeup of this field is defined in templates, located at offenesparlament/templates/search/indexes/op_scraper/*_text.html.
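
As a rough sketch of what such an index class looks like (the field names and template path are illustrative; the actual definitions live in op_scraper/search_indexes.py):

from haystack import indexes

from op_scraper.models import Person


class PersonIndex(indexes.SearchIndex, indexes.Indexable):
    # The aggregate text field is rendered from a template.
    text = indexes.CharField(
        document=True,
        use_template=True,
        template_name='search/indexes/op_scraper/person_text.html')
    full_name = indexes.CharField(model_attr='full_name')
    party = indexes.CharField(model_attr='party', faceted=True, null=True)

    def get_model(self):
        return Person

    def index_queryset(self, using=None):
        return self.get_model().objects.all()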

Django Admin Interface

The Django Administration Interface has been extended with a few vital functions to make maintenance easier.

Manually trigger scraping

Besides the usual CRUD-Interfaces, which should only be used for debugging purposes, given that the site’s entire data should be automatically scraped from the parliament website, a new block called Scraping Management has been added, which allows manually triggering one of the following spiders:

  1. Legislative Periods
  2. Persons
  3. Administrations
  4. Pre-Laws
  5. Laws

The order of the scrapers above represents their dependencies on each other; for instance, scanning laws includes votes and speeches by Persons and relies on the Person in question having been scraped before. To be safe, the order above should be maintained in all scraping processes.

Behind the scenes, the scraping view offenesparlament.op_scraper.admin_views.trigger_scrape calls the celery task offenesparlament.op_scraper.tasks.scrape, which in turn finds and executes the requested scraper, but wraps this in a django-reversion block to create new revisions of all affected database objects.
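
Condensed to its core idea, the task looks roughly like this sketch (run_spider stands in for the actual spider lookup and execution, which is not reproduced here):

import reversion
from celery import shared_task


@shared_task
def scrape(spider_name, **kwargs):
    # Run the requested spider inside a revision block so that every
    # scrape produces a new django-reversion revision of changed objects.
    with reversion.create_revision():
        reversion.set_comment(u"Scrape: {}".format(spider_name))
        run_spider(spider_name, **kwargs)  # hypothetical helper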

Import-Export

To allow import/export, the django-import-export module is used, overwriting the templates/admin/changelist.html admin template to be compatible with django-reversion.

Frontend

This section describes the frontend code and how to build it.

Building

We use grunt to build our frontend code. It is generated from the sources in client/ and is output to offenesparlament/offenesparlament/static/.

We commit all generated files to git.

Grunt is already installed in the vagrant VM, so you should be able to run the build task from the VM right away in the dir /vagrant:

grunt dev

This task watches all source files and rebuilds if necessary.

To use BrowserSync and have the browser reload every time frontend files change, run:

grunt reloading
Grunt too slow?

If grunt is too slow running inside the VM (probably due to file-watching on the host system), you'll have to install the frontend toolchain (Node.js with npm, and the grunt command-line tool) on your computer.

Then you can run grunt tasks on your computer from the project dir OffenesParlament (where the Gruntfile.coffee is located).

Subscriptions via Email

This section describes the subscription structure and implementation.

Basics

An important feature of OffenesParlament.at is the ability to subscribe to certain pages for email updates if and when they change through a scrape. This includes search views as well as detail views. To maintain a maximum of privacy for the user while also keeping the user's data and email addresses safe and spam-free, a verification process is used that provides one-time links for verifying subscriptions as well as for viewing a list of subscriptions for a given user. The following functionalities have been implemented:

  • Subscription of a page via web form
  • Verification email sending, Verification view
  • Subscription List view
  • Subscription List URL reminder email sending
  • Subscription Deletion

The following things are in beta or have yet to be implemented:

  • Frontend Design for views
  • Actual “Changes” email based on search results
  • Frontend/JS code to create proper subscription URLs pointing towards a JSON search result from ElasticSearch
  • Subscription Titles (need to be autogenerated based on search results)

Views and URL-Schemes

The following are the URLs for the new subscription service:

URL                          Description
/verify/<email>/<key>        Verify the given email address for the subscription with the given hash key
/subscribe                   Subscribe an email address to a given URL (email and URL must be in the POST data)
/unsubscribe/<email>/<key>   Unsubscribe the email address from the subscription with the given hash key
/list/<email>/<key>          List all subscriptions for the given email with the given verification key (the key belongs to the user/email itself, not to any single subscription)
/list/<email>                Re-send the subscription list email, containing the hash key for the list view
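
A sketch of how these URLs could be wired up in Django (the view functions and module name are illustrative, not the project's actual urls.py):

from django.conf.urls import url

from offenesparlament import subscription_views as views

urlpatterns = [
    url(r'^verify/(?P<email>[^/]+)/(?P<key>\w+)$', views.verify),
    url(r'^subscribe$', views.subscribe),
    url(r'^unsubscribe/(?P<email>[^/]+)/(?P<key>\w+)$', views.unsubscribe),
    url(r'^list/(?P<email>[^/]+)/(?P<key>\w+)$', views.list_subscriptions),
    url(r'^list/(?P<email>[^/]+)$', views.resend_list_email),
]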

Email Templates

The emails being sent are saved as *.email templates in offenesparlament/templates/subscription/emails; they follow the normal Django templating language. Variables can be defined and accessed via normal {{ var_name }} statements.

To facilitate the easy use of templates, an Email sending controller has been implemented in offenesparlament.constants.EmailController. It serves as a base class for the actual Email-Template-Classes (cf. for example offenesparlament.constants.EMAIL.VERIFY_SUBSCRIPTION). It provides an easy shortcut to Django’s Email-Sending Module, automatically rendering the assigned template files and sending the email to the requested recipient.
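
A simplified sketch of that pattern (class names, template paths and the sender address below are illustrative, not the actual implementation):

from django.core.mail import send_mail
from django.template.loader import render_to_string


class EmailController(object):
    template_file = None
    subject = None

    @classmethod
    def send(cls, recipient, context=None):
        # Render the assigned *.email template and send it via Django's
        # email module.
        body = render_to_string(cls.template_file, context or {})
        send_mail(cls.subject, body,
                  'noreply@offenesparlament.at',  # illustrative sender
                  [recipient])


class VERIFY_SUBSCRIPTION(EmailController):
    template_file = 'subscription/emails/verify_subscription.email'
    subject = u'Please verify your subscription'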

Existing Scrapers

Inquiries

General notes about the structure of the inquiries

  • Inquiries can be submitted by members of the national assembly or federal assembly

  • Inquiries can be directed at members of the Austrian government, the president of the court of auditors, the president of the national assembly or the chairmen of parliamentary committees

  • Generally, inquiries are submitted in written form, with the exception of oral inquiries to the Austrian government

  • Urgent inquiries (dringliche Anfragen) are oral inquiries with a written statement that is presented before the corresponding respondent and are structured in a special way to account for the discussion that is planned after the stakeholders have made their cases.

  • While the structure of the data in the database is completely flat, there is some structure inherent to the parliamentary inquiries that can be exploited in post-processing or front-end:

  • There are multiple types of inquiries, each type has a name and a shorthand. Shorthands are used for referencing and identifying certain inquiries in general communication and URLs.

Type                                                         Shorthand
Schriftliche Anfragen an die Bundesregierung                 J
Schriftliche Anfragen an die Bundesregierung (Bundesrat)     J-BR
Mündliche Anfragen an die Bundesregierung                    M
Mündliche Anfragen (Bundesrat)                               M-BR
Dokumentenanfragen betr. EU an die Bundesregierung           JEU
Schriftliche Anfragen an Ausschussvorsitzende                JPR
Schriftliche Anfragen an PräsidentInnen des Nationalrats     JPR
Schriftliche Anfragen an RechnungshofpräsidentInnen          J

Scraper Structure

The scraper starts out using the RSS feed of the inquiry overview site.

The function get_urls in inquiries.py iterates over NR/BR and all legislative periods (LLP) to collect the links that will be parsed. The code includes a debugging variable containing assorted links for testing changes on all types of inquiries. get_urls will take approx. 2-3 minutes to record all available links to inquiries, and outputs the number of inquiries to be scraped in the terminal. Full scraping and parsing of all inquiries will take about 1-3 hours, depending on your internet connection and database speed.

In the parser, information is extracted and written into an Inquiry object, and foreign keys are attached. If the inquiry is urgent (dringliche Anfrage), a more elaborate steps parser, almost identical to the one used for laws, is called. When a link to a written response can be found in the inquiry's history, that link is recorded and handed over to the callback function when the parser terminates. At the end of the function, the Inquiry object is saved and the output indicates which inquiry was created or updated. The functions parse_keywords, parse_docs, parse_steps and parse_parliamentary_steps are self-explanatory helper functions; they are parsers pulled out of the main parser function for clarity.

The callback function for inquiry responses is very similar to the main parser function. 90% of it is merely a simplification of what the parser function does, because many restrictions/edge cases don't apply to the responses of inquiries (e.g. urgent inquiries, where the response is always oral). At the end of the inquiry response parser, the created or updated response object is attached to the original inquiry and the inquiry item is again saved to the database.

Model structure

The Inquiry and InquiryResponse models are subclasses of the Law model, inheriting all of its properties and augmented by a few that are unique to inquiries, specifically senders/receivers and responses.
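
A condensed sketch of that relationship (the field names and types are illustrative; the real definitions are in op_scraper/models.py):

from django.db import models

from op_scraper.models import Law, Person


class Inquiry(Law):
    # Senders and receiver of the inquiry, plus an optional response.
    sender = models.ManyToManyField(Person, related_name='inquiries_sent')
    receiver = models.ForeignKey(
        Person, related_name='inquiries_received',
        null=True, blank=True, on_delete=models.SET_NULL)
    response = models.ForeignKey(
        'InquiryResponse', null=True, blank=True,
        on_delete=models.SET_NULL)


class InquiryResponse(Law):
    sender = models.ForeignKey(
        Person, related_name='inquiry_responses_sent',
        null=True, blank=True, on_delete=models.SET_NULL)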

Example Inquiry

Design

Fonts

The font used throughout the design is called Source Sans Pro. It’s the first open source font designed by Adobe. It was published under the SIL Open Font License. The source files can be downloaded via Sourceforge and Github. (Source: Wikipedia)

The font can also be embedded with Google Fonts.

At the beginning, the Roboto font by Google was considered as an alternative. Both fonts – Source Sans Pro and Roboto – have very complete families, from very light to bold font styles.

We went for Source Sans Pro because of its more distinctive character while being extremely readable. This font choice gives the website a modern look.

We use the font weights Regular and Bold. For emphasizing text, the italics provided with the font can be used. Please take care to use the real italics, not just the regular font slanted with <em>.

Colors

The design comes with a colorscheme that is both colorful and soft. We often have long pages with a lot of text and tables, so the colors needed to work for structuring the site. It should look friendly, not too technical – we want to attract users with all kinds of backgrounds, and therefore need the site to look human and welcoming. And last but not least, the colors needed to reflect the variety of the Austrian parties.

The main color (used in the navigation bar throughout the site) is blue. It is also used for the background of the homepage in a slightly lighter hue.

An overview of the colors used on the site (in RGB; the percentages refer to the original color):

  • Text color black 0/0/0
  • Link color blue 0/111/213
  • Link color blue, hover in the header and footer (50%) 127/183/234
  • Blue Header Background 30/69/99
  • Grey Footer (85%) 38/38/38
  • Light Red 245/95/87
  • Red 236/55/51
  • Red 50% for icons in the search bar 245/155/153
  • Grey 50% for icons in the search bar, grey text, etc. 127/127/127
  • Grey text footer (25%) 191/191/191
  • Blue background homepage (95%) 41/78/107
  • Grey search bar (10%) 229/229/229
  • Grey text entry field in search bar (18%) 209/209/209
  • Blue background lightbox 30/69/99 with opacity 0.9
  • Grey lines in tables (15%) 217/217/217

Parties:
  • ÖVP 65/60/77
  • Grüne = green 63/146/91
  • SPÖ = red 236/55/51
  • FPÖ 94/148/186
  • Neos = pink 215/76/107
  • Stronach = yellow 243/188/64
  • BZÖ = orange 243/120/62

Other:
  • Green box element (Top10, etc.): fill color (10%) 235/244/238, border color (25%) 207/228/214
  • Light red box element (subscription form): fill color (5%) 254/247/246, border color (20%) 253/223/221
  • Blue box element (laws): border color and fill color lower part (15%) 221/227/232, fill color upper part (5%) 244/246/247
  • Buttons: border color light red 245/95/87 and fill color 255/255/255 (text and icon in light red); hover: border and fill color 245/95/87 (text and icon change to white)

Grid

Text

We have a number of basic text formats covered, such as headlines, lists, etc.

Tables

Tables always start with a white header row and bold text. All the other rows alternate between a white and a light grey background (5% of blue 30/69/99). The rows are separated by light grey horizontal lines. There are no vertical lines or other elements to reinforce the columns. This way, the tables look simple, yet organised.

Text in tables is smaller than normal paragraph text. Icons can be used as needed. There is a generous padding in the table fields.

Icons

offenesparlament.at is a very text and table-heavy site. While more visualisations are part of our wishlist and might be realized in a next step, we needed smaller visual elements to help our readers and make the content easier to understand. That is where colors and, just as important, icons come in.

We used symbols that are based on the Streamline Icon Set (Line Version). To fit our needs, some icons were edited or created from scratch and added, such as the parliament icon.


Community

If you’d like to stay in touch, sign up to our mailing list:

https://lists.metalab.at/mailman/listinfo/offenesparlament_at