Most "screen scraping" these days is just extracting content from web pages.
1) Write a program that can load web pages as if it were a user of the site.
2) Have it save everything it loads.
3) Write a program that extracts the data you care about out of the HTML and puts it into a more useful format (or into a database or something). A rough sketch of all three steps follows.
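In Python, using the third-party requests and beautifulsoup4 packages, a minimal sketch of those three steps might look like this. The URL and the CSS selectors are made up for the example:

    import requests
    from bs4 import BeautifulSoup

    URL = "https://example.com/products?page=1"   # hypothetical page

    # 1) Load the page roughly the way a browser would (send a User-Agent).
    resp = requests.get(URL, headers={"User-Agent": "Mozilla/5.0"})
    resp.raise_for_status()

    # 2) Save the raw response so you can re-parse later without re-fetching.
    with open("page1.html", "w", encoding="utf-8") as f:
        f.write(resp.text)

    # 3) Extract the bits you care about into a more useful format.
    soup = BeautifulSoup(resp.text, "html.parser")
    rows = []
    for item in soup.select("div.product"):        # invented markup
        name = item.select_one("h2").get_text(strip=True)
        price = item.select_one("span.price").get_text(strip=True)
        rows.append((name, price))
    print(rows)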
Many languages have libraries for this, or you can use a tool like cURL or wget. I do this a lot with Perl and the LWP family of modules, but the sites I work on don't use JavaScript or DOM manipulation. There's so much JavaScript and AJAX out there now, though, that I'm not sure whether you can scrape those kinds of sites with Perl.
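One way people handle JavaScript/AJAX-heavy pages is to watch the browser's network traffic and call the JSON endpoint the page itself fetches, which works from any HTTP client (LWP included). A rough Python sketch, with a made-up endpoint and response shape:

    import requests

    # Hypothetical endpoint found by watching the browser's network tab.
    API = "https://example.com/api/listings?page=1"

    data = requests.get(API, headers={"Accept": "application/json"}).json()
    for listing in data.get("results", []):        # assumed response shape
        print(listing.get("title"), listing.get("price"))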
Screen scraping means: you write a web crawler which loads up the web page (in this case, it takes your bank login username and password, puts them into the login form on the bank's website, pretends to be you, and loads the relevant pages). Then you write an HTML parser that grabs the relevant bits from the bank's page (account balance, number, name, etc.) and stores them somewhere useful in a local database.
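Purely as an illustration of that flow (all URLs, form fields, and selectors here are invented, and a real bank login adds CSRF tokens, MFA, and other hurdles), a Python sketch using requests, beautifulsoup4, and sqlite3 might look like:

    import sqlite3
    import requests
    from bs4 import BeautifulSoup

    session = requests.Session()

    # Log in by posting the same form a browser would submit.
    session.post("https://bank.example.com/login",          # hypothetical URL
                 data={"username": "alice", "password": "secret"})

    # Load the page that lists the accounts.
    html = session.get("https://bank.example.com/accounts").text

    # Parse out the relevant bits (selectors invented for the example).
    soup = BeautifulSoup(html, "html.parser")
    accounts = []
    for row in soup.select("tr.account"):
        name = row.select_one("td.name").get_text(strip=True)
        balance = row.select_one("td.balance").get_text(strip=True)
        accounts.append((name, balance))

    # Store them somewhere useful locally.
    db = sqlite3.connect("accounts.db")
    db.execute("CREATE TABLE IF NOT EXISTS balances (name TEXT, balance TEXT)")
    db.executemany("INSERT INTO balances VALUES (?, ?)", accounts)
    db.commit()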
"Screen Scraping" seems to be a sort of misnomer here. Essentially they are just loading a URL and extracting the information from whatever is returned. Whether that happens intelligently, or if they are just making specific scrapers for each bank, I have no idea.
But basically it's "look for the number in this div region, that's the account balance", etc.
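That "find the number in this div" step might look something like this in Python with beautifulsoup4; the markup here is invented for the example:

    import re
    from bs4 import BeautifulSoup

    html = '<div id="balance">Current balance: $1,234.56</div>'  # invented markup
    soup = BeautifulSoup(html, "html.parser")

    # Grab the text of the known div, then pull the number out of it.
    text = soup.find("div", id="balance").get_text()
    match = re.search(r"[\d,]+\.\d{2}", text)
    balance = float(match.group().replace(",", ""))
    print(balance)   # 1234.56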