Zev Averbach
2015-05-26 12:59:12 -04:00
commit 6aaac41f08
9 changed files with 524 additions and 0 deletions

h.md Normal file

@@ -0,0 +1,77 @@
# Talk: systems programming as a swiss army knife
# When I go to google.com, kernel code runs for:
* typing in the address
* handling every network packet
* writing history files to disk
* allocating memory
* communicating with the graphics card
You don't have to worry about these things because your operating system deals with them.
# How to call operating system code
## System calls
* interface to your operating system
* write files
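A minimal Python sketch of the "write files" case: each call below corresponds to a system call, which is exactly what tools like `strace` (next section) show.

```python
import os

# os.open -> open(2), os.write -> write(2), os.close -> close(2):
# the kernel does the actual file I/O on the program's behalf.
fd = os.open("/tmp/example.txt", os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
os.write(fd, b"hello from user space\n")
os.close(fd)
```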
# Using systems knowledge to debug your programs
## strace = tracing system calls
* it tells you every single system call a given program makes
* presenter made a 'zine about it! must get.
### e.g. `strace -e open bash`
* that'll tell you whether your bash is reading .bashrc or .bash_profile
## others
* `write` for log files
* `execve` for starting programs
# The case of the French website
* when going to a website, it displays "Welcome to PyCon!"
* when `curl`-ing that same website, it displays "Bienvenue à PyCon!"
* solved with `ngrep`: the captured traffic showed an 'accept only English' signal of some kind (presumably the `Accept-Language` request header differing between the browser and `curl`)
## network spying tools
* `ngrep`
* `tcpdump`
* `wireshark`
* `mitmproxy`
## `time` (mystery #1)
* `time python py_file.py` --> tells you how long it took to run
### "What is the program waiting for?"
* `pgrep -f py_file` to find the process ID, then attach to it (e.g. `strace -p <pid>`)
* network-related system calls appear: "It's waiting for the network!"
# Mystery program #2
* use a Python profiler if the process is pegged at ~99% CPU (`pgrep` only finds the PID; `top` shows the CPU usage)
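If the process really is CPU-bound, the standard-library profiler narrows it down; a minimal sketch (the `slow_function` here is just a stand-in for the mystery program's work):

```python
import cProfile
import pstats

def slow_function():
    # Stand-in for whatever is burning CPU in the mystery program.
    return sum(i * i for i in range(10 ** 6))

cProfile.run("slow_function()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)
```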
# Mystery program #3
* `time python mystery_3.py` -- the first run was quick and used 62% CPU; the second run only used 10%
* tracing the process (e.g. `strace -p <pid>`) shows a lot of `write`-related system calls
* but why does it take a different amount of time on each run?
* `dstat` -- tells you how much your OS is reading from and writing to disk right now
* the program kept writing even after it said it was done: the `write` calls return once the data is in the OS's cache, and the OS flushes it to disk in the background afterwards
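A minimal Python sketch of that caching behaviour (the file path and size are arbitrary): the `write` returns as soon as the data is in the page cache, and only `fsync` forces it onto disk -- which is the background activity `dstat` makes visible.

```python
import os

fd = os.open("/tmp/cache_demo.bin", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
os.write(fd, b"x" * (50 * 1024 * 1024))  # returns once the data is in the OS's page cache
os.fsync(fd)                             # blocks until the kernel has written the data to disk
os.close(fd)
```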
# Awesome tools
* /proc
* all the stuff from above
* dtrace (OS X)
* if you learn about your operating system, you can do a lot more -- across all programming languages
* Recurse Center -- where she learned everything she knows
* @bork = Julia Evans


@@ -0,0 +1,48 @@
# Making Web Development Awesome with Visual Diffing Tools
* http://bit.ly/pycon2015-visual-diffs
* unit tests check for things that might break
* screenshot tests check for things you can't imagine happening
## Screenshot tests
* bring up a test server
* take screenshots in set configurations -- PhantomJS or Selenium (see the sketch after this list)
* track pixel changes over time
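A minimal Selenium sketch of the screenshot step above (the local URL, window size, and output path are placeholders, not the talk's actual setup):

```python
from selenium import webdriver

# PhantomJS was the usual headless driver at the time; any WebDriver works.
driver = webdriver.PhantomJS()
driver.set_window_size(1280, 800)        # one fixed "configuration" to track over time
driver.get("http://localhost:5000/")     # the test server brought up for the run
driver.save_screenshot("screenshots/home-1280x800.png")
driver.quit()
```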
## Tools for generating screenshots
* CasperJS
* Huxley
* Wraith
* PhantomCSS
* dpxdt
* Hardy aka GhostStory
* Needle
* Cactus
* CSSCritic
* seltest
## Commit your screenshots to a git repo!
* creates visual history of the website
* GitHub has a nice image viewer and also supports showing image diffs
## import webbrowser
* dreamy way of building websites
* uses React
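For reference, the standard-library module the heading refers to; a tiny sketch of the obvious use, popping the page you're working on open automatically (the URL is a placeholder):

```python
import webbrowser

# Open the local dev server in the default browser once it's running.
webbrowser.open("http://localhost:5000/")
```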
## Issues
* either make sure everyone's using the same OS and PhantomJS versions, or use SauceLabs
## uses
* QA on your releases
...
## When to write a pdiff test?
* if you find yourself reloading your browser over and over while making changes, or repeatedly checking responsiveness, write a test for it instead (with pdiff)

pip.md Normal file

@@ -0,0 +1,72 @@
# Central Tenets
* deploying should be as easy as pushing a button
* reverting should also be as easy as pushing a button
* bad deployment processes impede development speed and innovation
* they create a negative feedback loop of anxiety and fewer pushes
* good deployment processes foster development and innovation
* code isn't useful until it's being used
## Hypothetical
* you have a new feature or a bug fix that needs to go out in a rush
* deploy: what's the worst that could happen? Disaster.
## Do you have more than one way to
* deploy code across teams?
* deploy code amongst a team?
* deploy a single app?
* having a single process is best, especially in the latter two bullets
## Our old deployment setup
* Debian packaging -- process wasn't maintained, people started wanting a new way to deploy code
* SCP un-versioned tarballs -- miserable failure
* Chef installing packages -- worked for a little while, but Chef deployed every 30 mins whether you liked it or not
* pip installations -- today's talk
## Choose one deployment method and stick with it
### Python-based packaging
* Python & Setuptools to build packages (see the `setup.py` sketch after this list)
* PEP 440 versioned packages
* pip to install Python packages
* Jenkins for CI and package building
* Chef for initial deploy and service discovery
* Fabric for gluing the pieces together
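A minimal `setup.py` sketch for the "Python & Setuptools to build packages" item (the project name, version, and dependency are placeholders; the version string follows PEP 440):

```python
from setuptools import setup, find_packages

setup(
    name="example-service",              # placeholder project name
    version="1.2.3",                     # PEP 440 compliant version
    packages=find_packages(),
    install_requires=["requests>=2.0"],  # placeholder dependency
)
```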
### How does it work?
* GitHub is used internally
* Jenkins job runs tests, builds a package, uploads it to an internal PyPI server
* Project-specific Jenkins job triggers generic deployment job
* Generic deployment job queries Chef server via Fabric
* SSH into each node and upgrade the Python package (`pip install --upgrade`) -- see the Fabric sketch after this list
* the job restarts the service so it loads the new code (`kill -HUP <pid>`), then tests the new code
* control flow returns to the project-specific job
* notify monitoring that a deploy happened, including version and timestamp (HipChat, New Relic)
* rinse and repeat for all nodes
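A hedged Fabric 1.x sketch of the per-node upgrade-and-bounce step described above; the hosts, package name, index URL, and pidfile path are all placeholders, not the talk's actual code:

```python
from fabric.api import env, sudo, task

env.hosts = ["app1.internal", "app2.internal"]  # in the real setup these come from the Chef query

@task
def deploy(package="example-service", version="1.2.3"):
    # Upgrade the package from the internal PyPI server...
    sudo("pip install --upgrade --index-url https://pypi.internal/simple "
         "{0}=={1}".format(package, version))
    # ...then bounce the service so it loads the new code.
    sudo("kill -HUP $(cat /var/run/{0}.pid)".format(package))
```

Fabric 1.x runs the task host by host, which lines up with the "services should be bounced sequentially" point below.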
## Key takeaways
* choose one deployment strategy and iterate on it -- team-wide or org-wide
* code can be deployed in parallel
* services should be bounced sequentially
* centralized deployment job makes for easy rollbacks
* other codebases can use similar deployment setup
## Shortcomings
* fixing old code deployment is very painful
* strongly coupled with Chef for service discovery -- not great with edge cases
* Jenkins not the greatest continuous deployment server
* initial hacky usage of Fabric in early stages
# New vocabulary for me
* "internal PyPi repo"

requests.md Normal file

@@ -0,0 +1,3 @@
* you'll have multiple tests for each unit of code
* the object that isn't being tested is a "collaborator"
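A small sketch of that idea with the standard library's `unittest.mock`: the collaborator (here a hypothetical `mailer` object) is replaced with a mock so that only the unit under test is exercised.

```python
from unittest import mock

def notify(user, mailer):
    # Unit under test; `mailer` is its collaborator.
    mailer.send(user.email, "Welcome!")

def test_notify_sends_welcome_email():
    """notify() asks its collaborator to send a welcome email."""
    user = mock.Mock(email="alice@example.com")
    mailer = mock.Mock()
    notify(user, mailer)
    mailer.send.assert_called_once_with("alice@example.com", "Welcome!")
```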

screenshots/test.yaml Normal file

@@ -0,0 +1,8 @@
setup: ./run.py
waitFor:
  url: http://localhost:5000/
tests:
  - name: screenshot
    url: http://localhost:5000/

test_driven_development.md Normal file

@@ -0,0 +1,139 @@
# let's write an application
* an application with users
* users want certain features
## real-world testing
* users do the testing -- you only find out about problems via the live app
* unavoidable
* expensive
* worth minimizing
## application testing
* for application users' benefit
* realistic: end-to-end
* manual or automated
* automated will miss things
* might want both
* confirm that
* current features work
* new features don't break old ones
* can't handle transformational change
### transformational change
* redo user interface from scratch
* reuse code for new application
* current application tests can't help
#### from applications to libraries
* turn reusable application code into library code
* use library to do the above transformational changes
## Let's write libraries
* minimize pure app code, maximize library code
* libraries are the stable foundation that allows for application flexibility
* how do we ensure reliable, stable libraries?
### Library testing
* for application developers
* testing code should call the public API
* manual or automated -- mostly automated
## Developing a library the wrong way
* library that counts work frequency
* bug report: "All for one, one for all." returns the wrong count for "all"
* turns out it's because the inputs aren't being lower()'d before counting
* fix must be to add ".lower()" to the count() function
### be paranoid: don't trust your code or anyone else's
* test that the fix works, manually
* set up and run automated tests after every change
#### BUT how do you know your tests aren't buggy?
* run it against known broken code; it should fail
* if it doesn't fail, your test is useless
#### BUT how do you know whether it's the test or the code that's broken?
* write a docstring about what you were intending to test
#### BUT what if I broke the existing features with this bug fix?
* full test coverage -- tests for all public functions
* this isn't necessarily realistic
* add tests as you go
* book recommendation: `Working Effectively with Legacy Code` -- Michael Feathers
* legacy_code == "code that doesn't have tests"
## From paranoia to process
1. Test your code
1. automatically and repeatedly
1. ensure tests go from failing to passing
1. document your tests' goals
1. Test all your code.
## Test-driven development
1. software has requirements
1. encode requirements in automated tests that initially fail
1. write code to meet the specification
1.
1.
### Bug fixing, the TDD way
1. write test first
1. write docstring for test
1. run test, ensure it fails
1. fix code
1. run test, ensure it passes
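A minimal sketch of those five steps applied to the `.lower()` bug above, using a hypothetical `WordCount` class and a pytest-style test (illustrative, not the talk's actual code):

```python
class WordCount:
    def __init__(self):
        self._counts = {}

    def add(self, text):
        for word in text.split():
            word = word.strip('.,!?"').lower()  # the fix: normalize case before counting
            self._counts[word] = self._counts.get(word, 0) + 1

    def count(self, word):
        return self._counts.get(word.lower(), 0)


def test_count_is_case_insensitive():
    """count() treats 'All' and 'all' as the same word (written first, to fail before the fix)."""
    counter = WordCount()
    counter.add("All for one, one for all.")
    assert counter.count("all") == 2
```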
### Feature development, the TDD way
1. Exploratory design
1. write detailed requirements
1. for each requirement:
1. Write a test, make sure it fails
* if writing the test is awkward, maybe your new code is awkward
1. Write code, make sure test passes
* test the public API!
### E.g.: Let's implement a feature
* instead of getting word counts one word at a time, get them all at once
* first explore design options
* add a stub function of new feature (just a docstring and placeholder `def`)
#### Write detailed requirements
* functionality: "all_counts() returns dict mapping lower-cased counted words to their count."
* edge case: "If no words have been add()ed, all_counts() returns empty dict."
* ambiguity: "Modifying the returned dictionary doesn't modify WordCount's counts."
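A sketch of tests encoding those three requirements against the same hypothetical `WordCount` class sketched earlier, with `all_counts()` still a stub; per the process above, they should fail until the stub is implemented.

```python
def test_all_counts_returns_lowercased_counts():
    """all_counts() returns a dict mapping lower-cased counted words to their count."""
    counter = WordCount()
    counter.add("Spam spam eggs")
    assert counter.all_counts() == {"spam": 2, "eggs": 1}

def test_all_counts_is_empty_when_nothing_added():
    """If no words have been add()ed, all_counts() returns an empty dict."""
    assert WordCount().all_counts() == {}

def test_all_counts_returns_a_copy():
    """Modifying the returned dictionary doesn't modify WordCount's counts."""
    counter = WordCount()
    counter.add("spam")
    counter.all_counts()["spam"] = 99
    assert counter.count("spam") == 1
```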
## Recap: Why test?
* ensure correctness of current requirements
* to prevent breakage when requirements change
* for libraries: to allow transformational change in applications
## Recap: Why test first?
* to ensure all new code has tests
* validate test does what we think it does
* for libraries: to exercise the API to see if the design makes sense
* good tests allow for massive changes without fear
## Resources
* book in progress: http://itamarst.org/softwaretesting/
* slides: http://itamarst.org/softwaretesting/pycon
## Questions
* how to motivate devs to do TDD? don't incentivize # of lines of code
* what if you forget to use your tests after a while? Use TravisCI or something to test upon commit.


@@ -0,0 +1,113 @@
# usability testing on the cheap
* good usability can make small things feel big
* can also make big things feel manageable (e.g. Heroku)
* usability is expensive, whether outsourced or in-house -- $100/hr. for contractors
# caveat
* presenter isn't a UX person; she's a self-taught dev with a psych degree
* if you can afford a UX person, get one
## when?
* when you get the idea
* before you design
* before you code
* while you're coding
* after 1.0
## testing
* when testing code, test usability at the same time
### when you have an idea
* talk to your customers
* where are they?
* DON'T ASK "would you use this?" -- loaded question; people are polite
* "do you use something like this?"
* a way to find competitors
* what doesn't it do? (the competing app)
* face to face
* outcomes
* is it unique?
* is it desirable?
* who is your competition?
* where are your users and how do you interact with/communicate with them?
### where does everything go?
* cards
* about 30
* on each card goes an action
* ask them to sort cards into piles
* handwritten, so you can add more cards
* watch them as they're uncertain, changing their mind, etc.
* outcomes
* what is the structure of the site?
* what content is hard to categorize?
### make it pretty
* face to face
* have a neutral party present it
* questions
* initial thoughts?
* does it remind you of anything?
* how would you...? ( do this task, find this piece of information)
* see if they can find x in the IA
* screen AND paper
* watch faces!
* outcomes
* are you sending the right message?
* can people guess what your content is intelligently, without using a search box? ("i bet that would be there")
* is the design appealing?
### coding
* very small iterations, then go back and ask "what do you think now?"
* mock up new ideas first
* outcomes
* are you on the right track?
* did your users have a better idea?
### out of beta
* you're not done talking to your users
* interview them again
* are there any pain points to our system?
* what do you like?
* how do you...?
* don't fight your users, make it easy for them to use it in the weird way that they do
* outcomes
* do your users still want your product?
* what should you drop?
* what should you promote?
* is it time for 2.0?
* are they getting bored with the site?
### accessibility
* don't do it at the end
* speaker wrote a book about it
* Penn State accessibility site
* accessibility suites ---> NO
### questions
* look at competition before developing your product? really?
* yes. look deeply, and you'll see where the market opportunities (weaknesses) are.
* you need to know how to make a case for your product with users of competition.
* how do you know if the less-used features are critical for a few?
* analytics, talking to people
* if this is the case, promote the feature
* how do you find the testers before you build?
* join local user groups, bulletin boards, meetup groups
* start with Reddit
* how to get more feedback from the community once you've found them?
* SurveyMonkey
* look over those questions very carefully
* get a psych person to look over them, if possible
* email
* face to face == best feedback, even from a much smaller pool

virtualization.md Normal file

@@ -0,0 +1,42 @@
## The Landscape
* "You will need a few days to get a development environment set up and working."
* "We added a new service; we should really document that."
* "Are you sure you're running the same version of Python?"
* "It works on my machine!"
That's not how things should be; we should fix it.
## What Do We Want?
Dev environments that
* mirror production as much as possible
* are low-cost
* are disposable
* don't suck to develop on
### Why?
* portability -- for new hire: a setup that just works
* consistency
* reusability
* Vagrant == most popular tool
* wraps around other tools: VMware, VirtualBox, etc.
## Provisioning tools
* Ansible
* Chef
* Docker
* Puppet
* SaltStack
* tell Vagrant in provisioning that this VM is a development box
## Advantages of virtualization
* different amounts of RAM
* different OSs
* `vagrant share` allows you to share your box with others