I would like to measure the effectiveness of ScalaBridge London, but this raises some tricky questions. For example, what does it even mean for ScalaBridge to be effective? I'm not sure. I'm writing this largely as a way to organise my thoughts, and to share them with others who may be able to help.
The first point to address might be why bother measuring effectiveness at all. There are three answers to this. The first is that I enjoy doing research, and this is an excuse to stay in touch with academia. The second is that I'm putting a lot of effort into ScalaBridge, and it would be nice to know if that effort is being used effectively. The third is that ScalaBridge is only one of many similar organisations (I believe Code First Girls is the largest in the UK) that, as a whole, are using significant resources. I believe these organisations are generally understudied, and it would be useful for the wider community to start studying and improving these kinds of efforts.
The first step to measuring effectiveness is defining the goal. This is where things become interesting. Our goal is to increase diversity within the Scala community. We believe we can achieve that by teaching Scala to people who are underrepresented within the current community, but there are at least a few points to note here:
Ultimately, I think this boils down to the following: ScalaBridge needs to meet the goals of its members and potential members, but I'm not certain what those goals are. Once we understand those goals we can start to see whether we're meeting them.
It seems the first step should be to find out our members' goals. We ask them this when they sign up, but we don't get a lot of information back. I'm also not convinced that people can accurately articulate their goals when asked directly. Regardless, I'm sure we can do better here. This is where I think a qualitative approach is most useful, and where we can draw on ideas from sociology.
Once we know what we should be doing, we are in a position to assess how well we are doing it. Assuming learning Scala is on the agenda, testing students might seem the obvious assessment. However, I'm not a fan of this approach. Given that our students have vastly different backgrounds, and that we run a number of different streams, we'd need several tests to accurately measure progress. Furthermore, I don't think our students would be keen on this kind of formal assessment, and I'm certainly not keen to do a pile of marking. Finally, this kind of assessment only captures part of what we should be doing: it tells us whether someone knows aspects of Scala, but not whether they feel part of the community.
What I think may work is asking students for a self-reported assessment of change along the dimensions we're interested in: for example, do they feel more confident using the language, or more comfortable asking for help in person or on our online forums?
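To make this a little more concrete, here is a minimal sketch of how such self-reported answers could be modelled and summarised in Scala. Everything here is an assumption for illustration: the names (`Dimension`, `Response`, `averageChange`) are hypothetical and not part of any existing ScalaBridge tooling, and it assumes a 1–5 Likert-style score collected over repeated survey rounds.

```scala
// A minimal, hypothetical sketch: none of these names come from existing
// ScalaBridge tooling, and the 1-5 Likert scale is an assumption.
object SurveySketch {
  // Dimensions we might ask members to rate themselves on
  sealed trait Dimension
  case object ConfidenceWithScala extends Dimension
  case object ComfortAskingInPerson extends Dimension
  case object ComfortAskingOnline extends Dimension

  // One self-reported answer (score 1 to 5) collected at a numbered survey round
  final case class Response(member: String, dimension: Dimension, round: Int, score: Int)

  // Average change per dimension, comparing each member's first and last answer.
  // A positive value suggests members report improvement on that dimension.
  def averageChange(responses: List[Response]): Map[Dimension, Double] =
    responses
      .groupBy(r => (r.member, r.dimension))
      .toList
      .collect { case ((_, dimension), answers) if answers.size >= 2 =>
        val sorted = answers.sortBy(_.round)
        dimension -> (sorted.last.score - sorted.head.score).toDouble
      }
      .groupBy { case (dimension, _) => dimension }
      .map { case (dimension, changes) =>
        dimension -> changes.map(_._2).sum / changes.size
      }
}
```

Comparing each member's first and last answer, rather than averaging raw scores across everyone, keeps members who have only answered once from skewing the change measure, though a real analysis would need to think harder about drop-outs and response bias.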
Working out the details will be quite an exercise. Luckily, I'll have some help: a number of ScalaBridge students have experience with this kind of work and have offered to help. (Thanks! You're all awesome!)