In a previous post I said that the study of Shakespeare was well outside the bounds of philosophy as it is practiced, though it easily could have been inside. This was a mistake. Klaas Kraay pointed out to me that there is even an upcoming conference on Shakespeare: The Philosopher.
I’m very happy to have been proven wrong about this. Shakespeare’s connection to philosophy seems like a rich and interesting field of study, and I’m thrilled to see people working on it.
It is a little interesting that the conference doesn’t look like it is growing out of work in history of modern philosophy, or even history of Renaissance philosophy, but out of aesthetics. That wasn’t what I expected either, though perhaps I should have. I suspect in general there are interesting connections to be drawn between philosophy and the work of the leading poets, playwrights and, eventually, novelists. I wonder whether we’ll think of work looking at those connections as being part of aesthetics, or part of history of philosophy. Either way, it’s wonderful to see this kind of work being done.
Posted by Brian Weatherson at 11:05 pm
No Comments »
I’ve been thinking a bit about the ways in which Higher-Order Evidence cases might be like Pascal’s Wager. In each case, an agent is presented with a reason for changing their doxastic state that isn’t in the form of evidence for or against the propositions in question.
Since most philosophers don’t think that highly of Pascal’s Wager, this isn’t the most flattering comparison. Indeed, some will think that if the cases are analogous, then the discussion of higher-order evidence isn’t really part of epistemology at all. Even if Pascal had given us a prudential reason to believe in God, he wouldn’t have given us an epistemic reason. I suspect, though, that this is a touch too quick. There are a variety of Pascal-like cases where it isn’t so clear we have left epistemology behind.
Melati and Cinta are offered epistemic deals by demons. Here is the deal that Melati is offered.
There is this proposition p that you know to be true. I have a method M1 that will yield great knowledge about subjects of great interest. It is perfectly reliable. The only catch is that to use the method, you first have to firmly believe that p is false. If you do, you’ll get lots of knowledge about other things, indeed you’ll learn over 100 things that are of similar interest and importance to p.
And here is the deal that Cinta is offered.
Here are 100 propositions that you believe to be true. As you know, most people are not that reliable about the subject matters of those propositions. I can’t say whether you’re better or worse than average, though your accuracy rate is comfortably above 50%. Here’s what I can say. I have a method M2 that will yield very reliable beliefs about these subjects. People who have used it are 99% reliable when they use it. And given the subject matter, that’s a very high success rate. The only catch is that to use M2, you have to start by doubting every one of those propositions, and then only believe them if M2 says to do so.
There are two big parallels between Melati’s and Cinta’s deals. Both of them are asked to change their attitudes because doing so is necessary to begin using a method. At some level, they are asked to change their beliefs on prudential grounds. But note the payoff is not Pascalian salvation; it is knowledge. And the payoff is pretty similar in the two cases: probably around 100 pieces of new knowledge, and 1 false belief.
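The rough bookkeeping behind that payoff comparison can be sketched as follows. (A minimal sketch; the counts are the ones stipulated in the two deals, and the 99% figure is the reliability the demon quotes for M2.)

```python
# Melati's deal: adopt 1 false belief (that p is false), and M1 then
# yields 100 new pieces of knowledge.
melati_new_knowledge = 100
melati_false_beliefs = 1

# Cinta's deal: doubt 100 propositions, then M2 (99% reliable) rules on
# each of them, so she ends with roughly 99 true beliefs and 1 false one.
cinta_props = 100
m2_reliability = 0.99
cinta_new_knowledge = round(cinta_props * m2_reliability)
cinta_false_beliefs = cinta_props - cinta_new_knowledge

print(melati_new_knowledge, melati_false_beliefs)  # 100 1
print(cinta_new_knowledge, cinta_false_beliefs)    # 99 1
```

So on the stipulated numbers the two deals come out nearly identical in epistemic payoff, which is what makes the asymmetry in our intuitions about them puzzling.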
Yet despite those parallels, the cases feel very, very different. Melati has no epistemic reason to believe that p is false. Indeed, it isn’t clear that she has all-things-considered reason to believe that p is false. And if she’s anything like me, she wouldn’t be capable of accepting the deal. (Carrie Jenkins, Selim Berker, Hilary Greaves and several others have discussed versions of what I’m calling Melati’s case, and the intuition that Melati has no epistemic reason to accept the deal seems incredibly widespread.)
Cinta’s situation is quite different. After all, the deal that the demon offers Cinta is very similar to the deal that Descartes offered his readers. Doubt a lot of things, including some things that you surely know, apply my method, and you’ll end up in a better position than where you started. In Descartes’s case, it wasn’t clear he was able to keep up his end of the bargain. That is, it wasn’t clear that he really had the magic method he claimed to have. But if he did have such a method, it wouldn’t be clear he was offering a bad deal. Moreover, we teach Descartes inside epistemology. If Cinta is being offered a version of Descartes’s deal, then it is arguable that she really has an epistemic reason to accept the deal.
What interests me about the cases of Melati and Cinta is that they suggest a way to capture the asymmetry in intuitions about higher-order evidence. Many people think that higher-order evidence can be good grounds to lose a belief. But I’ve never seen a case where the natural intuition is that higher-order evidence gives the agent grounds to adopt a belief where the first-order evidence is insufficient. Here’s a hypothesis that explains that. Higher-order evidence should be grouped in with things like Descartes’s motivation for doubting all one’s prior beliefs, if not with Pascal’s motivation for belief in God. And it is plausible that these kinds of considerations in terms of epistemic consequences can provide reasons, perhaps even epistemic reasons, to lose a prior belief, without providing reasons to adopt a previously unheld belief.
Posted by Brian Weatherson at 10:58 pm
No Comments »
My colleague Maria Lasonen-Aarnio’s great paper Higher-Order Evidence and the Limits of Defeat is just out in PPR. I agree with her conclusions about higher-order evidence. Indeed, on a number of points I agree with her because I’ve been convinced by her arguments. But I did want to register one quibble, one that I don’t think undermines the position she ultimately adopts. In fact, it offers a way to respond to an objection.
Lasonen-Aarnio is interested in cases with the following structure:
1. S has evidence E, and let’s assume S knows she has E, and E is in fact excellent evidence for p.
2. S has strong but misleading evidence, call it H, that E is not any kind of evidence for p.
3. S has no other evidence that tells in favour of p.
There is a widespread intuition in these cases that S should not believe p, because H undermines the support that E provides for p. Lasonen-Aarnio wants to argue against this intuition. Or, at least, she wants to argue against this intuition given a ‘rule-based’ conception of epistemic rationality. The difference between the two possible conclusions will turn out to matter for what I say.
By a ‘rule-based’ conception of epistemic rationality, I mean (and I think Lasonen-Aarnio means too), a theory with the following two principles:
1. All epistemic norms are to be explained by the existence of epistemic rules.
2. For any one of these rules that explain epistemic norms, there is a distinction between following the rule, and merely complying with it, and full rationality requires following not merely complying.
Lasonen-Aarnio’s first step is to argue that even if you accept the intuition here, you should still think that rules like “Believe what the evidence supports” are good rules. It’s true that the intuition in question implies that the rule is somehow trumped. But that doesn’t mean we should have a more restrictive rule saying “Believe what the evidence supports, unless one of these conditions obtains”, where the conditions are the conditions where the rule is trumped. The reason for this is that the intuition that started us down this track is completely general. Any time an agent gets evidence that a rule is unreliable or untrustworthy, the intuition says that rule is trumped. So no finite rule can accommodate all the possible trumping conditions.
One possibility would be to have an infinite rule. It isn’t hard to describe what such a rule would look like. Consider the function from possible situations to belief states that are permissible in that situation. (Ignore the possibly serious problem that there are too many situations for this to even be a function.) Call this an über-rule of rationality. Such a rule could cover all cases, including the one S faces in our example at the start.
But there is a pressing problem that Lasonen-Aarnio raises here. The reasons for thinking that no finite rule will capture the intuition are reasons for thinking that the über-rule will be infinitely complex. And that in turn means it will be hard for agents to genuinely follow the rule. Put another way, once we start considering über-rules, the distinction between following a rule and merely complying with it will be obliterated, since the most a non-Godlike agent could do is merely comply with it. But once this distinction is obliterated, we have abandoned the rule-based conception of epistemic rationality. After all, any theory whatsoever can be rephrased as a rule-based theory, provided we let rules be nothing more than functions from situations to evaluations of actions in that situation, and the most we expect of agents is compliance with that ‘rule’. The rule-based conception wasn’t meant to be this trivial!
One natural move here, one taken by David Christensen, is to say that agents like S face a dilemma. S should believe p, in virtue of 1, and should not believe p, in virtue of 2. But Christensen, like others moved by the intuition we started with, doesn’t think this is a ‘pure’ dilemma. Rather, he thinks that although S will do something irrational whatever she does, it is worse for her to hold on to her belief in p than to scrap it. Lasonen-Aarnio argues against this possibility, and it is here I want to quibble with her.
As Lasonen-Aarnio sets things up, S faces two rules.
- Rule 1 – With evidence E, believe p!
- Rule 2 – With evidence that E is not good evidence for p, don’t believe p!
But these rules don’t imply that it is better to comply with Rule 2 than with Rule 1. That needs further explanation.
Lasonen-Aarnio assumes that the further explanation will consist of a third rule, one that instructs agents to comply with Rule 2 rather than Rule 1 when given a choice. And on the strong conception of a rule-based conception of epistemology that we started with, she has to be right. Moreover, I think the intuition she’s contesting, the one that says S should give up the belief in p, really doesn’t look that plausible without this very strong conception of epistemology as rule-based.
If we drop this way of thinking about the rule-based conception of epistemology, there is a way out. Let’s say there is an extra normative fact that it is better to comply with Rule 2 rather than Rule 1. (Or, for that matter, that it is better to comply with Rule 1 than Rule 2. I’m really most interested here in the general issue of whether there can be impure dilemmas, not which way this particular one should be resolved.) That normative fact is not explained by, or grounded in, the existence of an epistemic rule. It’s true that the normative fact can be converted into a rule, in much the way that we generated the über-rule above. But that conversion would be misleading, for the distinction between following and complying is not intuitively relevant here.
This is a very general point about impure dilemmas. It is very plausible that the moral worth of an action depends on the action being done for the right reasons. If we’re sympathetic to moral rules, we’ll say that this means that the action is done by a person following, and not merely complying with, the moral rule. Following Nomy Arpaly and Julia Markovits, I don’t think it is important that the agent recognise the rule as a moral rule, or perhaps even as any kind of rule. But that doesn’t undermine the importance of the following/complying distinction, since an agent can genuinely follow a rule without thinking of it as a moral rule. An agent might refuse to lie out of respect for her interlocutors, even if she mistakenly believes (perhaps on general consequentialist grounds) that there is nothing particularly wrong with lying. In that case I think she is genuinely following, not merely complying with, a rule against lying.
In cases of moral dilemmas, there will be some undefeated moral rule that the agent breaks no matter what she does. In some of these, there will be a less bad choice to make. An agent faced with a choice between lying and breaking a promise might do wrong either way, but it may be worse to break the promise. (This isn’t because promise-breaking is always worse than lying; I’m just making an assumption about the particular case.) Now we could say that in such a situation there is a rule saying it is worse to break the rule against promise-breaking than the rule against lying. But this would be a misleading way of speaking, since intuitively there isn’t a difference between following and merely complying with this meta-rule. Or, if there is a difference, it is that ‘mere’ compliance is better. Someone who keeps the promise out of respect for the promisee is merely complying with the meta-rule. But that’s better than either of the obvious ways of following the meta-rule. Someone who lies rather than breaks a promise because it maximises their moral value is excessively self-concerned. And someone who thinks that they should minimise how disrespectful they are to others is treating the person they are lying to too much as a means. It’s better to simply follow the rule against promise-breaking, and comply with the meta-rule. And that’s to say the meta-rule isn’t really a rule in the relevant sense; it’s simply a normative fact that in this circumstance, one action is worse than another.
The same story holds, without the attendant moralising, in the epistemic case. It could be perfectly rational to merely comply with a meta-rule, while following the underlying rules. That position allows for impure epistemic dilemmas, at the cost of giving up a fully rule-based conception of epistemology. I don’t think that’s in conflict with what Lasonen-Aarnio says. Indeed, it helps her overall project I think.
On first reading her paper, one might worry that Lasonen-Aarnio’s arguments overgeneralised; that if they worked they would show that there was no such thing as an impure dilemma. And that seems like an implausible result, at least to me. But in fact she doesn’t ‘prove’ any such thing. Rather, her conclusion is that the thoroughly rule-based conception of epistemology is incompatible with impure dilemmas. Since the best versions of normative internalism in epistemology seem to end up committed to a thoroughly rule-based conception, and to impure dilemmas, her argument is (as she says) a strong argument for normative externalism in epistemology.
Posted by Brian Weatherson at 7:00 pm
No Comments »
I’ve been thinking a bit about the arbitrariness of the boundaries around philosophy. This is part of my general concern with trying to think historically (or sociologically) about contemporary philosophy.
I think it’s beyond dispute that there are, as a matter of fact, some boundaries. For instance, work in sports analytics isn’t part of philosophy. I wouldn’t publish a straightforward study in sports analytics in a philosophy journal, and I wouldn’t hire someone to an open philosophy position if all their work was in sports analytics. And I think just about everyone in the profession shares these dispositions.
In saying all this, there are a number of things I’m not saying.
1. I’m not saying that philosophy is irrelevant to sports analytics. Indeed, some of the biggest debates in sports analytics have been influenced by familiar epistemological arguments.
2. I’m not saying that sports analytics is irrelevant to philosophy. If someone wanted to use a case study from recent debates in sports analytics to make a point in social epistemology, that could be great philosophy. (I’m sort of tempted to write such a paper myself.) But something can be relevant to philosophy without being philosophy. (As a corollary to that, I’m not saying that there couldn’t be any point to a course on sports analytics in a philosophy department. Perhaps if it was a great case study, more philosophers would need to learn the background to the case.)
3. I’m not saying there could not be something like philosophy of sports analytics. I don’t know what such a thing would be – it feels like it reduces to familiar applied epistemology – but someone could try it.
4. I’m not saying work in sports analytics is no good. Indeed, I think some of it is great.
5. I’m not saying sports analytics doesn’t belong in the academy. As a matter of fact, there isn’t anywhere it happily lives. But if David Romer and others succeed in making it part of economics, or Brayden King makes it part of management studies, I’ll be really happy.
6. And I’m not saying there is some special thing that philosophy timelessly or essentially is that excludes sports analytics. Indeed, the rest of this post is going to be sort of an argument against this view.
But even with all those negative points made, I think it is still pretty clear that philosophy as it is currently constituted does actually exclude sports analytics.
That’s all background to a couple of questions I would be interested in hearing people’s thoughts about.
1. What are the most closely related pairs of fields you know about such that one of the pair is in philosophy (in the above sense), and the other is not?
2. What fields are most distant from philosophy as it is currently practiced, but you think could easily have been in philosophy in a different history?
My answers are below the fold.
Posted by Brian Weatherson at 7:00 pm
No Comments »
I’m currently reading about Higher Order Evidence, starting with David Christensen’s important paper on the subject. The literature includes a lot of cases, of which this one from David is fairly indicative.
I’m a medical resident who diagnoses patients and prescribes appropriate treatment. After diagnosing a particular patient’s condition and prescribing certain medications, I’m informed by a nurse that I’ve been awake for 36 hours. Knowing what I do about people’s propensities to make cognitive errors when sleep-deprived (or perhaps even knowing my own poor diagnostic track-record under such circumstances), I reduce my confidence in my diagnosis and prescription, pending a careful recheck of my thinking.
The higher-order evidence (HOE) here is that the narrator (let’s call him DC, to avoid confusion with the philosopher) knows he has been awake 36 hours, and people in that state tend to make mistakes. Here are three interesting features of this case.
1. The natural way to take the HOE into account is to lower one’s confidence in the target proposition.
2. The natural way to take the HOE into account is to take actions that are less decisive.
3. The HOE suggests that the agent is less good at reasoning about the target field than he thought he was.
If one includes the peer disagreement literature as giving us cases of HOE (as David does), then the literature includes a lot of case studies, thought experiments, intuition pumps and the like.
To the best of my knowledge, all the published cases have these three features. Does anyone know of any exceptions? If so, could you leave a comment, or email me about them? I’d be particularly interested in hearing from people who have presented cases that don’t have these features – I’d like to credit you!
To give you a sense of how we might have examples of HOE without these features, consider these three cases. In all cases, I want to stipulate that the agent initially makes the optimal judgment on her evidence, so the HOE is misleading.
A is a hospital resident, with a patient in extreme pain. She is fairly confident that the patient has disease X, but thinks an alternative diagnosis of Y is also plausible. The treatment for X would relieve the pain quickly, but would be disastrous if the patient actually has Y. Her judgment is that, although this will involve more suffering for the patient, they should run one more test to rule out Y before starting treatment. A is then told that she has been on duty for 14 hours, and a recent study showed that residents on duty for between 12 and 16 hours are quite systematically too cautious in their diagnoses. What should A believe/do?
B is a member of a group that has to make a decision. The correct decision turns on whether p is true. The other members of the group are sure it is true, B is sure it is not true. B believes, on the basis of a long history with the group, that they are just as good at getting to the truth as she is, and they have no salient evidence she lacks. The norms of the group are that if all but one person in the group is sure of something, and the other is uncertain, they will act as if it is true, but if the one remaining person is sure it is false, they will keep on discussing things. B is very committed to the norm that she should tell the group the truth about her beliefs, so if she reacts to the peer disagreement by becoming uncertain about p, she will say that, and the group will act as if p, while if she remains steadfast, the group will continue deliberating. What should B believe?
C has just read a book putting forward a surprising new theory about a much studied historical event. (This was inspired by a book suggesting JFK was killed by a shot fired by a Secret Service agent, though the rest of the example relies on stipulations that go beyond the case.) The author’s evidence is stronger than C suspected, and she finds it surprisingly compelling. But she also knows the author will have left out facts that undermine her case, and that it would be surprising if no one else had developed this theory earlier. So her overall credence in the author’s theory is about 0.1, though she acknowledges that the case feels more compelling than this. C then gets evidence that she may have been injected with a drug that makes people much more sensitive to the strengths and weaknesses of evidence than usual. (This isn’t true; C wasn’t injected, though she has good grounds to believe she was.) If that’s right, her initial positive reaction to the book, before she qualified it by thinking about all the experts who don’t hold this view, may have been more accurate. What should C believe?
For what it’s worth, I wouldn’t want to rest an argument for my preferred view on HOE on intuitions about these cases. But I would be interested in knowing any discussion of them, or anything like them, in the literature.
Posted by Brian Weatherson at 11:50 pm
2 Comments »
This blog has been going slowly, so I thought the natural thing to do was to start a new one on Tumblr:
The idea is that links, small points, and (more importantly) ideas about the profession will go there, and TAR will be for longer posts, and for research work. But I’ll stop doing links posts here.
On the latter, I’m currently about one-third of the way through writing a book manuscript on normative uncertainty. I’m writing it in MultiMarkdown, so hopefully it will be easy to post things to the web as they are done. With any luck, I’ll have some long posts coming soon from draft chapters.
The short version of the book is that it will take the ideas from Running Risks Morally, and Disagreements, Philosophical and Otherwise, and blend them into a single work on why uncertainty and ignorance about what to do, or what to believe, is much less philosophically significant than many people think.
Posted by Brian Weatherson at 12:43 am
No Comments »
This blog has been getting very quiet, hasn’t it?!
I’m currently writing a book on normative externalism, trying to build up a general theory out of the things I said in Running Risks Morally, and Disagreements, Philosophical and Otherwise. Hopefully I’ll have some draft chapters to post soon. Until then…
- Congratulations to Rohan Sud for winning an Outstanding GSI Award from the University of Michigan’s Rackham Graduate School. Coincidentally, Rohan has also just had a paper published in Philosophical Studies, a paper on decision rules that he presented at last year’s BSPC.
- I suspect many of you will know about Samir Chopra’s excellent philosophy blog. He also has an excellent cricket blog, The Cordon, hosted at Cricinfo, and a fascinating-looking (I haven’t read it yet) book on cricket, Brave New Pitch. It’s great to see philosophy-cricket overlaps; there should be more of them!
- And there is a new philosophy news site, Daily Nous. I hope it is a great success.
Posted by Brian Weatherson at 8:22 am
No Comments »
Metaphysical Mayhem continues!
Rutgers University will be hosting a five day metaphysics summer school for graduate students, running May 19th-23rd, 2014, and featuring Karen Bennett, Shamik Dasgupta, Laurie Paul, Jonathan Schaffer, and Ted Sider.
All local (NY/NJ area) graduate students are invited to attend.
Non-local graduate students must apply to attend, by sending the following to firstname.lastname@example.org by January 10, 2014:
• A single page cover letter
• A curriculum vitae
• A writing sample on any topic in metaphysics
• A brief letter of recommendation (which need be no more than one paragraph), sent from a professor familiar with your work
Applicants will be notified by February 1, 2014. Housing and possibly some limited financial support will be available for non-local graduate students.
Posted by Brian Weatherson at 3:34 pm
No Comments »
I haven’t updated this for a while, have I? So it’s time for some updates.
Social Epistemology Workshop
Last weekend I was at a workshop on social epistemology at Arche. Miriam Schoenfield presented this great paper. I did a paper that was somewhat derivative of Jennifer Lackey’s work on generative testimony. (Well, perhaps more than somewhat – I’ll post it if I decide I really had anything interestingly original to say.) I had to miss some papers so I could come back to America to work. But I did hear two interesting papers by Alvin Goldman and Jennifer Lackey on group belief. And I was wondering if anyone had defended the following idea for how to define the beliefs of a group in terms of the beliefs of the group members.
First, use some kind of credal aggregation function to get a group credence function out of the individual group member credences. This could be arithmetic averaging, or (better) it could be one of the more complicated functions that Ben Levinstein discusses in his thesis. Second, draw on one’s favourite theory of credal reductionism to define group beliefs in terms of group credences. My favourite such theory is interest-relative, and it’s possible that some propositions could be interesting to the group without being interesting to any member of the group, so this view wouldn’t be totally reductive.
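A minimal sketch of that two-step recipe. (The plain averaging rule and the flat 0.9 threshold are placeholder assumptions of mine, not anything Levinstein or the interest-relative view is committed to.)

```python
def group_credence(credences):
    # Step 1: aggregate individual credences into a group credence.
    # Plain arithmetic averaging here; any more sophisticated
    # aggregation function could slot in instead.
    return sum(credences) / len(credences)

def group_believes(credences, threshold=0.9):
    # Step 2: reduce group belief to group credence. A flat threshold
    # stands in for a credal reductionist theory; on an interest-relative
    # view the threshold would vary with what is at stake for the group.
    return group_credence(credences) >= threshold

# Hypothetical credences of three group members in some proposition p.
members = [0.95, 0.90, 0.99]
print(group_credence(members))  # roughly 0.947
print(group_believes(members))  # True
```

The interest-relativity would enter at step 2, by letting the threshold depend on what the proposition’s truth matters for at the group level rather than for any individual member.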
This approach seems fairly simple-minded, but it does seem to avoid some of the problems that arise for other views in the literature. Hopefully I’ll get some time to read Christian List and Philip Pettit’s book on Group Agency, and see how the credence-first approach compares to theirs.
Rutgers Young Epistemologist Prize – 2015
This will be the ninth biennial Young Epistemologist Prize (YEP) to be awarded. To be eligible, a person must have a Ph.D. obtained by the time of the submission of the paper but not earlier than ten (10) years prior to the date of the conference. Thus, for the Rutgers Epistemology Conference 8-10 May, 2015, the Ph.D. must have been awarded between May 8, 2005, and November 10, 2014.
The author of the prize winning essay will present it at the Rutgers Epistemology Conference and it will be published in Philosophy and Phenomenological Research. The winner of the prize will receive an award of $1,000 plus all travel and lodging expenses connected with attending the conference.
The essay may be in any area of epistemology. It must be limited to 6,000 words (including the footnotes but not the bibliography). Please send two copies of the paper as email attachments in a .pdf format to:
One copy must mask the author’s identity so that it can be evaluated blindly. The second copy must be in a form suitable for publication. The email should have the subject: “YEP Submission.” The email must be sent by 8 pm (EST) on November 10, 2014. The winner of the prize will be announced by February 16, 2015.
By submitting the essay, the author agrees not to submit it to another publication venue prior to February 16, 2015, and agrees i) to present the paper at the Rutgers Epistemology Conference, ii) to have it posted on the conference webpage, and iii) to have it published in Philosophy and Phenomenological Research.
All questions about the Young Epistemologist Prize should be sent to YEP@philosophy.rutgers.edu.
UK Visa Rules
Sadly, it seems relevant to post another reminder about recent changes to UK Visa rules. Since 2012, it is impossible to (successfully) apply for a UK work visa if you have worked in the UK any time in the past 12 months. This will affect a lot of people who have rolling part-time positions in the UK. But what I hadn’t realised is that it is also hurting people moving between full-time jobs in the UK. And that’s a much more serious concern.
So in case you need (or will need) a UK work visa, and your would-be employer hasn’t kept up with all the visa changes that the Lib Dem/Tory government has brought in, it is very important to be aware of this rule.
I very much hope that the rule will be scrapped after the 2015 election; it seems to be causing harm without any obvious benefit. But I don’t think it would be a good idea to plan around that. For one thing, Labour might not win, or at least not win in their own right. (And I think we should act as if the Liberal Democrats are supporting policies that the government they partially constitute has introduced.) For another, new governments are often sadly tardy in fixing mistakes of the past governments, so even a Labour win doesn’t mean things start getting better that week. So even if your current UK visa expires after 2015, I’d start thinking about what you plan to do next, assuming that you can’t apply for another visa without a 12 month gap in employment.
Posted by Brian Weatherson at 10:43 am
No Comments »
Last week I was very lucky to be at the 14th (by my count) Bellingham Summer Philosophy Conference. The organisers, especially Ned Markosian, do such a fantastic job of running a conference. It really should be a role model for other conferences. (And in some places it is.)
There isn’t a Wikipedia page for the BSPC yet. I thought about setting one up, but I wasn’t quite sure what to say.
At the conference I presented the latest incarnation of Running Risks Morally. I got really valuable feedback, which will be incorporated into the paper. This incorporation will be made much easier by the fact that I have some lists of the questions that were asked. That was in part because I arranged for this, and in part due to unexpected acts of kindness.
And that got me thinking – it would be great if more conferences arranged for there to be someone at each session who took notes on what was being said. This could be useful to the person who is revising the paper, and useful to the participants who want to look back at what was being said. The job would be a little like the minute-taker that Arche used to have at project meetings. It’s not the most fun job ever, but it’s not impossibly hard. I did it at one session at BSPC, and some people naturally take detailed notes. Even if conference organisers don’t want to raise this into a formal position, I highly encourage anyone who is in a conference session where one of their colleagues or close friends is presenting to take as many notes as they can about what goes on in the session.
Posted by Brian Weatherson at 11:58 am
No Comments »