| date (string, length 10) | nb_tokens (int64, 60–629k) | text_size (int64, 234–1.02M) | content (string, 234–1.02M chars) |
|---|---|---|---|
| 2017/03/21 | 1,143 | 4,739 |
<issue_start>username_0: We are two students working in the same field under the supervision of the same professor within his chair and all of our publications are done together.
Hence I am curious: Is it generally possible in academia to submit a PhD thesis with two authors?<issue_comment>username_1: No. The PhD thesis consists of *your* personal contribution to the field and is a demonstration of *your* expertise.
This presented a problem for a pair of fellow PhD students while I was working on my degree (in the sciences, in the United States). They published numerous impactful papers as co-first authors, but thesis committee members had many objections for the student who had the misfortune of presenting second. They had to clearly delineate their personal contributions in the thesis itself, where each had to have enough to individually merit the degree. This was a complete headache for both of them and ended up being a significant amount of extra work.
This merits a discussion between you, your advisor, and your colleague before you get any further in the proposal/thesis writing process.
**Edit:** As a few have mentioned, it seems there are some instances where such theses are accepted - but the overall trend is that they are very uncommon. Due to the many reasons listed in the multiple responses, they are not recommended in general even if allowed by your university. In any case, you both would still need to prove that each of your personal contributions is deserving of a PhD.
Upvotes: 4 <issue_comment>username_2: It is definitely possible in the sense of "It has been done." However, the only example I know of is this thesis from linguistics: <http://dare.uva.nl/search?arno.record.id=123669> (Groenendijk & Stokhof 1984)
Whether it is possible in the sense of "You could do it." will depend on your university's regulations concerning these matters, as well as your advisors and committee's preferences. They are the people you should talk to about this.
Upvotes: 4 <issue_comment>username_3: It happened to me to a lesser extent: a small part of my PhD research was in collaboration with another PhD student on another continent. When we were about to write our theses (more or less at the same time), our adviser had to broker a mutual agreement on which parts would go in my thesis and which in my colleague's. Even if it was not against the rules to include the same work, it definitely wasn't advisable to do so, to avoid any potentially awkward situation in the future.
Even if the adviser agrees, and even if it's technically not against university rules, it could raise many red flags in the future for anyone who notices the situation (and I think it would be very easy for someone to spot it). I would *strongly* advise against it.
Upvotes: 4 <issue_comment>username_4: Yes, it is technically possible if your advisor and university agree, which is a big if. In addition to the example from linguistics mentioned in the [previous answer](https://academia.stackexchange.com/a/86798), there is the Krohn-Rhodes theorem, which appeared in the identical dissertations of Krohn and Rhodes. See, for example, [page viii of "Applications of Automata Theory and Algebra" by <NAME>](https://books.google.com/books?id=0ukzw5VszNwC&pg=PR8).
However, you shouldn't even think about trying this yourself. It's vanishingly rare (I'd guess somewhere between one in ten thousand and one in a hundred thousand dissertations), and unless your thesis is absolutely amazing, this will immediately become by far its most attention-getting aspect. That's really not what you want people to focus on while you are trying to establish your reputation.
Upvotes: 4 <issue_comment>username_5: A few answers have noted that PhD theses with two authors do exist, and that is interesting trivia.
However, I think it is useful to address your underlying issue. I.e.,
**what is the best strategy to adopt when you are collaborating closely with another PhD student, you are working on a similar topic, and you have the same supervisor?**
This situation arises for many PhD students (I can think of a few), yet they still work out a way to write their own thesis.
If you are publishing joint papers, then you may want to think about ways that you and the other student can be the lead author on different papers. You probably also want to formalise the description of who made what contributions to each paper. I know at my institution, papers that form part of a PhD need to have a statement signed by all co-authors listing the contribution of each author.
You want to think about how your thesis can be distinct. You should work with your supervisor to carve out your unique focus.
Upvotes: 5 [selected_answer]
| 2017/03/21 | 1,399 | 6,176 |
<issue_start>username_0: I am currently an undergraduate medical student studying in South Africa. In future I would like to do academic research but I'm not sure in what field. But I would like to get a head start now as an undergraduate. My exposure to research is very limited in the current curriculum. The most we do are research reports but that just entails compiling information from existing literature, we do not come up with our own ideas. As an undergraduate, I don't have a lot of technical or in-depth clinical knowledge so that limits me at this stage in what I can pursue. My university does not have any active undergraduate research programs. How can I start to pursue research and what fields are within my scope? I have heard of research assistants but am unsure of who to approach and what skills they need from me.<issue_comment>username_1: I am assuming that by "proper" research you mean writing publications. I am not in the field of medicine, but to my knowledge, exposure to current research is generally low in Undergraduate degrees. They tend to teach the basics- which are written in introductory books, not state-of-the-art journal articles. Politics students starting out, wishing to learn about how Trump got elected, will have to start by learning about the US party system, the electoral system, the role of the presidency, election campaigns, and so on.
I think that the best way to start research is to be a good and active undergraduate student. You are learning the things that prepare you for your postgraduate studies and eventually beyond. You say you don't want to summarise literature, but come up with your own ideas. You also say that you don't know what interests you yet. You can't do one without the other. By reading the literature and the standard material, you gain an insight into what some of the ongoing issues and discussions are. By reading up on those issues further, you will eventually end up where current research is going. Literature reviews are necessary- I remember my well-established, 60-something professor and PhD supervisor groaning about the literature review he had to carry out for a paper he was working on. This does not stop.
That said, I think there are plenty of things you can do to get involved early and develop your interests. A few things come to my mind:
1. Read journal articles! Your university should be subscribed to many journals in your field- and if not, there are still plenty of open-access journals that you can read. Reading articles can give you an idea of the current research and might give you ideas about what you find interesting to pursue further.
2. Speak to members of staff and ask for potential research assistantships. They come at different levels- some require near-PhD experience, others involve boring data collection and lab work that requires little training. RA jobs are a good way of gaining exposure to ongoing research: while the work you do may be boring, you are still involved in an ongoing project, and depending on the field and the type of RA job, you may even be listed as an author.
3. Find student-led journals that publish well-written coursework or projects. Some universities have student journals that you can submit your coursework to- usually it has to be of a high standard. Submissions are reviewed and edits will be suggested- this is a good way of learning the publishing process in a friendly environment.
4. In the case that you have not begun your dissertation or final year project yet: Come up with some ideas and present them to a member of staff. Ask them what they think about the viability of pursuing the topic beyond your undergraduate. They may point you in the right direction.
Again, I am not working in medicine, so there may be opportunities specific to the field. You may find more relevant answers [here](https://academia.stackexchange.com/questions/81495/important-skills-for-medical-researcher?).
Upvotes: 1 <issue_comment>username_2: Although I am not versed in how undergraduate research is viewed in South Africa, I will answer from a U.S. perspective.
Your best bet as an undergraduate is to approach professors who are doing work that interests you. It is okay to not have specific research experience in a field: you have to start someplace, and professors will know this. There may be jobs posted for undergraduate students, or you could ask professors you have taken courses with if they have room in their lab or if they know another professor who is looking for undergraduate help. Tell them what you are able to offer: you want research experience, you are interested in topic X, you can commit Y hours per week, you took a lab course in Z. Be prepared for and not discouraged by rejection; not everyone will have time or space for a new undergraduate.
If you are most interested in *clinical* research, your opportunities may be more limited but the basic approach is the same: approach physician-scientists who do work that interests you.
Sometimes (again, this is in the U.S.) there are credit-based positions available for undergraduates where you will do some work and get credit rather than pay for the time you put in.
In general, as an undergraduate, my opinion is that your experience in a lab should be a hybrid between the work you do for the lab and the knowledge you gain. The work you do might be menial at first: washing dishes, preparing solutions, caring for research animals or cell lines - these tasks take minimal training, so you can be useful immediately. However, you should also be involved in technical conversations, you should be reading new literature in your field (and discussing papers with your colleagues), and over time you should be learning more advanced or specialized techniques.
If you are able to spend more than a year or so in a lab, you should expect to be involved in a real project that could potentially lead to a publication, though it will probably not be your own project - that's okay, you will still be learning skills that apply to your future research endeavors (or, alternatively, gain enough experience to learn that you don't enjoy a particular approach).
Upvotes: 2
| 2017/03/21 | 2,169 | 9,516 |
<issue_start>username_0: If a college professor has a clear rubric for their expectations for an essay, is it unethical -- or even illegal -- to grade certain students harder than others?
A few students I know are in a college English course. They are producing better essays than other students, but have recently been making C's instead of A's because, as the teacher says, they are "held to a higher standard" than other students.
Is there some action that can be taken to remedy this issue? They aren't out for blood, just the grade they have worked for.
Edit: Thank you all so much for the replies! For clarity: we are residing in the U.S. and the professor himself sent an email explaining that their writing is exceptional and as such he will hold them to a higher standard than their peers.<issue_comment>username_1: Questions about law are off topic here, so I will interpret this question in terms of the culture of academia. In academic culture, this is the sort of decision that is left to the individual professor's discretion. If the professor feels that some students have different needs from other students, the professor can adjust their grading to meet those needs. This would fall under the principle of academic freedom.
Is this professor's grading approach ethical? Is it effective? I don't think we can judge these questions without knowing what is happening in great detail. That's one of the reasons why professors have academic freedom.
To answer the later parts of your question, if students are unhappy about the professor's grading policies, they should politely and thoughtfully share their concerns with the professor. They should also listen carefully to the professor's reasons for his policy. Students should also read the syllabus and see what it says; the syllabus is often considered a binding contract.
Upvotes: 2 <issue_comment>username_2: Potential discrimination issues aside (that is, assuming the professor truly has everyone's best interests in mind and merely wishes to motivate particular students), have you considered that the professor may not rely strictly on individual assignment grades when determining a final grade?
That is, a professor could grade high-performing students strictly to keep them focused and engaged, but adjust their final grade to reflect their performance versus peers. One could argue whether that approach is good pedagogy, but I doubt it is unethical: different students have different needs.
I think there is a lot that is unknown about these circumstances from your question. I would advise students that are concerned about their grading relative to their peers to address the professor directly and politely: with curiosity rather than accusation, and assuming best intentions from the start. There should be no need to escalate the issue unless there are clear indications of bias based on protected categories or other impropriety.
Upvotes: 4 <issue_comment>username_3: If, in a single course, there are two *"classes"* of students taking the course for different forms of credit (such as a course with undergrads taking it for undergrad credit and grad students taking it for grad credit, or majors taking a course alongside students majoring in something else), it is legitimate for the prof to deliberately and transparently hold the different *"classes"* of students to different standards. This should be made clear to students registering for the course, on the first day, and on the course syllabus.
The **best** and most legitimate way to make this distinction, even if the course is taught together by the same prof, is for the different *"classes"* of students to register for what appear to be different courses, with different course numbers in the school catalog or schedule. So seniors would be registering for **ECON 458** and grad students would be registering for **ECON 558**. The *"two"* courses happen to be about the same topic, meet in the same room at the same times, and are taught by the same prof. But that prof can give the students expecting grad-school credit assignments that are not given to the undergrads. And that prof can apply a stricter measure of performance to the grad students.
Other than that, the same standards should be applied to every student throughout the course.
Upvotes: 4 <issue_comment>username_4: If the grades are readjusted for final course grade, it may make sense.
Suppose there is a subset of students who are turning in solid A essays by the overall class standard. One way of giving them feedback on which essays are better or worse would be to spread out their grades over a wider range.
If that is what is happening, it might be better to take the time to write a note saying what is good about a given essay and what could be improved.
Upvotes: 1 <issue_comment>username_5: Response, inspired by a comment of @Insulin69:
Indeed. As long as the only interface seen from outside is the grades, there is no way for the student to demonstrate their prowess through the quality of their work.
Such a grading system is highly counterproductive and pseudo-pedagogical. It is OK to make higher grades difficult to reach, but if so, the standard should be applied across the board, aiming for consistency across the class and - ideally - across cohorts of the same class in different years.
Upvotes: 2 <issue_comment>username_6: Does the professor have the right to require students to *improve* during the class? If so, yes, he has the right to grade the student more strictly. If, however, he is a hired educator and evaluator with a responsibility to *accurately report* the student's level of ability, then no, he does not have the right.
When deciding your answer to the above, it may help to consider the opposite scenario. One student is *not* skilled at writing, due to lack of ability or quality education. However, he works hard, is sincere about asking for feedback and responding to it. At the end of the term, his writing is *still* of below-average quality. Is it ethical for the professor to give, as the saying goes, an "A for effort"?
The answer is not simple, and it cuts to the core of "what is a grade?" I sympathize with the idea that a grade is the professor's way of giving feedback. However, grades are used as a determining factor in career prospects, and for this reason I believe a professor has a moral obligation to grade objectively.
Upvotes: 2 <issue_comment>username_7: You didn't ask this, but it's possible that such behavior could violate *your school's grading policy*, regardless of the ethical and legal concerns. The central problem is that grades absolutely do matter to students, and they have a reasonable expectation to understand how they will be graded.
For example, the first google result that came up for me states:
*Instructors are obligated to evaluate each student's work fairly and **without bias** and to assign grades based on valid academic criteria.* **(my emphasis)**
<https://clas.uiowa.edu/faculty/teaching-policies-resources-grading-system-and-distribution>
To be frank, there's a lot of fuzzy and subjective reasoning that goes into grading, and it sounds like this professor's subject might incorporate more of this than usual. Actually finding an administrator who was willing to do anything beyond having a talk with your professor (i.e. formal sanctions) is going to be nearly impossible. Typically the notion of "bias" in grading policies refers to grading some students easier or harder because of discriminatory characteristics like race, gender, or sexual orientation.
Also, your professor's grading might be based on perfectly objective and fair criteria that are not the overall quality of the student's work. For example, a workshop class whose purpose is to refine students' writing style or skill is going to focus heavily on the revision process. The point here might not be to produce masterful essays, but to critically examine and improve your own work. A great essay with minimal evidence of revision is not going to score well in that kind of situation.
Upvotes: 2 <issue_comment>username_8: Using a specific individual's performance as the reference standard for that individual is a good way to encourage improvement, but it is a really bad idea in this case because the grades matter later on. After all, there won't be an asterisk on the diploma explaining that the C is actually an A. So these students should be measured by the same criteria as the other students. Measuring students with different yardsticks is bad practice. It punishes good students in the job market and makes it appear as if the professor is afraid of a ceiling effect.
If the professor still wants to encourage improvement, there are a couple of better options. E.g.,
* Give the A and explain in writing below the grade that the students aced the course, but it's due to the level of the course and they are capable of much more. Then explain how they can improve.
* Offer a letter of recommendation if the students excel beyond what is usually needed for an A.
* Increase the distance between a C and a B, and even more between a B and an A, for all students. It will still allow mediocre students to get their C's and D's, but getting B's and A's will be much more difficult — for all students. This would have to be done for all future courses as well, otherwise grades are influenced by which kinds of students take the course.
* Offer an advanced course.
Upvotes: 2
| 2017/03/21 | 402 | 1,826 |
<issue_start>username_0: In some of the questions regarding author names, it is mentioned that people can use whatever name they like; however, they must be consistent across their publications.
When it comes to being hired by a university for a tenure-track position or by another principal investigator as a researcher, how can someone prove that they are the author of papers when their author name does not exactly match the one on their ID card? How can a principal investigator be sure that a publication list actually belongs to the person claiming it?<issue_comment>username_1: This isn't ordinarily a source of fraud, at least in Europe and the USA, though I have noticed some people not curating their Google Scholar pages very carefully, giving them fraudulently high h-indices. But the best way to prove authorship is to get an ORCID, especially if you have a common name. Many journals now require ORCID anyway. And do make and curate a Google Scholar page -- don't let it add papers without your approval!
I look at these sources when I'm hiring not so much to make sure that the papers exist as to check if the papers are any good and who cites them. But if the papers didn't exist at all I'd probably notice.
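For what it's worth, an ORCID iD is machine-checkable in two ways: the iD itself carries an ISO 7064 MOD 11-2 check digit, and ORCID's public API lets anyone pull the works attached to a record. Here is a minimal Python sketch, assuming the third-party `requests` library is available; the iD shown is ORCID's documented sample record, not a real candidate:

```python
import requests

def orcid_checksum_ok(orcid_id: str) -> bool:
    """Validate the ISO 7064 MOD 11-2 check digit of an ORCID iD."""
    digits = orcid_id.replace("-", "")
    total = 0
    for ch in digits[:-1]:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    expected = "X" if result == 10 else str(result)
    return digits[-1] == expected

orcid_id = "0000-0002-1825-0097"  # ORCID's documented sample record
assert orcid_checksum_ok(orcid_id)

# Pull the works attached to the record via the public ORCID API.
resp = requests.get(
    f"https://pub.orcid.org/v3.0/{orcid_id}/works",
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()
for group in resp.json().get("group", []):
    title = group["work-summary"][0]["title"]["title"]["value"]
    print("-", title)
```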
Upvotes: 4 <issue_comment>username_2: When it comes to hiring a faculty member, you can expect that members of the search committee will look at a candidate's papers. Since the published paper typically shows the affiliation of the author and the past affiliations of a candidate will appear on the CV, it would generally be obvious if a paper was authored by someone other than the candidate with the same or similar name.
Of course, a candidate could lie about his/her past affiliations, but we have letters of recommendation and reference phone calls to ensure that doesn't happen.
Upvotes: 5
| 2017/03/22 | 1,051 | 4,336 |
<issue_start>username_0: I recently created a new Google Sites-powered webpage for my academic page (after seeing discussions at [this question](https://academia.stackexchange.com/questions/26130) and others).
However, being a brand new page, it won't come up at all if you search for my name (even with qualifiers such as 'math' or my institution).
What steps should I take to ensure that others can find my website?
I am specifically looking for answers that apply to academic pages, as I've heard that they may be indexed in a different way from normal webpages.<issue_comment>username_1: The usual approach would be to create links to the website from other prominent indexed websites. So for example, you might want to create a link to the page from your university profile page. You could also add links from ResearchGate, LinkedIn and other social networks that allow you to share a link to a personal website. You could start creating content and sharing it on social networks.
Then, you just have to wait a bit for the web crawlers (particularly Google) to find and index you. This might take a month or more.
If you are specifically interested in being indexed by Google Scholar, check out [these guidelines](https://scholar.google.com/intl/en/scholar/inclusion.html):
>
> Individual Authors
>
>
> If you're an individual author, it works best to simply upload your
> paper to your website, e.g., www.example.edu/~professor/jpdr2009.pdf;
> and add a link to it on your publications page, such as
> www.example.edu/~professor/publications.html. Make sure that:
>
>
> the full text of your paper is in a PDF file that ends with ".pdf",
> the title of the paper appears in a large font on top of the first
> page, the authors of the paper are listed right below the title on a
> separate line, and there's a bibliography section titled, e.g.,
> "References" or "Bibliography" at the end. That's it! Our search
> robots should normally find your paper and include it in Google
> Scholar within several weeks.
>
>
> If it doesn't work, you could either (1) read more detailed technical
> guidelines in this documentation or (2) check if your local
> institutional repository is already configured for indexing in Google
> Scholar, and upload your papers there.
>
>
>
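Before waiting the several weeks mentioned above, you can at least verify that your publications page exposes links ending in ".pdf", as the guidelines require. A minimal sketch using only the Python standard library; the URL is the placeholder from the quoted example, not a real page:

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class PdfLinkCollector(HTMLParser):
    """Collect href values of links that point at .pdf files."""
    def __init__(self):
        super().__init__()
        self.pdf_links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href") or ""
            if href.lower().endswith(".pdf"):
                self.pdf_links.append(href)

page_url = "http://www.example.edu/~professor/publications.html"  # placeholder
html = urlopen(page_url).read().decode("utf-8", errors="replace")

collector = PdfLinkCollector()
collector.feed(html)
print(f"{len(collector.pdf_links)} PDF link(s) found on the page:")
for link in collector.pdf_links:
    print("-", link)
```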
Upvotes: 3 [selected_answer]<issue_comment>username_2: You don't have to do anything; it takes a couple of weeks for search engines to index your website. There is no difference between an academic website and a normal website in the way they are indexed.
To ensure that others can find your website, do awesome research and publish excellent papers. Otherwise, no matter how many SEO techniques you apply, nobody will care.
Upvotes: 1 <issue_comment>username_3: Oftentimes, new websites will find themselves in the "Google Sandbox."
This means that you won't see your website showing up in search results for upwards of six months (as some studies have suggested). However, do not be dismayed by this. Keep adding content, Google will still see it ... and rank your site accordingly when the time comes.
As it were, if you are interested in moving away from the Google-site, and to WordPress (which is MUCH better), there is a pretty good guide here: [Building Your Personal Academic Website](https://bradcongelio.com/your-personal-academic-website/)
Upvotes: 1 <issue_comment>username_4: Since this question by the OP has come back to current search results, I will say that the better approach is to use your own domain name to be found.
Part of SEO (Search Engine Optimization) practice is to put links from social media or other reputable websites to your website. Nevertheless, when someone searches directly for your name or your work, you control on your own website how you want to be found.
There are a number of ways to get search engines to rank your site above other websites, and fairly quickly. Even a link from an academic profile page to your own website will rank you higher than only being findable somewhere within the pages of another site. But again, there are rules to be followed, such as meta-tags, getting your website verified with Google, going to the Google Search Console and submitting a request to be indexed, and other SEO practices. See the sketch below for one concrete step.
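One concrete step in that direction is giving crawlers an explicit map of your site. Below is a minimal Python sketch that writes a sitemap.xml you could then submit in Google Search Console; the domain and page list are placeholders:

```python
from datetime import date
from xml.sax.saxutils import escape

# Placeholder URLs on your own domain.
pages = [
    "https://www.example.org/",
    "https://www.example.org/publications.html",
    "https://www.example.org/cv.html",
]

today = date.today().isoformat()
entries = "\n".join(
    f"  <url><loc>{escape(url)}</loc><lastmod>{today}</lastmod></url>"
    for url in pages
)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>\n"
)

# Write the file at the web root, then submit its URL in Search Console.
with open("sitemap.xml", "w", encoding="utf-8") as fh:
    fh.write(sitemap)
print(sitemap)
```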
Upvotes: 1
| 2017/03/22 | 449 | 1,990 |
<issue_start>username_0: I plan on starting my PhD in a few years. Is there any particular time that I should register for an ORCID identifier? Can I do it now (still an undergraduate student), or should I wait for acceptance to a PhD program? Or, am I overcomplicating this and the time of registration doesn't matter?
This may seem like a basic question, but I couldn't find any information on [the website](https://orcid.org).<issue_comment>username_1: A PhD is not a requirement for getting an ORCID identifier. You will need it mostly when you are ready to submit your research work to journals. In my area of research (computer engineering), most journals have made an ORCID identifier mandatory.
Upvotes: 4 [selected_answer]<issue_comment>username_2: It's largely up to you when you create an ORCID identifier, or even whether you do at all. Basically, it's a way to trace all of your future publications back to a unique identifier, which is arguably very useful considering the massive number of articles available online and the likelihood of authors with similar names.
You can even register an ORCID iD after publishing and retroactively add your existing publications. However, it will be easier to keep track of them as you publish. I created an ORCID iD after attending a library seminar recommending them at the beginning of my postgraduate studies; it didn't take very long to set up, and it wasn't an issue that it sat idle before I submitted publications.
Another consideration is that some journals (e.g., [BioMed Central](https://www.biomedcentral.com/) journals) integrate an ORCID iD into their submission portals. I'm not sure if they require them, but it will at least save you considerable time entering contact information with each submission. So it will save you time when it comes to submitting publications or tracking them later, and there's no harm in registering in advance or leaving it until you submit a manuscript (to a journal that recommends them).
Upvotes: 1
| 2017/03/22 | 289 | 1,132 |
<issue_start>username_0: I wonder if there are some professors focusing only on teaching or management and doing no research. I think this may be feasible.<issue_comment>username_1: In the Netherlands you can have a "buitengewoon hoogleraar" (extraordinary professor), who holds a professor position of typically one day a week and another position somewhere else. Often the other employer is an external entity like a company, but this construction has also been used to give the professor title (but not the pay) to someone who does the management of some institute.
Upvotes: 2 <issue_comment>username_2: At my UK university we have a few academic staff whose job title is "Teaching Fellow". These are postdoctoral positions. They spend all of term time teaching, with the chance to work on some of their own research over the summer.
Upvotes: 1 <issue_comment>username_3: I am an Associate Teaching Professor. My duties include no research. I teach twice as many classes as my colleagues.
A position as a teaching-track faculty member is not uncommon in US universities. But, it isn't common either. My department has about 4% of its faculty on the teaching-track.
Very feasible.
Upvotes: 2
| 2017/03/22 | 992 | 4,202 |
<issue_start>username_0: I have sent out emails enquiring about academic positions in departments active in my field. I did my best to write them properly, consulting websites and more experienced academics.
However, it has been a week since I sent them, and have not heard back.
Should I do something? Or wait?
Clarifications:
I am in Zoology, and have enquired about potential applications.<issue_comment>username_1: This differs hugely by field and country, but in mine you don't contact the professors directly. If positions are available, then they will be advertised. Now you need inside/national knowledge where these positions will be advertised, but typically they will also appear on the website besides the "normal" venue.
Imagine how many people are looking for a job. A prof would not be able to do her job if she had to answer all the job enquiries, so they are typically just ignored. This is not very nice, but it is an understandable strategy for dealing with too many e-mails.
Upvotes: 2 <issue_comment>username_2: Most good departments receive a lot of unsolicited mail with job inquiries. The harsh truth is that you are not very likely to get a positive reply by just cold-writing senior staff, even if your academic credentials are very good.
Find adverts and respond to those instead. When you do so, make sure to tailor your response to the department, rather than to send out a standard application.
Upvotes: 2 <issue_comment>username_3: I don't see anything wrong with writing to the head of a research group, you can always try that. I'm working in a biology-related field and sometimes a sudden demand for a person to analyze some samples or to go on a field trip comes up. Though this would likely be a short term position (and often is given to people known to the research group already) you might just be lucky enough to have offered your services at the right time.
This works of course best if you are looking for some "loose" contract (some part time job for a couple of months or even an internship) and if you are already around (meaning the employer does not have any responsibility to arrange for travel and accommodation). I've taken pretty much this same route, first doing an internship, afterwards a part time job and now I have a contract.
Regarding the issue that you haven't heard back yet: I would wait a little longer, at least a week, possibly a month (also because in the meantime some little job may have opened up). Afterwards write a short and friendly reminder. You might also want to encourage them to forward your email and CV to some interested colleague. If you don't hear back still, I'd forget about it.
Upvotes: 0 <issue_comment>username_4: First note that a week is not enough time to conclude that someone/some group is not going to answer you (unless there's some known deadline, closer than a month away, of which both you and your addressee are aware, or unless you asked for an immediate reply and gave a good reason). So you have to wait a while longer. Sorry, I know this is stressful :-(
Anyway, after another week or so:
1. Try phone calls instead of a second email. Either call the person you wrote to or a secretariat of some kind; if you do the latter, of course, you're not reminding them of anything or complaining, you're just making the same inquiry over the phone. If they tell you "You should email `<EMAIL>`", then you tell them "oh, but I did email him/her a week ago and didn't get an answer. Is he/she traveling?" etc.
These days you could use phone software which offers local-fare calls by taking the call over IP to the target country first. (Sorry to have to plug Microsoft, but Skype does this.) That makes it cheaper than a plain vanilla international phonecall.
2. If 1. doesn't work, try a reminder email (be very polite and don't intimate any dissatisfaction from not having been answered more quickly).
3. If 2. doesn't work, try writing/calling other people at the same research group who are not the people who should be answering you, tell them that "I've written Important Person about [...] but have not received a reply [...] so I was hoping you could [...]"
Upvotes: 0
| 2017/03/22 | 3,295 | 13,468 |
<issue_start>username_0: Sometimes an argument will arise between collaborators where two (or more) contrasted views or (mis)interpretations of a scientific issue exist. Sometimes both views are partially correct, sometimes they're both incorrect, and sometimes one is correct and the other is not.
Some people don't handle being wrong in a very "gracious" manner, and even after realizing they are wrong, they will not admit their mistake. I have previous experience with a more senior colleague who would not admit they were wrong even after being confronted with *a lot* of evidence. I should also add that I was intimately familiar with the research problem at hand and they had only very superficial knowledge of it, which probably led to their mistake. I suspect that at some point they realized they were wrong but were trying to hold the upper ground ("I'm right because I'm the more senior person") and "win" the argument. Also, their ego got in the way of reason (not the first time that happened). The situation was very frustrating for me and things went sour with this person, not only because of this incident but also because of previous history.
I am facing the same problem again (with a different colleague who is also my senior) and would like to handle the situation in a less destructive manner. However, I cannot write a statement on a paper that I know to be wrong just to avoid hurting somebody's ego.
What is a good way to resolve the issue with a colleague who you know to be wrong, anticipating they may have a hard time admitting it?
---
Just to clarify (based on what I can read in the comments): my question is not about situations such as pointing out a mistake by the speaker at the end of a presentation, sometimes even with the malicious intention to embarrass a "competitor" (things you witness at conferences!). In such situations influencing factors are, e.g., present audience and lack of time to think things through. It's about stubbornly persisting on one's mistake even when confronted with evidence and given the time to think about it.<issue_comment>username_1: Use impersonal language
=======================
Hopefully this is obvious, but make sure that your language focuses on the ideas rather than the people. Rather than
>
> I don't think you're grasping the subtleties of the situation
>
>
>
try
>
> I think that there are some extra complications that need consideration in this case
>
>
>
Ask questions
=============
Rather than taking a confrontational approach, try to adopt the role of the inquisitive student. Rather than
>
> There is evidence in publications X and Y that directly contradicts what you are saying
>
>
>
try
>
> What are your reasons for disagreeing with publications X and Y?
>
>
>
or
>
> I'm having trouble understanding why the conclusions from publications X and Y do not apply here. Can you elaborate?
>
>
>
This will hopefully give your colleague a graceful way out without having to turn it into a matter of "you're wrong and I'm right". As a bonus, it also gives *you* a way out if it turns out you were wrong after all (however unlikely that is) or that you misunderstood your colleague's point. But as @R.M. and @CaptainEmacs have pointed out in the comments, take care not to overdo it, or you could come across as either clueless, or patronising.
Make someone else the scapegoat
===============================
Rather than voicing your concerns directly, put them in the mouth of a hypothetical reviewer. Rather than
>
> I think there are these reasons for rejecting that argument
>
>
>
try
>
> I think that a reviewer could object to that argument for [reasons]. I think we need to anticipate this using [different argument].
>
>
>
Upvotes: 8 [selected_answer]<issue_comment>username_2: User2390246's answer offers great advice if you need the other person to confess that they are wrong. I just want to add that this is not always necessary. In that case, the best way to deal with your stubborn interlocutor is to **agree to disagree**.
I can think of three situations in which it is desirable to have the other person concede an error:
1. *Improving your interlocutor's knowledge:* You are essentially providing a free service to the other person by pointing out their mistakes. It is their decision whether to accept this service or not. If they don't, agree to disagree. You certainly can't be expected to make someone else happy against their wish.
2. *Improving collective knowledge:* If you are working together, there are certain issues on which you have to come to an agreement -- at the latest, when it comes to writing down research findings and their interpretation. Other issues may be tangential and can be allowed to rest, or they may allow different interpretations, which may lend themselves to be framed as a discussion of the results. However, if you are not collaborating on the same project, there is even less reason to agree.
3. *Status signaling:* Insisting on a wrong (or weak) point can be perceived as being necessary to protect social status. A senior person who feels their status is already precarious may feel "called out" by a junior person who points out their mistake. The junior person may want to consider if they want to go through the trouble of holding their ground, or if they wouldn't rather play along in the status game, while distancing themselves internally. ("I know I'm right, but if you need to save face, that's fine with me.") Of course this is not an either/or question but a matter of degree.
If you don't really need to protect your status, and if by insisting on the truth you neither realistically improve your interlocutor's nor your collective knowledge, it is best to agree to disagree.
Upvotes: 4 <issue_comment>username_3: Don't let your ego draw you into fighting a battle you can never win.
So they are wrong, you know they are wrong. However you can't stop them spreading the wrongness if they truly believe it.
Let them, others will notice they are wrong too, in time.
You say their ego won't let them admit that they are wrong about something, but your ego is what is making you follow it up all the time. If it's important that you work with this person, accept that they are wrong.
Upvotes: -1 <issue_comment>username_4: I am reminded of a quote from the Tao Te Ching: "He just does not contend, so no one can contend with him".
From my perspective, having had similar issues in the past, I came to realize it was me generating the issue. This is why it keeps recurring. It will keep recurring until you resolve whatever it is inside of you that is combative or argumentative or a part of you that threatens people or whatever it is.
Of course, I'm sure they are wrong, I'm not denying that, but that isn't really the issue. It isn't their wrongness that's generating this, again I say: it would not keep happening otherwise. Their behavior is in response to something you're doing. Even the details are the same, senior person, at work, ego, factual error etc. Best of luck.
Upvotes: 3 <issue_comment>username_5: The desire to be correct, and to be seen to be correct, can be overwhelming.
If you have evaluated the necessity of pressing your point until something gives - and you really must follow through with it - make sure your own ego and facts are in order.
His ego likely has a 'sunk cost' invested in his opinions, and *his ego* might appreciate a way out of them.
Is the same level of defiance present in front of others? Has there been an opportunity to have a franker discussion or to level with him?
A professional disagreement can be maintained wholeheartedly *if handled professionally.*
If you are certain about your own standing - and are yourself committed - then maintain your stance. Maintain! If circumstances compel you to push forward with your view instead then be sure to shape it accordingly. If his view threatens to drag you backwards then by all means be defensive.
I hope this would not constitute handling the problem in a destructive manner, in your opinion. What I mean is nevertheless be prepared to categorically destroy his viewpoint: This is only in a professional setting with time-critical consequences - in any other situation simply *agree to disagree*. Arm yourself fully with facts and have at him.
Basically, I am advocating 'having teeth'.
Henning's third point, *Status signalling*, is of particular note:
>
> Insisting on a wrong (or weak) point can be perceived as being necessary to protect social status. A senior person who feels their status is already precarious may feel "called out" by a junior person who points out their mistake. The junior person may want to consider if they want to go through the trouble of holding their ground, or if they wouldn't rather play along in the status game, while distancing themselves internally. ("I know I'm right, but if you need to save face, that's fine with me.") Of course this is not an either/or question but a matter of degree.
>
>
>
You definitely want to pick your battles carefully. Be prepared to back down... temporarily.
Upvotes: 2 <issue_comment>username_6: **tl;dr:** Ask another coworker to pose the same matter to your senior.
---
Note that sometimes the person who is telling them they are wrong matters.
Some people will happily accept being told they are wrong by someone they trust/respect, but if they are told the same thing by someone they don't trust/respect/like then they will argue until the cows come home even if they know full well that they are wrong.
So one solution might be to get someone else to tell them.
If you tell one of your coworkers about your situation (preferably someone who also knows about the subject area) and can prove to them that you are right, they can go and present the same argument separately.
Firstly it's harder to argue against two people in agreement, so having two separate cases of someone telling your senior that they're wrong might lead your senior to change their mind.
As my earlier comments may have suggested, if this works it's also possible that it's actually you or something about you that your senior colleague has an issue with. It might not be the case, but if this is a recurring thing then it might be worth looking in to why that might be the case. (I've had similar issues with students at school/college in the past and have found this to be the case. Getting people they did not have issues with to present the problem as if it was their idea worked nearly every time.)
And lastly if the senior still does not back down, that will show that it is indeed a problem on their part (refusal to accept being wrong) and there's little that can be done about that.
---
Post Script:
Originally this was a comment but I decided to try posing it as an answer.
Upvotes: 1 <issue_comment>username_7: Given that the top voted comment states that "People who put belief and/or pride over evidence has[sic] no place in STEM," and other answers have given diplomatic approaches for bridging the gap, I would like to address why such a situation might occur. In particular, this situation is reminiscent of Kuhn's
[Structure of Scientific Revolutions](https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions).
While that book did spawn generations of middle-management rambling about "paradigm shifts" and "thinking outside the box," it contains a very fundamental insight: as much as we would love to believe that we are purely logical researchers driven by objectivity, we are (un)fortunately loosely contained sacks of meat. We are still governed by heuristics and social dynamics, even if we try to place a higher emphasis on logical reasoning. The history of science does not read "Darwin published *On the Origin of Species* and nobody ever questioned natural selection ever again" or "Einstein wrote a couple papers in 1905 and every physicist immediately agreed with his views and threw away their old texts."
However, Kuhn does give you some solace: eventually the old guard will die out and the new theory will be regarded as the dominant paradigm. Unfortunately, you will also age until you in turn become the old guard, arguing about why the new upstart theory can't possibly be correct.
Upvotes: 3 <issue_comment>username_8: Let some time go by; strengthen your relationship in other ways; let humor diffuse the tension that's causing the logjam. Somehow this person needs to let go of some inner schematic view that's interfering with his perceptions and interpretations. Humor can work wonders. But I'm not sure you can plan it. When I've been able to use it, I had to just let it happen, rather than making it happen.
Upvotes: 2 <issue_comment>username_9: So,
>
> Some people... being wrong... will not admit their mistake... [when] confronted...
>
>
>
Wow, really? :-) Frankly, I think your question answers itself. Think about it.
>
> ... their ego got in the way of reason (not the first time that happened).
>
>
>
Without considering any specific case (including not the ones you mentioned) - are you sure your own ego is not partly in play here?
PS - That's not to say that you weren't perfectly in the right; I assume you were. The thing is, in interpersonal relations, that does not necessarily matter all that much. Even if we like to think of ourselves as scientists who are "above this" somehow.
Upvotes: 0
| 2017/03/22 | 1,057 | 4,652 |
<issue_start>username_0: I am applying to faculty positions and *two* of the people I asked to be my references requested me to provide a draft of the recommendation letter. I am in a STEM field. Is this common practice *even when applying for faculty positions*? Should I take it as a really bad sign and try to find other references for the next application I submit?
Writing one draft is daunting enough, but writing two seems even more difficult. I do not know how much the draft will get edited in the end, and I also worry that the two letters might end up looking too similar. At this point I do wish I had asked someone else, but I do not think I can retract my request.
How can I distinguish the two drafts I write? An obvious thing to take into account is how these two people know me (which will distinguish them). I can also try to introduce differences in tone and style, but doing so feels just wrong. Yet not doing this risks that it will show through that both were written by the same person. This is of course assuming that the referees will make only few edits. Unfortunately I do not know what they will do with the drafts. If they take the approach form @BrianGPeterson's answer, then there shouldn't be a problem. But I was not asked for a general description of myself or the position. I was asked for a draft.<issue_comment>username_1: I request that the students I am to write recommendations for, including recommendations to faculty positions, provide me with an executive summary of the position they are applying for, and a bullet list of the strengths which they would like me to highlight that they feel would be relevant for that position.
I do not ask them to write the letter for me, just to give me enough input to help write an effective letter.
Upvotes: 0 <issue_comment>username_2: Let me answer this from a somewhat different perspective. I'm an industrial researcher and have written a fair number of letters for former interns applying to faculty positions, or sometimes to other research labs. I always write them myself and focus on the experience I had with the student. And I definitely wouldn't want them putting words in my mouth.
But I've written a number of support letters for visa requests too. These are typically extremely flowery prose, talking about how absolutely wonderful the applicant is. And I **hate** to write such a letter because I know I won't do it in the same style that the immigration lawyers write it, and I worry my own letter won't be helpful to their cause. So in the rare case that they don't provide a template, I ask for one. Then I edit it to make sure I agree with what's said.
The closest example to the case at hand is when someone who has worked in my office asks both me and a colleague to provide a support letter for the same application. We're both asking for a draft, and the poor guy has to do exactly what OP here has to. To the extent that we describe our joint involvement, we're both going to describe the same project, and there is only so much they can be distinguished.
This case is a bit different, though, because such letters go back through the applicant. It means if we don't diverge enough after the recommenders edit the provided example, they (or their immigration attorney) can at least make a determination that more has to be done.
Getting back (at last) to the comparison with a faculty letter, it seems to me there are two questions:
* How do you make the "voice" of these letters seem different enough that they're not apparently written by the same person? This question applies to the immigration case too, and I think the same answer applies: if you can't figure out how to make it different enough on your own, find a trusted advisor who can take the framework (similar to the answer by @Brian-g-Peterson) and rework it on your behalf.
* How do you make sure that in the end they aren't too similar? One aspect is to make them as different as possible in the first place. Cover some points with one recommender and others with the other. Also, if there is any opportunity to review the letters, or have someone else get permission to review them on your behalf, they can be sanity checked.
Bottom line: I assume whoever takes the letter is likely to use it only as a starting point, but much of it (and possibly all of it) may survive. If they overlap, even with one (not very commonplace) sentence, it will be a huge red flag. Saying "I give XXX my strongest recommendation" is common. But not much else, if in both letters, can be explained away. Tread carefully. Or find other recommenders who'll do it themselves :)
Upvotes: 2
| 2017/03/22 | 1,593 | 6,479 |
<issue_start>username_0: At least two questions ([1](https://academia.stackexchange.com/questions/86856/asked-to-draft-own-recommendation-letter-for-faculty-application), [2](https://academia.stackexchange.com/questions/1452/points-to-remember-when-having-to-write-recommendation-letter-yourself)) on this site indicate that in more than a few isolated cases at least, letters of recommendation are in fact written by the applicant and only signed off (or perhaps rephrased somewhat) by the recommender, who would actually be responsible for writing the letter.
Is there any evidence about the prevalence of this practice that goes beyond anecdotes? I realize that cross-country (cultural) differences may play a role.<issue_comment>username_1: I think the best you'll get is anecdotes. However, my observation has been that folks typically write their own letters of recommendation and collaboration at the faculty/post-doc level. It is less common for students to have to write their own LoRs.
Upvotes: 0 <issue_comment>username_2: There's a spectrum of "writing your own LoR." As a faculty member, for example, I might correspond with the student:
1. Asking for bullet points that the letter should hit
2. Asking for a few sentences or paragraphs of the highlights that the letter should address
3. Asking for an entire first draft that is then revised
4. A complete letter that is copy/pasted in its entirety
In principle, I write all of my LoRs myself. However, with students in large lecture courses or that I don't know that well (for example, those who have left for a couple of years, and suddenly need an LoR out of the blue), I might ask for a variation of #1 or #2. I always rewrite whatever they give me so that it is in my own words.
I especially ask for highlights from students who are going into industry, since I've never been on the hiring side for industry (only academia) and don't know what should be accentuated.
From the student's perspective, though, when they get a #1 or #2 request - they might mistake it for a #3 request.
I've never asked for a #4-type letter, and although there are rumors of faculty who do, I don't see how such letters serve either faculty or student purposes. Faculty who are lazy tend to also be risk-averse, and signing your name to someone else's LoR is risky. It doesn't serve the student either, as students don't know the genre of letter writing and are unlikely to be able to write a strong letter for themselves. So while I won't presume that in the entire universe of universities such a case has never existed, I would think the actual prevalence is quite low, and the reality is #2- and #3-type requests that are misunderstood by the student to be #4s.
Upvotes: 3 <issue_comment>username_3: It's been 35 years now, but IIRC I had an undergrad professor who had me write my own recommendation. If my hazy memory is correct, I think the way he handled it was perfectly reasonable. I gave him the letter, and we may have gone through one or two drafts after that. Most likely it was much more time-consuming for him than if he had just written the letter himself. I think the idea was that if I had taken a freshman class from him, and he was writing a letter for me three years later, it would be very unlikely that he could remember enough to say much more than, "Good student, got an A in my class." Having me at least write the first draft would mean getting some more about me as an individual. I don't think there was any tendency for it to be inflated compared to a letter he would have written on his own. If anything, I think I was hesitant to overstate my own case.
These days if I have a really strong student, and I want to go the extra mile to write them the best possible letter, I usually ask them to provide me with lots of written materials to fill in my knowledge about their life. I have them send me their statement of purpose or admissions essay, and I try to draw them out by email or in person about their life, or things they did in my class that I had forgotten. In many cases I don't know until this point that they were in the military, or were the first in their family to go to college, or had had to overcome an invisible neurological disability. I doubt that the result of this process is much different than the hypothetical result I would have obtained by having them write a first draft of the letter.
I teach physics at a community college, and many of my students are poor writers, so letting the student literally write their own letter without revising it afterward would be a disaster. It would be an ineffective letter, and it would also make me look like I didn't know how to write. For the same reasons, if I have a student I really think is great, I will ask them to let me make comments on their statement of purpose or admissions essay before they send it out. Often what they give me is just abysmal, and they have no clue that it's bad. Many of our students are not native English speakers, or have grown up in households with no books. Their humanities instructors don't seem to require them to do much writing, and if they do require them to write, the standards seem to be incredibly low.
Upvotes: 3 <issue_comment>username_4: The official data on this question seem sparse. Judging by Google Scholar results, there are few to no academic publications related to the topic. [One web source](https://www.profellow.com/tips/how-to-respond-to-a-request-to-write-your-own-recommendation-letter/) claims that, according to their poll, 79% of respondents have been asked to write their own letter of recommendation at some point in their lives, but the page doesn't even link back to the original poll.
However, we can also probe the question indirectly. A Google search for the query "write your own letter of recommendation" returns 14.8 million results, while a simple "ask for a letter of recommendation" returns 48.8 million. A [StackExchange response](https://academia.stackexchange.com/a/16541/74892) to a related question contains excerpts from some of those ~15 million websites, and many refer to such requests as "common" and "not unusual" (with regard to both job-related and academia-related situations). There is even a [Wikihow](http://www.wikihow.com/Write-Your-Own-Letter-of-Recommendation) page on writing your own letter. So, overall, it does look like a solid portion of LoR requests result in students writing the letter draft themselves.
Upvotes: 3 [selected_answer]
|
2017/03/22
| 1,533
| 6,159
|
<issue_start>username_0: A significant, perhaps growing proportion of research at universities is carried out by postdocs¹. Postdocs are typically employed on fixed-term contracts. Personally, I know several people who have been employed on chains of temporary research contracts for well over ten years, including some all the way to retirement: their work is good enough to satisfy their PI, but not good enough for promotion (or perhaps they are not interested in one). Some may seek to move to industry, not because the work is more interesting but because a permanent contract is easier to get, as suggested in [this Dutch-language article](http://www.intermediair.nl/carriere/een-baan-vinden/bedrijven/Hoe-versier-je-een-vast-contract-aan-de-universiteit), but a response in the same article denies that there is a brain drain.
Is there any evidence for a brain drain from academia to industry, with researchers lured away by permanent² contracts?
*NB: although personal stories/anecdotes are interesting (I could offer my own) it would be even more interesting to see if there is actual research into this question.*
---
***Edit:*** I welcome answers that challenge the assumption that job security as a researcher is more easily achievable outside academia than inside. Some PhDs do of course become professors, but I've rarely seen professors who have much time to *do* research, as opposed to *supervise* research. My question takes the perspective of people who wish to do research themselves rather than in a supervisory role.
---
¹I use *postdoc* here in the meaning of any time-limited contract which is mainly or entirely focussed on research.
²Of course, no job is certain until retirement, but getting a mortgage without a permanent contract is likely hard/impossible.<issue_comment>username_1: I can only provide anecdotal data, I was going to do this as a comment, but it got too big.
I'm a postdoc (I finished my PhD in 2012), and I'm tired of it. In some cases (FAPESP, Brazil) postdocs don't have any rights or benefits (no vacations, for instance).
Worrying about how I'll provide for my family from next August, when my contract ends, compromises my performance... More than that, I don't have continuity, I can't think of anything medium term, because I have no idea where I'll be in one year.
I really like the projects I'm working on. But this whole thing has taken its toll on me and on my family. I'm seriously considering changing careers to get some kind of stability. I'd like to stay in research, but maybe research doesn't want to stay with me :)
Update: I do not have kids; it is just me and my wife. We are postponing that exactly because I don't want to subject a kid to this. And it's easier to move around when it's only the two of us. If I'm honest, I don't know of any other non-religious profession that requires this kind of commitment. Maybe the military, but at least there you get benefits, sometimes housing. And financial stability, if not geographical.
Upvotes: 3 <issue_comment>username_2: ### Brain drain?
The term brain drain to me has a decidedly negative connotation of a loss (here to academia).
In contrast, I'd like to point out that education is one of the main functions of a university. From that point of view, highly educated people leaving university for industry is not a loss but an *intended outcome*.
And that makes it close to impossible to measure how many researchers who *should* have stayed are leaving because of conditions\*.
Here in Germany, we distinguish between (more-or-less pure) research institutions and universities, where that educational aspect is more pronounced. And in general, academia over here produces far more people finishing "lower" levels than are required to fill the free positions higher up (e.g. we have [2–3 times as many habilitations per year as newly filled professorships](https://academia.stackexchange.com/a/16638/725)).
I grant that there may be a chicken-and-egg question here.
Where I do see a (in my opinion, totally unnecessary) waste is in the transition from researcher to research manager within academia; on that point I fully agree with the OP.
---
### Where to find job security?
>
> I would like to do research without wondering all my life in what country I will live 2–4 years from now.
>
>
>
IMHO this is a totally understandable wish for security and the possibility to plan ahead. I guess there are very few people who don't need this for their whole life.
>
> The question does suppose such is more achievable outside academia than inside, but do correct me if I'm wrong.
>
>
>
I'm not sure whether you get much more of this security with a career in industry, at least not without paying for it with other drawbacks.
* The father of a friend once said that you may expect to be able to work *where* you like or *on what* you like, but to find employment covering both wishes would take extremely good luck.
* Industry may also expect you to move to a different country, or you may be transferred to another site which is, say, 200 km away.
* Also, those highly interesting start-up companies where working is so much fun and everyone is so enthusiastic and full of ideas [like in academia] tend to fail (not because fun at work or enthusiasm is inherently wrong, but because they are high-risk enterprises; or, e.g., the founders move on to a different job and the thing is silently closed down).
* OTOH, I know people who got themselves technical positions in academia in order to have long term contracts and reliable working hours.
* I took a different route and am now somewhere between industry and research (feel free to contact me if you'd like to know more)
**In the end you'll need to find out for yourself what your priorities are and how much career and what else you are willing to trade in for job security.**
---
\* Personally, I have left academic positions because of conditions - not the length of fixed-term contracts, though. And I found the official length of the contract far less important than how the respective institution (supervisor) deals with the fact that they can offer only fixed term contracts.
Upvotes: 2
|
2017/03/22
| 757
| 3,295
|
<issue_start>username_0: I am close to finishing a PhD in applied mathematics and I'm looking to develop future projects that go beyond my dissertation topic into other domains. One of the major subjects my advisor has suggested that I look into is an extremely saturated, but still largely "unresolved", field. While I find the subject incredibly fascinating and challenging, I am also intimidated by the amount of work that has already been done, and I worry that I might lose the niche I've dug out for my dissertation by trying to play in someone else's big pool. I feel like I might have something to contribute, if nothing else from a "translational/interdisciplinary" perspective (applying the expertise gained during my PhD to a different field), but is there a good way to feel out where exactly I fit in? I suppose a logical answer to my own question would be to identify a post-doc position with a leading group in the field, but I am interested in hearing other opinions.<issue_comment>username_1: This is a mathematics-specific answer.
I have the vague feeling that what your advisor means by "look into a subject" is different from what you think he or she means.
Most mathematicians have many failed projects in their file cabinets; in fact most probably have at least three and maybe ten failed projects for every successful one.
So, when your advisor suggests looking into this new subject, perhaps all they are suggesting is spending a couple weeks looking at some of the problems in this subject, looking at all the approaches that have been tried, and thinking about whether, from your different perspective, you have any different ideas that have some chance of being successful. If you think you have an idea, you can spend another couple weeks on it to try to see if it helps. If you don't have any ideas, or if your ideas turn out to not go anywhere, then you move on to a different set of questions.
All this costs you is a few weeks of time, and even if you don't have any ideas right away, you learn more about an important subject that you didn't know so well before. Maybe you don't have any ideas for this subject, but maybe you'll learn enough about it so that you can apply some ideas for this subject to a third subject.
Edited to add: In the context of your research career, this means you don't have to leave the niche you've dug out with your dissertation. You can continue working on questions related to your dissertation, and in addition, you can spend some of your time thinking about how what you know might contribute to this other subject. Where you fit into this new (to you) community will naturally be determined by the actual contributions you manage to make (noting that not all contributions come in the form of a paper).
Upvotes: 3 <issue_comment>username_2: If you browse through the websites of top-10 schools (or researchers), you might get a "feeling" for where their research is heading. For example, you can check what their newest postdocs are working on, PhD students' thesis titles, etc. You can also do the same thing by checking newly funded NSF proposals. This may not give you a definite answer, but it might help you establish a "directional sense"
PS.
I'm in engineering not math.
Upvotes: 4 [selected_answer]
|
2017/03/22
| 2,795
| 12,390
|
<issue_start>username_0: In some countries, labour law requires that if an individual has been employed on temporary contracts for a certain amount of time, the employer must offer them a permanent contract (or let them go). Employers may be reluctant to do so when labour law stipulates they need a reasonable cause to fire employees; however, if a company cannot afford to hold on to people, they can be, and are, let go.
Somehow, different principles seem to apply within universities, where postdocs¹ and other research-funded staff are often held on contract after contract, sometimes for an entire career. The postdocs at my university want to persuade the university to offer postdocs a permanent contract, but with the understanding that if the money source for their salary runs out, they will be let go, just like employees in industry would be. I'm told some universities already apply that principle. It would not increase the *de facto* job security for postdocs much, but it would open up the option of getting a mortgage for those who expect to stay in the same city for a long time.
Considering that universities could let go of staff if they can show they can no longer afford to hold on to them, why are they so reluctant to offer postdocs a contract that doesn't stipulate an ending date?
*NB: to the best of my knowledge, this phenomenon is global; therefore I am interested in answers that are either generic or specific to a particular country*
---
¹By postdoc I mean anyone employed by the university with a role primarily to do research. Formally my employer calls this "research funded staff" but colloquially this group is referred to as postdocs regardless of age.<issue_comment>username_1: **Because they can.** It isn't that companies are less reluctant than universities to offer permanent positions; universities -- at least in many European countries -- are in a better position legally to use fixed-term contracts.
Universities' **interests** as employers are not so different from those of companies. They like flexibility.
* More and more, university research is funded by third party grants. This is a rather new development in European higher education. For example [in Germany, between 1999 and 2011, third-party funding increased by 204%, while government-provided research funding increased only by 42%](https://www.academics.de/wissenschaft/der_druck_waechst_56991.html). These grants have a fixed duration and therefore don't allow for the long-term financing of permanent positions. To create permanent positions, financing would have to rely more on a university's basic budget, in turn requiring larger university funding, which in Europe typically comes from public education budgets. Thus, at the end of the day this is a political decision about the allocation of public spending with all the distributive bargaining that it entails.
* Postdocs are hired by PIs or department heads with permanent positions. Those have mixed incentives. On the one hand, PIs need postdocs to keep their research going. On the other hand, PIs want to remain free in their own career choices, which implies the freedom to move to a different institution. In that case, they either have to convince their current postdocs to come along and the new institution to bring those postdocs on board. The latter impairs their own 'hireability' and their bargaining stance vis-à-vis the new institution; the first depends on the goodwill of the postdoc. Or the PI can leave the current postdocs behind, but this will not be welcomed by the old institution, as it impairs its attractiveness for any successors who might want to bring in or hire their own staff.
Thus universities have similar interests as companies about flexible employment (although for partly different reasons). However, they have different **opportunities**:
* Short fixed-term contracts indeed allow more flexibility than permanent positions. Letting go of employees on permanent positions is costly for the employer, and it is not easy: compensation has to be paid, notice periods and social criteria (determining e.g. who has to be fired first) have to be obeyed, and perhaps even "social plans" have to be negotiated with the unions. The scope and stringency of dismissal protection varies between countries; by and large it is stronger in Scandinavian social-democratic systems and in continental conservative systems with a tradition of "neocorporatist" social partnership, such as Germany and France. It is more limited in liberal systems like the UK, the U.S. and in Eastern Europe. In any case, the point of dismissal protection is obviously to make dismissals more difficult.
* Employers can use fixed-term contracts to work around this constraint, but governments set legal limits to this strategy by stipulating when work contracts can be time-bound. In Germany, for example, the *Befristungsgesetz* (time-limitation law) says that companies can use fixed term contracts for more than two years only for certain reasons (e.g. parental leave replacement). However, the legal bounds of fixed-term contracts are much more lenient for universities. The *Wissenschaftszeitvertragsgesetz* (fixed-term contract law for the sciences -- a nice German compound noun) allows fixed-term contracts for various reasons, most of which are trivial in academia. Examples include that the position is financed by soft-money (i.e. postdocs) or that the position also serves a training purpose (PhD students). I assume that similar laws exist in other countries.
In sum, universities as employers have similar interests compared to companies, but they have different opportunities.
Upvotes: 2 <issue_comment>username_2: Regarding the general employment laws, there is usually a catch with the "let go if you run out of money" approach. Once you trigger that process, the law puts limitations on how you can hire in the future. For example, in Finland, if you lay off someone because you want to discontinue that role in the company, you are not allowed to employ anyone for the same or a very similar role for a year, and you are supposed to offer them the option of transferring to a different role with open vacancies, even at a lower salary. (Which logically makes sense, because otherwise companies would always use that or some similar extenuating-circumstances argument as a loophole to fire anyone unwanted who holds a permanent contract.)
Upvotes: 2 <issue_comment>username_3: Using the term "postdoc" may cause confusion, because to me a postdoc is by definition an early-career researcher hired for a limited period. But the question as I understand it has nothing specifically to do with the terminology. It amounts to why universities don't offer long-term contracts conditionally on maintaining adequate external funding, but rather offer successive short-term contracts.
One answer is that sometimes they do offer long-term contracts. In U.S. academic medical research, it's common to have long-term "soft money" positions for which most or all of the funding comes from grants. There are even tenured soft money positions, in which you have a guarantee that you cannot be fired without good reason, but where there is no obligation for the university to pay the salary except as supplied by external grants. (Sometimes they are responsible for a small percentage of it, and sometimes none at all.) This becomes less common as you move further from medicine, but long-term soft money positions exist across science and engineering in the U.S., at least in small numbers.
Why not always do this? One reason is to avoid raising expectations. My impression is that it's rare to offer a long-term soft money contract unless there's a reasonably high chance that money will continue to be available for quite some time. Even if there are no legal obligations, it's terrifically awkward to fire someone from a position for which they mistakenly believed funding would continue, and it's easy for people on soft money to be irrationally optimistic. (You see this frequently with tenure-track jobs: someone starts by recognizing that they genuinely might not get tenure, but after a year or two their nerves have settled and they find it tougher to take the possibility seriously.) This means there's a strong incentive not to offer long-term contracts that are too speculative.
Another issue is whether the department or university really wants this person to have a long-term affiliation. There's sometimes a feeling that people on short-term contracts don't really count for the department's reputation, since they are just visiting anyway, while people on long-term contracts count more. This means other faculty members have more of a reason to exercise oversight.
Upvotes: 4 <issue_comment>username_4: This answer is from a mostly German perspective, but I guess parts of it would apply to other countries with similar labour laws.
The key problem is who to "let go" when funding is short. Are you always going to fire the person that happened to work on the most recently finished project if there is no new funding coming in around that time? Well, according to labour laws **you can't do that** with unlimited contracts.
In industry, if money is running short for a company, it's not only up to the company to decide who will have to go and who will stay. Instead, a dismissal plan has to be carefully worked out between, for example, the union and the company, and deciding who has to go is to a large extent based on social aspects and the time someone has stayed with the company. Take this to academia, and you'd always have to fire the youngest person when money runs short, which universities surely don't want to do.
Also, in contrast to companies who can run up losses against their capital for some time, universities (at least in the systems I know) can not do that, so they don't really have the flexibility to tide over a bad time in funding.
Upvotes: 3 <issue_comment>username_5: I'm going to question the concept of a "continuing funding source" and its expiration. Most postdocs paid from grants are in fact paid from different grants at different times, and sometimes from department money. The postdoc may or may not be aware of this, but in all cases where I have paid postdocs (quite a number of cases), we've frequently switched the account from which someone is paid, in some cases making the change retroactively.
In other words, it is difficult to define what it would actually mean to have continuing funding. If I have paid a postdoc from 3 different grants in the past, all of which have run out, but I have a fourth grant that was approved, would that require me to continue paying the postdoc until that fourth grant also runs out?
Upvotes: 2 <issue_comment>username_6: *Adding (quite late!) a slightly different perspective to the excellent answers already there.*
As far as I know, in many countries in Western Europe, the long term dynamic is actually the opposite: a few decades ago, the academic workforce used to consist of a large majority of permanent employees (with public servant status or similar). Quite consistently with this model, funding was largely channeled directly to research institutions which then enjoyed a quite high degree of freedom as to which projects they choose to fund.
In the past couple of decades, these European countries progressively switched to the US/UK competitive model. Similarly to the capitalist model in economy, competition is meant to encourage the best researchers to strive for the best funding opportunities. This naturally implies a significant reorganization of the way resources are allocated: in order to maintain a permanent competition between researchers and institutions, most of the funding is allocated on a "project" basis: researchers submit their project, projects are evaluated and compared against each other, and the winning team gets the money. Projects must be limited in time for the same reasons.
Given the very high degree of specialization involved in every research project and the uncertainty for any specialized team to secure funding in the long term, it wouldn't make any economic sense for an academic institution to keep a large proportion of permanent researchers on their payroll; the model clearly calls for hiring specialized research staff on a temporary basis, for the duration of the funding.
Upvotes: 1
|
2017/03/22
| 1,378
| 5,607
|
<issue_start>username_0: A student advocate at my institution recently claimed that grad students have more debt than undergrads. This surprised me, because I would expect that typical grad students are the kind of student that would be able to gather more financial aid (scholarships, awards, fellowships, etc.) during their time as an undergrad than typical undergrads. Furthermore, most PhD students (and many masters students) have their tuition waived while working as a TA/RA, so they accrue little/no additional debt. Of course, this likely varies with field, too: students in STEM are more likely to have institutional funding while law or medical students often have to pay their own way.
While I don't know if the statistic I quoted was comparing undergrads who haven't yet accrued their full measure of debt to grads who have all of their undergrad debt, it raises an interesting question:
**Do undergraduate students typically have more student debt than graduate students have, and (broadly) how does this vary with field of study?**
The comparison ought to be made after each group graduates from their respective program, since comparing a 1st year undergraduate's debt to a 1st year PhD student's debt clearly would not be helpful.<issue_comment>username_1: I think it's silly to talk about cumulative debt, adding undergraduate debt to whatever a student might accrue as a grad student; we should instead ask whether grad students accumulate more as grad students than undergrads do as undergrads. And the answer, I'm pretty sure, varies enormously across fields. Many students at top law schools borrow the entire cost of their education (say $220K as of a year or two ago), but they expect to earn enough to pay down that debt and make it worthwhile. Some places offer fellowships, even covering the total cost over those 3 years, to attract excellent students. But most are borrowing up to their noses. Same for med schools, I imagine. In arts & sciences, students often can get fellowships or funding as research assistants or teaching assistants. When I got a PhD in computer science I didn't borrow any money as a grad student.
Bottom line: I think the typical debt load in certain fields (law, medicine, maybe business) is higher per year than for undergrads -- who generally can't borrow the entire cost -- and in others it is (close to) nonexistent.
Upvotes: 0 <issue_comment>username_2: While the article "[The Shame of PhD Debt](https://theprofessorisin.com/2014/01/22/the-shame-of-ph-d-debt/)" does not directly answer the question of whether graduate or undergraduate students have more debt, you may well find it a worthwhile read because it explores some of the reasons for accruing significant debt in graduate programs. You may find the [surveys](https://docs.google.com/spreadsheet/ccc?key=<KEY>&usp=sharing) referred to in this article especially useful for thinking about and understanding debt at all levels in higher education. These surveys contain self-reported data from graduate students across disciplines, who explain how much debt they accrued at undergraduate/graduate levels and why.
According to the data from the second survey (which I *strongly* encourage you to have a look at), there is great variety among amounts of debt even within fields. Although it does appear that 'hard science' types are more likely to escape both graduate and undergraduate education with less debt than arts & humanities students, I found it surprising that many of the people with zero debt were from arts, humanities, and social science fields. Also, most of the people with zero debt did not necessarily have terrific scholarships. Rather, they were independently wealthy or had a spouse/partner supporting them. Again, I suggest you look at the data, yourself. (If anyone feels so compelled, perhaps we could discuss our thoughts on the data in chat to avoid clogging the comments section.)
Note: I think it is worth pointing out that some people conceptualize time in graduate school as 'lost wages,' and if a student also has unpaid undergraduate debt, that debt continues to grow during graduate school. Lost wages and loan interest could mean that many graduate students end up with far more debt than some undergraduates, though I am sure this varies greatly by field. I cannot find a source that directly answers your original question, but according to the [Institute for College Access and Success](http://ticas.org/posd/map-state-data), 68% of (public/non-profit) college seniors in the U.S. left undergraduate programs with debt, with an average of $30,000. This should provide some frame of reference for thinking about the accrual of debt through undergraduate into graduate school.
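As a rough illustration of how deferred undergraduate debt can grow during graduate school, here is a quick sketch; the 6% rate and five-year deferment are invented for the example, and real loan terms differ:

```
# $30,000 of undergraduate debt accruing 6% annual interest, untouched
# through 5 years of graduate school (illustrative numbers only).
principal, rate, years = 30_000, 0.06, 5
balance = principal * (1 + rate) ** years
print(round(balance))  # ~40147: roughly a third more debt at graduation
```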
Upvotes: 0 <issue_comment>username_3: I think you are underestimating the number of graduate students who do not get any form of financial support. Like you said, most students in STEM or engineering PhD programs receive full support. PhD students in humanities/social sciences departments are less likely to be funded than STEM students, but are at least given tuition remission.
On the other hand, the majority of MA students in *any* discipline receive no support. Then there are MBA students, law students, doctoral residents, etc. They make up a big portion of all graduate students, and most of them have to take out student loans to complete their graduate programs.
For example, a typical law school graduate would end up with $60k–$100k+ in loans by the time they pass the bar and get a job.
Upvotes: 3 [selected_answer]
|
2017/03/22
| 1,253
| 5,323
|
<issue_start>username_0: I am writing this after much consideration, in the hope that I will get some practical thoughts for dealing with the situation. I have recently been diagnosed with major depression. Although I have never talked about it in public, for most of my life I have been dealing with it without even knowing there is help out there. Most of the time I would just ignore it or pretend that I was feeling nothing. My emotional ups and downs have been a major barrier to my progress. Much of the time I found myself pushing through on extra motivation, but then it wears off again.
Most of my academic life I have been told that I am not consistent. In high school, I would do very well in one term and then very badly in the next. I was in no category. My teachers thought I was a good student who simply didn't study sometimes, and that this was why I was not consistent. But I know the hard truth. Listening to sad songs, feeling sad for simple reasons, getting angry, and feeling lonely, I thought these were internal characteristics of me that I could never overcome. After high school, with lots of motivation, I was able to get into one of the top-50 colleges in the US. Note that I am an international student. I made it there with a financial grant and a scholarship from the college.
After I came here, I immediately felt the cultural difference, but I was quick to blend in. My depression, however, was not so quick to fade. I worked hard to sound and speak like a native speaker and learned English fluently, although I was still struggling with my emotions. I was even embarrassed to talk to anybody, because there are so many talented students at our school from all over the world and I felt like an impostor. Then I saw this view on growth mindset by Prof. <NAME>, and I decided that no matter what happened, I would make it to graduation.
I am a rising senior now. Despite all those rocky events in my academic progress, I was still hanging in there, and then I decided to get expert help. That is when I learned about my depression. I have just started to take medication. This semester, at one point I was so depressed that I did not go to take an exam. I was diagnosed with major depression after that period. Most of my professors were kind and gave me second chances on homework and lab work. However, this particular professor seems not to have any empathy whatsoever. I tried my best to make him understand, but he insists that I drop the course. And if I drop it, then I have to take the course in the summer, which will jeopardize my internship offer. Another class next semester follows this one, and I must take this class now in order to graduate on time.
I would like to know whether there is any rule in academia for students with a mental disability that could help me at this point, and whether you have any suggestions for making my academic experience better in general, considering my situation.<issue_comment>username_1: There is a structure for dealing with this in the U.S.: a Section 504 Accommodation Plan. If you get your health status established with the Office for Students with Disabilities (it might have a slightly different title at your school), you won't have to negotiate directly with an individual professor. That office will do that for you.
Your school should have instructions posted online, and you can also email, phone or visit. I will warn you that some of these offices are warm and fuzzy and supportive and some are not. You might do well to start assembling some preliminary documentation before meeting with them. If you could ask around in your university to find out what others' experience with that office has been, that could be helpful.
You will need medical documentation of your diagnosis, how it affects your ability to function academically, and recommendations for accommodations. It is often helpful to brainstorm a list of accommodations you think could be helpful, and share it with the medical provider who is going to be your primary documenter.
In the long run, I think it will be helpful if you can build your identity as a student with a disability in a broader way than "student with major depression," and see yourself as part of a larger group of students with many different types of disabilities. I love this article that documents the beginnings of 504: <https://dredf.org/504-sit-in-20th-anniversary/short-history-of-the-504-sit-in/>
I have to warn you that in some places it can take a while to get this set up. Since you are almost done, it could be a frustrating process. But if you have any interest in grad school, it would be worthwhile, in the long run.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I'm so sorry to hear about your struggle. Many people do not understand that major depression is a medical condition and not a personality deficit, which may be why some professors won't understand.
Since you have a diagnosis, you already have made the first step in receiving a disability accommodation. Go to the student disability office and you will talk with a counselor/advocate that will help you develop an accommodation plan. You can also ask him or her to arrange a meeting with your professors so that your professors understand that this is a valid issue for accommodation.
Good luck!
Upvotes: 0
|
2017/03/23
| 3,006
| 11,794
|
<issue_start>username_0: At most schools an A is a 4.0, an A- is a 3.7, and so on. I feel like this system is not really representative of a person's true skill. For example, a person who got a 100% in a class will get the same GPA as a person who got a 94% in the class. However, a person who gets a 91% in a class will get a significantly higher GPA than someone who got an 89%. Why don't schools base GPA off the actual percentage that someone gets in a class instead of their letter grades?<issue_comment>username_1: The big reason: exact numerical scores are not comparable across classes and professors.
Suppose I told you that two students took a class offered in different semesters. We look at their transcript and see that student A scored a 94% with Professor X and Student B scored a 99% with Professor Y. Student B beat out the first person by five percentage points, but do we actually know that Student B is a better student? Maybe Professor Y is a soft grader or Professor X had a really harsh curve that semester. We don't really know whether Student B is better. All we really know is that both students did pretty well.
Hence, grades tend to be assigned and interpreted with a large degree of subjectivity, which fits the ABCDF model better than a score-based model. The general interpretation is:
```
A - excellent
B - good
C - average
D - needs improvement
F - failing
```
If the first and second student both get A's, then this means that experts in their field (the professors) have said that both students did excellent work. This isn't a perfect system, but it is about as good a comparison as you can get.
Upvotes: 3 <issue_comment>username_2: ### Historically, simplicity
For most of history, it would've been tedious to average all of a student's course grades weighted by credit hours. It was a lot easier to call an "A" a 4, a "B" a 3, etc., then just average.
It made curving a lot easier, too. Teachers could just sort the grades, then the top 10% got an A. Who wants to calculate an actual normal curve by hand? Heck, how many teachers even knew how to?
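For comparison, that sort-and-cut curve is trivial to express in code today. A minimal sketch, where the 10% figure is just the example above rather than any standard:

```
# Rank-based curving as described above: sort the scores, top 10% get an A.
# Illustrative only; real curving policies vary by instructor.
scores = [88, 67, 92, 75, 81, 59, 95, 70, 84, 77]
top_n = max(1, len(scores) // 10)                # size of the top 10%
a_cut = sorted(scores, reverse=True)[top_n - 1]  # lowest score still in the top 10%
print([(s, "A" if s >= a_cut else "below A") for s in scores])
```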
### Today, cultural inertia
Today, it'd be a lot easier to do the calculation without all of the arbitrary rounding. We can do away with the arbitrary break point that's between 92.9% and 93.0%, and we can do away with how a 93% is the same thing as a 100%.
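To make that break point concrete, here is a sketch using one common, but far from universal, cutoff table:

```
# One common letter-grade cutoff table (schools differ; illustrative only).
CUTOFFS = [(93, 4.0),   # A
           (90, 3.7),   # A-
           (87, 3.3),   # B+
           (83, 3.0)]   # B (lower grades omitted for brevity)

def grade_points(pct):
    for floor, points in CUTOFFS:
        if pct >= floor:
            return points
    return 0.0

print(grade_points(100), grade_points(94))  # 4.0 4.0: six points apart, same outcome
print(grade_points(91), grade_points(89))   # 3.7 3.3: two points apart, different outcomes
```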
We don't need a simple method for curving because we've got computers and spreadsheets that'll calculate an actual bell curve, rather than the weird sorta-bell-curve that comes from splitting students up into discrete grade categories like A/B/C/D/F.
Today, we can do away with that common situation where students reason that a 75% on the final exam will yield the same letter grade as a 100%, leading them to not study or review the material.
But, suppose that an instructor or dean reads this question and agrees that the current system is messed up. What could they actually do about it? They're caught up in cultural inertia; same reason those of us in America still use inches/feet/yards/miles rather than meters with a metric prefix.
Upvotes: 0 <issue_comment>username_3: >
> At most schools an A is a 4.0, an A- is a 3.7, and so on.
>
>
>
This is true at most American colleges and universities, yes. (It's not true in most of the rest of the world, and the discrepancy is an issue when one wants to compare students from different countries, e.g. in graduate admissions.)
>
> I feel like this system is not really representative of a person's true skill.
>
>
>
I have given out hundreds of grades in university courses, and I make no claim that a student's course grade is "really representative of their true skill." For instance, it has happened that I wrote grad school letters for an A- student and an A student for whom my primary interaction was teaching them the same course and that I wrote an overall stronger letter for the A- student: based on my interactions with him, I felt that his true skill was higher, whereas the A student did noticeably better on the midterm and the final (though both did very well).
>
> For example, a person who got a 100% in a class will get the same GPA as a person who got a 94% in the class.
>
>
>
There seems to be a premise of this question that universities use standard "letter grade cutoffs." This is really not always true (though it is sometimes true, and it would be interesting to understand this better). These cutoffs have not been applied at any of the universities I've been affiliated with as a student or instructor: University of Chicago, Harvard, McGill, University of Georgia. (See e.g. Question 12 [here](http://www.bulletin.uga.edu/PlusMinusGradingFAQ.html).) To be honest -- and in part because my own experience with American universities, though quite temporally extensive, is far from universal and probably even from generic -- such talk reminds me of high school, and I get surprised when university students think too seriously about it. (And this sometimes includes my own university students!) In the STEM fields in particular, it is common for exams to be written in such a way that a 50% grade would be a clear A and a 90% grade would be preposterous.
Let me say, though, that I have not seen it go the other way: in any class I have ever taken or taught, yes, a 94% is worth the same letter grade as a 100%. One could go on at great length about this, but for now let me say: I see nothing inherently problematic here from anyone's perspective.
Rather I would like to call attention to the fact that there is a mistake above: students who get a 100% and a 94% will probably get the same *course* grade. GPA means grade point *average*. This error becomes more clear as follows:
>
> However, a person who gets a 91% in a class will get a significantly higher GPA than someone who got an 89%.
>
>
>
No, this is really not the case. The typical American university student takes about 40 courses overall. So a student who gets A's in all but one course and fails the other course will have a GPA of (39\*4 + 1\*0)/40 = 39\*4/40 = 3.9. If that student got at least a C+ instead of an F, their GPA rounded to one decimal place would be 4.0. In fact, my undergraduate GPA rounded to one decimal place was 4.0, though I remember well the course in which I received a B (the first graduate course I took) and more vaguely that I got less than A grades in at least two other courses. (Because culturally speaking a "4.0 GPA" generally implies all A's, I reported my GPA to two decimal places.)
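A quick check of that arithmetic, assuming equal credit weights and taking a C+ as 2.3 grade points (conventions vary):

```
# 40 equally weighted courses: one F versus one C+ among 39 A's.
with_f = [4.0] * 39 + [0.0]
with_c_plus = [4.0] * 39 + [2.3]
print(round(sum(with_f) / 40, 2))       # 3.9
print(round(sum(with_c_plus) / 40, 1))  # 4.0 to one decimal place
```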
This is, I think, a really key point: the inference of skill and achievement from course grades is a **statistical process**, and like most statistical processes, investing too much meaning in any one data point is dubious. I am currently directing graduate admission in the math department at UGA. There are occasions when individual course grades are meaningful to us: a student who has generally good grades but quite poor ones in two or three key courses that are the most foundational to graduate success is viewed negatively beyond the influence of the GPA (and this of course is why we do not just look at GPAs but get much more information, including full transcripts and lists of textbooks from the courses taken in the major). But for one of these key courses -- say real analysis -- what if one student gets a B+ and another gets an A-? Then we really don't care, and if we don't care, I'm not sure who would.
Let me finally make a few more remarks about the system of letter grades at universities.
* There is nothing *especially* clever or apt about it. Another answer claims that we have the system basically due to historical inertia. The answer goes on to say some other things that I disagree with, but I certainly do agree with this. It is easy to pick apart the particulars of the system -- why no E? why no A+? [or if you do have an A+ -- as some universities do -- how do you figure that into the GPA?] Why pluses and minuses at all? (In fact, UGA had no plus/minus grades for many years, and the [aforelinked FAQ](http://www.bulletin.uga.edu/PlusMinusGradingFAQ.html) is in fact an FAQ about the use of plus/minus grading!) Most importantly **Why choose the same grading scale as is used in K-12 education, so that students will come to college/university with many preconceived notions about how grades will be assigned that they will gradually find out can be quite inaccurate?**
* A wider range of grades is not necessarily "better" or "more accurate." You hear a lot about grade inflation, and it is interesting that the language subtly conveys that it is somehow a problem. It is much more interesting to try to explain *why* it's a problem. One argument advanced along those lines is that it blurs distinctions between academic achievements. My colleague <NAME> [wrote a nice article demolishing this argument](http://www.slate.com/articles/life/do_the_math/2002/10/dont_worry_about_grade_inflation.html) some years ago. The main idea is the one I gave above: students are taking a large number of courses. We could have just two possible grades, "excellent" and "very good," and as long as instructors assign the "very good" grade to students in a broad enough set of circumstances, over time the magic of probability and statistics will serve to separate the students. (A toy simulation of this point appears after this list.) In fact I think I would prefer a grading scale that has fewer grades and that is not reminiscent of high school grading. For me a very natural scale would be one that has three grades: the lowest one is given to students who have not met (sufficiently many) clearly defined course objectives. It would roughly correspond to the "F" in the current system, but it should be given for failure to meet objectives, *not* for a numerical score in a certain range or for the bottom X% of the class. The top grade should be given to students who excelled in the course in some meaningful way (*not* for a numerical score in....). The middle grade should be given to everyone else. I think that such a system would lose little or no information from the present one, and more to the point, it is to a large degree how I think about letter grades as given.
* I admit that it is entirely debatable, but I actually feel that it would be a net *negative* to record percentage grades in transcripts rather than to "discretize" as is currently done. If you're a young student, then maybe you're proud of your 98% and want it to be recognized as better than your colleague's 94%. But a context in which students are fighting over that 4% is not necessarily conducive to better learning. For me, this situation conveys strong memories of high school, in which our (weighted) percentage scores in each course were used to compute our **class rank**. This led to a cohort of students who were highly motivated to get the highest possible scores on every exam. I remember students studying for several more hours in order to make sure they had memorized 100% of the material instead of 97% of it. But this memory space was relatively short term: it would have to be vacated for the next course, if not the next exam. These bright young people could have used this time in more valuable ways, academically and otherwise. By the way, the grapes may be sour but perhaps not in the way you'd expect: I was the valedictorian (i.e, class rank 1) of my high school class. At the time I had the suspicion that this achievement was less significant than it was being made out to be. Looking back from the middle of an academic career I can now confirm this. University students are significantly more grade-conscious than is beneficial to them in any way, including academically. Blurring the distinction between 98% and 94% seems quite healthy.
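Here is the toy simulation promised above. The ability levels and probabilities are invented for illustration; this is my own sketch, not anything from the linked article:

```
# Even with only two grades ("excellent" = 1, "very good" = 0), averages
# over a 40-course transcript separate students of different ability.
import random

def transcript_average(p_excellent, n_courses=40):
    return sum(random.random() < p_excellent for _ in range(n_courses)) / n_courses

random.seed(0)
stronger = [transcript_average(0.8) for _ in range(1000)]
weaker = [transcript_average(0.5) for _ in range(1000)]
print(sum(stronger) / 1000, sum(weaker) / 1000)  # roughly 0.80 vs. 0.50
```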
Upvotes: 2
|
2017/03/23
| 3,980
| 17,140
|
<issue_start>username_0: I'm a grad student teaching assistant for a freshman undergrad non-major multivariable calculus class.
What they didn't tell me is that the second half of the semester is not calculus but "finite math". This week's topic is game theory, which I've never learned myself.
In preparing tomorrow's material, I realized I was getting all the answers wrong from one of the sections, and have no idea how to do the problems I'm supposed to teach. Normally it only takes a day to read the textbook and prepare.
I've been up for hours struggling with the same problems and need to leave in 5 hours to get to class on time - and would still like to sleep.
My school never gave me any training or told me any rules; am I allowed to just cancel class?
I'll take any advice at this point.
Edit: I would just like to say how much I appreciate everyone's responses. I'm running on little sleep, (and *extremely* new to Stack Exchange so still learning how the site works) but you've all given me great ideas when I was truly stressing out.
Edit/Update: it turns out I was not doing it incorrectly; I had just worked algebraically when I should have used a graph, so I was considering points that I didn't need to, which is why my answers were wrong. I met up with the person who TA'd the course last semester, and that's how I figured out my mistake. I sort of winged the presentation and got the correct answer; now I just have to repeat the lesson for the rest of the sessions.
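In case anyone else hits the same snag, the graphical method and the algebra agree for the standard textbook case, a 2×2 zero-sum game with no saddle point. A minimal sketch with a made-up payoff matrix (I'm assuming this is the type of problem the course covers; check your textbook's conventions):

```
# Optimal mixed strategy for the row player in a 2x2 zero-sum game with
# payoff matrix [[a, b], [c, d]], assuming no saddle point (otherwise
# pure strategies are optimal). Illustrative sketch only.
def solve_2x2(a, b, c, d):
    denom = a - b - c + d
    p = (d - c) / denom          # probability of playing row 1
    v = (a * d - b * c) / denom  # value of the game
    return p, v

print(solve_2x2(3, -1, -2, 4))   # (0.6, 1.0): play row 1 with probability 0.6
```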
Thank you all for your input.
**Edit** (from comment): Game theory is just one week of the course; they have two lectures with the professor prior to one 50 minute session with me. This is the only session with me out of a year long course that has been or will be like this.<issue_comment>username_1: Use tomorrow's class for review of what's been covered so far, including problem solving. Then go straight to your supervisor.
Do you have a graduate student employee union?
**Edit** (after having read the additional information in a comment -- which I just incorporated into the question):
The additional information gives me a different view of the situation than I first understood.
Now that I've gotten this fuller picture of the situation:
* Prepare your discussion sections more in advance, so you'll notice in good time if a particular topic needs more careful preparation.
* Try a different resource, and/or ask the professor or a more advanced student for help understanding the material.
Upvotes: 5 <issue_comment>username_2: You almost certainly aren't supposed to cancel class arbitrarily, just because you feel like it. If you admit that you are cancelling class because you weren't able to prepare adequately, you'll look bad. If you have to cancel, your safest option may be to pretend you aren't feeling well or have some other extenuating circumstances. Aside from the dishonesty, that will put a lot of pressure on you to catch up in time for the next class. (If you cancel because of illness and then have to admit you really aren't prepared to teach this material, it will look terrible.)
Instead, I'd recommend the following priorities:
1. If you can sort things out in the five hours before class, that's the best option. Do you have any friends who have taught the class before and could help? That would be far more efficient than trying to figure it all out yourself while in a panic.
2. If you can't prepare in time, can you rearrange the material a little? For example, you could insert a review of what you've done so far before you move on to game theory, or you could start by introducing just the aspects you understand, and delay the hard parts until next time.
3. If you are completely stuck and can't figure out any plan that won't humiliate you and waste the students' time, then it's probably best to call in sick and then work as hard as you can to catch up.
Upvotes: 4 <issue_comment>username_3: I would like to add on top of the answers of @username_1 and @username_2.
So, first thing, in 5 hours it would be very hard to come up with something solid that you would be comfortable lecturing about; thus I second the suggestion of doing some review and exercises for now.
My opinion is: **you can do it**.
If you have been teaching calculus so far, chances are that you are a very competent student. You can learn almost at the same time as you teach. And this does not have to be a stressful experience; I would say you can be upfront with your students and let them know that you are learning at the same time as them (they already know that you are a student too).
Actually, this can be beneficial for both parties: *you*) are learning a new topic, and because of that you better understand the difficulties that students may come upon; *they*) have a TA who understands their difficulties and is able to explain things in a way that *a student* would learn (it may not be the best way, but I would say that is OK).
If they ask you questions you do not know how to answer, again, be upfront: say that it is a good question, that you will think about it and research it, and that you will explain it later, either in class or by e-mail directly to the student, whatever you prefer.
In order for this to work, as mentioned in the other answers, you have to talk to your supervisor and let him know that you were not prepared to lecture on this topic. Ask him for help learning those concepts, or for a referral to someone who could provide such guidance.
Upvotes: 7 [selected_answer]<issue_comment>username_4: Sounds like you took care of it well.
For those considering this in the future, I'd say you ***can*** cancel class. There's no need to be very specific with the students about your reasons why.
As a TA, I was to make a 4-hour drive back to campus after a weekend away, and my car wouldn't start. Not much I could do. Not a situation I was excited to explain. But not one with much choice at that point. It means less learning time, which is no good for anyone (if you are properly interested in seeing them learn as much as possible as well!). But there are all kinds of reasons people cancel class, many where teachers don't want to go into details. And even some potentially valid ones involved with class preparation (a large project/test you'd built the class time around suddenly disappearing into the computing void, vital class materials not coming in, etc). It's not something you should consider lightly, but if it's important to cancel class, then it's important to cancel class. You're fairly unlikely to get significant challenge. The only ones who really might be in a situation to challenge you are your superiors. But even in that less likely circumstance, you can try repeating the limited explanation... or give a full explanation (they may not be TOO happy with it, but they'll probably just have to take it, give a warning about being prepared, and move on). If you've done your best and you're really convinced it's the best option, you do what you have to do, you take any consequences it takes, and you move forward. Quality teachers sometimes have to face unhappy individuals, and if it's your fault, that you didn't pay enough attention to the course plan or did not prepare soon enough, you commit to working hard so it doesn't happen again, and then you deal with the current fallout. You can't change the past; you can only make the best choices in the present. And if that means canceling classes, so be it.
But, indeed, **see if you can adapt your class so that you don't have to cancel** to get around the trouble. Review, cover a topic scheduled for another time, focus on the background aspects that you do understand, show a video, whatever will *usefully* consume the time. Usefully being the key word. Cancelling isn't the worst option. Wasting time is worse. Maybe not worse for how you are thought of, but worse for your students.
If you do have to cancel, try your best to **supplement the missed time**. Send out extra homework. Make a YouTube video lecture. Put together an application project. If the students are expected to check electronic media, you can probably fairly expect them to do it (you could also put a printed copy of your message on the classroom door for anyone who shows up unaware). If they aren't expected to keep track of media, you can probably still reasonably expect them to make up some of the lost time with a bit of extra homework later. Meet them in the middle: you don't want to swamp their schedule, especially with busywork, but giving them, for instance, a 20-minute video lecture plus practice problems posted later in the week, when it was to be an hour-long class, probably won't be seen as too problematic by most... and you can be a little more adaptable for those who don't have the scheduling flexibility to do extra work outside of class later in the week. The point is, if you try to do your job the best you can and are supportive of them in doing it, they'll meet you halfway. I believe every student did the replacement work that I sent out for that class I missed 11 years ago. Undergraduates are usually so excited to get an hour off classwork that they don't mind doing a bit of extra work at home!
**Not understanding material well can be a very dangerous situation**. I saw one of the other answers or comments suggest that being transparent about such shortcomings would lead to revolt, but the alternative, pretending that you know the material well when you don't, can be absolutely devastating. In situations like a preset college course, maybe you don't tell them you're struggling. But you also work very hard to get ahead of it. Going in and trying to work a few "simple" problems when you're struggling with the topic is just asking for a mess. The one thing worse than cancelling class or delaying (or even skipping) material is to come into class and teach it wrong, make repeated mistakes, and absolutely confuse everyone. We've probably all had teachers come in and absolutely make a mess of problems, and we left understanding the material less than before. I had a teacher in my major who repeatedly did that. That's where you lose control of a class: when you pretend to know what you're doing but don't show it. Your number one job as a teacher isn't keeping the schedule, but making sure you're *improving* upon what they learn independently. As such, in many courses, textbook reading is required. Indeed, in many courses the expectation isn't that they're necessarily reading it all the time before class. But when they do the reading, most students should at least have a loose handle on the information. Coming in and making mistakes is bound to destroy that. If you did cancel class, emphasize to them that this is the one time where the textbook reading is even more necessary, so that you are able to pick up and move ahead next week, and reduce the hit of the missed class.
I've certainly been hit with situations through the years where I had new material to suddenly teach. As a private school teacher, I often substituted in a variety of courses. Or in my own classes, like chemistry or history, I offered a little freedom to investigate any topics that particularly interested my students... and ended up doing cram sessions on stomach digestion or Native American tribes. Students threw new-fangled Common Core math approaches at me, and occasionally one would appear so abstruse that I needed to pass on it until the next class so as not to waste time and focus. No one expects us to know everything. Even a major professor will occasionally get a question they don't know how to answer, or a problem that they unexpectedly struggle with. Instead of trying to force it, take a step back, tell them you'll look into it further, and then follow up the next week, or online (or directly with the student outside of class, if it's indeed an unnecessary tangent that would be more detriment than gain to the other students). Pretending to know something you don't seems the cardinal sin in teaching. It suggests the students should learn the same undercurrents of pride and dishonesty. It may not be quite so fitting for college courses, **but there are definitely many teaching situations where you should admit to your students that you don't perfectly understand**. It shows them that you're not so different from them and need to work at it sometimes too. And it ends up encouraging confidence and effort from the students who need it most.
And you did well. If you're hit with a situation where you're just not making it, **don't be afraid to ask for help**. You may be embarrassed, you may even look poor in the eyes of your superior, but it's better to do what is necessary than to pretend. There are no guarantees you'll get the help, but there's often a wealth of assistance right at your front door if you'll just be bold enough to seek it.
Overall, you adapted. You fought hard to overcome it, and you kept the best interests of your students at the forefront. Decorum be darned, that's what teaching is all about. And we need more teachers doing that!
Upvotes: 3 <issue_comment>username_5: This semester, after a few months of teaching a very specialized but little-known programming language (INSEL), my students were pleased with my lessons but expressed the desire to learn something more well-known.
I know Ruby very well, and could write Ruby code in my sleep. It's not really widespread in academia though, so I feared they would also complain about it. In comparison, Python is probably used in every university on Earth.
For some reason, I claimed "I could teach you **RUBY** or (whispering) Python, if you want". All the students cheered "YEAH!!! PYTHON!!!!". The big problem was: I didn't know Python at all! The syntax is often similar to Ruby, but that was about all I knew about it.
I had less than 2 weeks to learn it before teaching it. It was stressful but exhilarating to learn so much new material in such a short time. I used <https://learnpythonthehardway.org/> and began answering easy questions on stackoverflow.
I was also very honest with the students, telling them I still had a lot to learn about the language. I also warned them that I very well might not know how to answer a question. If that happened, we would practice a very useful skill together: using Google with the correct search terms, finding the corresponding StackOverflow thread, picking the correct answer and adapting it to our problem.
During 3 full days of hands-on Python programming, it only happened once. The students were happy and learned a lot. They asked very insightful questions, which often helped me learn the language better. Their progress and feedback also made it easier to adapt the next lesson to their needs.
So yes, you can do it; you just need to be very honest, humble and motivated. Be sure to bring enough material (books or laptop+Internet). That way, if you don't know an answer, you can involve the students in the process of looking for a solution. You will learn something, they will practice an essential skill, and time will pass faster that way ;)
Good luck!
Upvotes: 3 <issue_comment>username_6: Sounds like you or your department is very disorganized. That "winging it" mentality is what got you into this situation in the first place. I suggest you change the way you approach things and get rid of the "winging it" mentality. Know your sh\*t. Teaching is not theatrics; it's not a show. And the teachers who believe it is about theatrics are garbage.
Upvotes: -1 <issue_comment>username_7: A few ideas sprang to mind when I read about your dilemma.
1) Skip ahead to a topic that you *are* ready to handle. That would give you a couple of days to keep working on the topic that has you flummoxed. You can always go back to it next week.
2) Buy some time by holding a “review session”. Take a handful of the more recent topics covered, and reexamine those topics. One good way to do this is to come up with a complicated problem that requires a couple different techniques covered. That way, this becomes a worthwhile lesson in how to synthesize some of the techniques recently taught in the class.
3) Try turning this into an active learning exercise. Typically, you would solve a problem in front of the class. This time, have the students guide you into solving the problem. Chances are you’ve got a few bright students in your class that can crack this nut.
Admittedly, #3 is a risky proposition. If the students ultimately encounter as much trouble as you’ve had in figuring this out, it might make for an awkward moment in the classroom. But I have tried this before, and it was successful. (Even if they can’t solve the problem completely on their own, a student might provide a nudge in the right direction that helps you figure it out on the fly.)
If things don’t work out how you’d like, you can still “save face”, as long as you’re able to figure it out down the road and eventually explain to the students what you were unable to solve during the “active learning” session.
Upvotes: 2
|
2017/03/23
| 1,893
| 8,042
|
<issue_start>username_0: I've begun preparing my thesis, and several key papers in my field are written by people with accented characters in their names. I have found a way to include these in my thesis with [BibTeX](https://en.wikibooks.org/wiki/LaTeX/Special_Characters#Escaped_codes). However, it has come to my attention that many citations exported from online journals (as .ris, .enl, or .bib files) or from my previous EndNote library may not include the accents in non-English names.
While I would like to attribute these appropriately, it is clearly laborious to check every entry given the sheer number of references included in a thesis, unless strictly necessary. Therefore I have two related questions:
1. Is it acceptable academic conduct to cite authors without the correct accents in an English-language thesis (or publication)?
2. Is it common practice for reference managers or online export tools to support accents in author names (i.e., can I take it for granted that my existing library has included these characters correctly, or will it be necessary to check references previously imported from online databases)?
To clarify, this query concerns accents found in names originating in the French, Italian, Spanish, Hungarian, German, Scandinavian and Te Reo Māori languages, such as the symbols:
ó, ò, ő, õ, ø, ö, and ō<issue_comment>username_1: As somebody with accents in my name who does not put them on scientific papers, I would recommend going with the way the authors themselves put their names on the papers.
Certain people are ***very*** particular about having the right accents, but others (including myself) consider them a nuisance and avoid them. The only way to know in any particular case is to check the paper and stick with the format on the original paper.
As far as ethics goes, I have never heard of any such policy, but there are certain people who will be your eternal enemy if you write their name incorrectly (i.e. without the right accents).
Upvotes: 8 [selected_answer]<issue_comment>username_2: I'm not sure how far ethics comes into this, but omitting accents is, effectively, a misspelling of the name. If you need to cite something by Schön, and you instead write "Schon", I'd regard that as analogous to writing "Schöm" or "Schöh": pretty close, but certainly a typo. If I'm reviewing a paper, I will request correction of missing accents as I would for other misspellings, but I wouldn't regard it as an ethical breach.
In practice, there's a bit more leeway for missing accents than for other misspellings, partly due to former limitations in computer typesetting, and partly due to the dominance of English as a scientific language and the frequent belief among native English speakers that accents are always optional. As username_1 says, some authors won't mind missing accents, while others will care a lot -- but since you can't know which is the case for any particular author, you should keep the accents. In some cases, missing an accent may have unfortunate consequences in the author's native language: if you cite a Swedish Dr Hörberg as "Horberg", you've just turned them into "whore mountain".
For citation purposes, the correct spelling is (almost always) the one on the publication itself: if, say, <NAME> chooses to publish as <NAME> then that's how you cite him, no matter what it says on his birth certificate.
On the technical side: unfortunately you **cannot** take it for granted that exported bibliographic records will handle non-ASCII characters correctly, or for that matter that they will handle anything correctly. Bibliography record formats are a mess of partially supported, poorly defined standards interconverted by buggy code, and you should always eyeball the record after importing it to catch any errors (not only accents).
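To make this concrete, here is a minimal hypothetical `.bib` fragment (a sketch built around the example name above, not an export from any real database) showing the forms in which the same surname can arrive. The first two preserve the accent in the typeset bibliography; the third is the silent misspelling a careless export can produce:

```bibtex
% Escaped form -- safe even with classic 7-bit BibTeX:
author = {Sch{\"o}n, D.},

% UTF-8 form -- fine with biber/biblatex and modern engines:
author = {Schön, D.},

% What a careless export may leave you with -- a misspelling:
author = {Schon, D.},
```

Spot-checking imported entries against patterns like these is quick, and catches the third form before it ends up in print.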
As you say, it's clearly a laborious task, but so are many aspects of writing a thesis :).
Upvotes: 7 <issue_comment>username_3: I fully agree with, and can give extra input on, @BurakUlgut's answer (+1) and @username_2's answer about the names in the paper itself. I want to reinforce the advice that you should *use the names as the authors themselves put them on the papers*. This will prevent confusion in some cases.
In the country where I was born my name contains an accent (actually a non-latin1 letter, a variation of the letter `l`), and it is written that way on my birth certificate. Yet, since I moved out of my country of birth when I was a child, and since my parents omitted the accent from my entrance documents, I never used it in the two countries where I have since lived and worked as an adult. On the single paper I have published to date, my name appears without the accent.
Now, *had you gone through the trouble of checking my nationality and finding how my name is properly written in my country of origin, you would have made a mistake in the citation*. Moreover, the name written in the citation would be impossible to link to my name in almost any of my documents.
Yes, this is a strange corner case. But I believe it illustrates well why the names on the paper itself are a good choice for citations; i.e., assumptions you make about someone's name may be quite wrong.
---
99% of my documents say `Michal`, I was born `Michał`, and people often make mistakes like `Michael`. (And I tell everyone to call me *Mike* to make things simple)
Upvotes: 4 <issue_comment>username_4: Generally it is standard practice to use as many of the original accent marks as the format allows. (For instance, a 7-bit ASCII email is very limited in what characters it can display.)
It is deemed respectful to make the extra effort to use special computer tools to produce the proper accent marks. Most word processors have these built in, but if not, your OS should have a character-select tool with "related characters" for finding the various accented versions of a particular letter.
Additionally, you should check whether there is a localized version of the research material for your region: some authors like to use localized versions of their pen names, and in that case it would be rude to use the wrong pen name. Citing a localized version of a resource can also make further research much easier for the audience of your work.
Upvotes: 1 <issue_comment>username_5: The purpose of a reference is primarily to allow others to follow along with your thought process and independently consult the materials you used to support your statements. As such they need to be accurate enough to allow your readers to find the original papers.
The second purpose is to be able to "traverse the tree of knowledge" by finding all other papers that cited a particular one. This means automated tools must be able to recognize the citation. Most tools strip accents - precisely because of this problem.
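To see roughly what that stripping looks like in practice, here is a minimal sketch using Unicode normalization (this is the common generic approach, not the actual code of any particular indexing tool): the accented letter is decomposed into a base letter plus a combining mark, and the mark is then discarded.

```python
import unicodedata

def strip_accents(name: str) -> str:
    # NFKD splits e.g. "ő" into "o" + a combining double-acute mark;
    # keeping only non-combining characters leaves the bare base letters.
    decomposed = unicodedata.normalize("NFKD", name)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_accents("Erdős"))    # Erdos
print(strip_accents("Hörberg"))  # Horberg
```

Note that letters with no Unicode decomposition, such as "ø", pass through unchanged, which is one reason different tools can disagree about the stripped spelling of the same name.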
Everything else is courtesy - if people get offended because of a missing diacritical mark, that is unfortunate; but it is hardly a breach of ethics.
EDIT: Note that when I answered the question, it asked whether it was "ethical" not to get all the diacritical marks right. The question has since been edited to ask if it was "bad". It is clear that some people will take umbrage when you misspell their name; unfortunately, the same author's name is not always spelled the same way in different papers, which frankly increases the level of confusion when you *try* to do it right (if you referenced Erdős here and Erdös there... is that the same person? People with average eyesight might not even be able to tell the difference. Neither might average proof editors. Only one of these spellings refers to a famous mathematician. But if you Google "Erdos" you will find him, either way.)
Upvotes: 2 <issue_comment>username_6: This isn't a matter of ethics: it's a matter of respect. It is disrespectful to spell somebody's name incorrectly. It is doubly disrespectful to *knowingly* spell somebody's name incorrectly because you're too damned lazy to do it right.
Upvotes: 5
|
2017/03/23
| 1,024
| 4,665
|
<issue_start>username_0: I recently received the results of student evaluations. After my colleague (who is qualified to make such determinations) observed one of my lectures and said it went very well, a small group of students took it upon themselves to make some very personal, hurtful, and untruthful comments about me and my teaching.
Unfortunately, it is quite a small class, and only one other student did the evaluation this year, so it looks really bad with all the negative comments and ratings. The place where I work has decided to take no notice of the comments from the lecture observation, nor of the fact that most of the students' comments are easily refutable based on the videos that are made of each lecture. They are planning to fire me, despite my having OK (though not perfect) student evaluations in the past.
I personally think it is ridiculous to give student comments such high importance and ignore everything else, because students often give ratings based on how much homework they get and how easy they think the exam is going to be. One student in the past commented that he had learned a lot in my course, and then proceeded to give me the lowest numerical ratings possible.
Do I have any recourse here? If nobody at my institution will look at the videos to see that the statements made by the students are false, can I make a claim for wrongful dismissal? Can the students be held responsible for their lies? The questionnaires are anonymous, but they were done online, so it would be possible for the institution to find out the identities of the students.
Thanks.
Update:
Thank you very much for your replies.
This has gone all the way to the point where I have a meeting soon where I'm going to be told whether the head of the department intends to terminate my employment.
At my request, an investigation was done into my allegation that a colleague had influenced my students' opinions against me, but the evidence I presented (emails from that person, sent before the questionnaires were distributed, that matched some of the students' comments very closely; the notes from the lecture observation; and videos of my lectures, not to mention my own written and verbal testimony) was not only ignored, it was even claimed that I had provided "no evidence" for my allegation.
I have just found a video of a lecture given by my colleague in which he is seen and heard to completely trash me and make fun of me in a conversation with my students before his lecture begins. I look forward to presenting this at the upcoming meeting.
Does anyone have any predictions for the outcome of this? Will this new video be considered or will they just ignore it as well and fire me anyway?
Thanks again.<issue_comment>username_1: Assuming you are right and the hateful comments were undeserved, I would say it depends largely on the size of the institution and the nature of your position. I have attended/taught at both a tiny liberal arts college and a big state university. Most instructors at small liberal arts colleges are there just to teach classes, while big university professors usually have research duties, which take priority.
What I am saying is that if your college is small and your only job is to teach classes, then there is probably nothing you can do about how seriously the department takes students' comments.
Upvotes: -1 <issue_comment>username_2: To answer this question: "Can the students be held responsible for their lies? The questionnaires are anonymous but they were done online so it would be possible for the institution to find out the identities of the students."
Probably not. If the university promised the students their comments were anonymous, then it probably cannot reveal their identities, which would rule out holding them responsible. Most likely the university did not retain records of who wrote which comments, so as to protect itself against accidental disclosure or subpoenas seeking to deanonymize the comments.
Upvotes: 3 <issue_comment>username_3: In the UK at least they cannot (usually) just fire you. They have to follow the proper procedures.
If it's deemed to be sufficiently bad for immediate dismissal, there should be some system for investigating what happened, beyond reading an anonymous comment made at the end of term. Why was a formal complaint not submitted by the student at the time?
If it's not that serious, the dismissal process should involve them giving you time to 'make improvements'. In this sort of situation I would expect that to include further lesson observations that are taken note of, or at least a set of student evaluations from a future course.
Upvotes: 1
|
2017/03/23
| 860
| 3,895
|
<issue_start>username_0: I'm planning a course on advanced calculus. Some of my topics align quite well with entire chapters of the textbook, so it makes sense for me to teach the entire topic, link it to a textbook chapter and additional resources, and assign homework from the end of the chapter. However, some of my topics take material from several chapters of the textbook and other resources, either because it's a very quick review or because the textbook presents things in a way I don't like.
What is happening is that some topics are getting a disproportionate amount of suggested practice, given how much time we spend on them in lecture. For example, I'm dedicating only one 50-minute lecture to the four topics of vectors, lines, planes, and distance, since they've already seen this stuff 3 times. However, that's 5 textbook chapters, and it's looking to be about 40 suggested practice problems. On the other hand, I'm dedicating the exact same amount of time to a single topic, parametric equations in space, and they will only get about 20 suggested practice problems. For the course itself, parametric equations are a more important topic and should be practiced a lot more. However, I don't want to assign just one question per topic for the review stuff, since practice and review is important.
Is there a nice way to communicate to students that a particular topic is more important **for their understanding of the course**, but that another topic should be reviewed until they are comfortable so that the important topics are simpler?<issue_comment>username_1: A few options are open to you here.
1. Simply highlight a proportion of the questions. So out of the 40 you have for vectors, lines, planes and distance, just highlight or set half of them for homework. Pick and choose problems that give a good overview of the entire section so that they get a taste of different types of problems. Of course, if students feel the need, there are plenty more questions in these sections that they can attempt in the textbook, which does no harm.
2. Make up new parametric equation questions. Have a look at the questions that are there. Are there any styles of questions that are left out? If so, create some based on that style. If not, simply modify the questions that are already there. Try to come up with corner cases to test their understanding (or at least to show them there is a bit more to the topic): cases that display certain tricks or potentially weird (from your students' point of view) answers. You will have to make the judgement call yourself as to what is missing from the current set of questions. (A worked example of one such question appears just after this list.)
3. A combination of the above. Highlight a selection of the vector questions while adding in a few parametric equation questions.
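As an illustration of option 2, here is one made-up problem (purely hypothetical, with numbers chosen for convenience) that targets parametric equations while quietly reviewing planes and cross products at the same time:

```latex
\textbf{Problem (illustrative).} Find parametric equations for the line of
intersection of the planes $x + y + z = 1$ and $x - y + 2z = 0$.

\textbf{Solution sketch.} The direction vector of the line is the cross
product of the two normals: $(1,1,1) \times (1,-1,2) = (3,\,-1,\,-2)$.
Setting $z = 0$ and solving $x + y = 1$, $x - y = 0$ gives the point
$(1/2,\, 1/2,\, 0)$ on the line, so
\[
  x(t) = \tfrac{1}{2} + 3t, \qquad
  y(t) = \tfrac{1}{2} - t,  \qquad
  z(t) = -2t .
\]
```

A single question of this shape practices the target topic and two review topics at once, which helps rebalance the practice load without adding to it.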
Finally, in addition to the above, you can explicitly mention that these topics are more important for the course. While a bit of a blunt instrument, if they are expecting trickier or more numerous parametric equation problems, they will spend more time studying them. You can also explicitly state that studying the other topics will help them with the parametric material.
Upvotes: 3 [selected_answer]<issue_comment>username_2: If you feel all 40 exercises are necessary, then just spread them out, sprinkling a few into each of the next few assignments.
Another approach would be to give the students a self-evaluation at the beginning, with a subset of the 40 exercises. For any students having difficulty with these, assign the rest of the 40 (but perhaps spread out over a week so they're not all in a lump).
There is really no reason education has to be one-size-fits-all. You can build some differentiation into your course.
If this material is review, then it is important to detect any deficiencies early on, and then help the individual students with deficiencies remedy them, so they will do well in the rest of the course.
Upvotes: 1
|
2017/03/23
| 3,174
| 13,887
|
<issue_start>username_0: Different conferences have different standards for accepting papers. A paper rejected at conference A might be the best paper at conference B (of a lower rank).
I'm reviewing a paper for a conference that is not really related to my narrow field, whose ranking I don't know, and from whose previous editions I haven't read any papers.
The paper is so-so: not too strong, not too weak. It would be a straight reject at some top-tier conferences, but might be accepted/weak accepted at some other lower-ranked conferences. How do I make a decision about this paper for this particular conference?
An obvious solution is to read a couple of papers from previous editions to get a sense, but given that it is not very related to my narrow field, that would take me a lot of time, and I'm facing several other deadlines.
---
**UPDATE**: to answer <NAME>'s comments.
>
> So I think your first task must be to decide whether to referee the
> paper at all
>
>
>
I have never submitted to a journal before, but I think one of the main differences between journals and conferences is this: in a conference, **the chairs can't always select the right reviewers for a paper**.
In a conference, there is a fixed set of PC members. After the deadline for abstracts, the PC members start bidding for the papers they want/don't want to review. The PC chairs then have a painful task: assigning papers to PC members (there are supporting tools, but they are mostly useless). Obviously, it is never the case that everybody is satisfied; it's very rare that you get the papers that you want to review, and it's not uncommon to get papers that you don't want to review. When you are a PC member, **you can't refuse to review the papers assigned to you**.
If the conference adopts double-blind review, you are not allowed to assign external reviewers either. And in cases where you can ask external reviewers to review for you, those reviewers work without any credit (at least PC members have their names on the conference's web site), so in most cases you can only assign papers to your own postdocs/PhD students. So now you understand why I have to review this paper, and why refusing to review papers is not an option (my boss is the unlucky PC member).<issue_comment>username_1: If the paper isn't related to your field and you're not familiar with the conference, how did you get roped into reviewing it?
But given that you did, I suggest you provide detailed feedback on the paper, without focusing too much on the actual recommendation. As a commenter mentioned, it doesn't sound like you're a PC member, more an external reviewer. So either leave the evaluation to the program committee based on your feedback, or offer a weak reject or weak accept rating (if you have to) with a note *privately to the PC* saying that you aren't calibrated for the conference; that should be enough.
I do **not** think you need to spend inordinate amounts of time reading other papers from the conference simply to gauge its competitiveness.
Upvotes: 2 <issue_comment>username_2: As I understand it, your confusion is this: there is a paper *X* which has been submitted to conf-A, and you have to review it and give a recommendation. The fact is that *rank(A) < rank(top-tier)*. I am a CS guy and will answer this in my somewhat philosophical way.
I have reviewed many works so far, and in my experience, I never incorporate the rank (or popularity) of the venue (be it journal or conference) into my review, i.e., the recommendation.
We are all doing a scientific duty; we are laborers of Science. We just have to work on our assigned task, and it is wise to do so with the right commitment and without compromising on the quality of the outcome.
IMHO, it would be better if you could review the paper just as a scientific work, rather than focusing on the venue ranking etc. Give your best review and let the conference committee take the final call.
Computer science research has degraded recently because of miscreant venues such as bogus journals and conferences. So it is high time that all researchers do their duties (work, reviews, edits etc.) honestly and with high quality.
Upvotes: -1 <issue_comment>username_3: You're overthinking this. The answer is very simple: if you like the paper and think it advances science in some meaningful way, recommend acceptance. If not, recommend rejection. That's all there is to it. The ranking of the conference is irrelevant.
The point is that the conference organizers have decided to put *you* in charge of making a recommendation based on *your* taste and *your* standards. You are now the leader and the tastemaker (to a limited extent, since your recommendation is still subject to review by the program committee), so act on that role - lead, don't follow, which means recommending based on what *you* think is the right decision. And if you impose higher standards than is typical for this conference, well, then the conference will actually be slightly more highly regarded next year; it is precisely through the collective leadership of reviewers and editors that conferences and journals acquire their reputation. So don't worry about conforming to other people's notions of how selective the conference should be. Just make up your own and go with that.
Upvotes: -1 <issue_comment>username_4: First let me say that I work in a field (pure math) that is closely related intellectually but for which approximately zero percent of papers are published in conferences. In this case I don't see how it matters that it is a conference rather than a journal except possibly to reduce the timeline until the decision (though some journals in my field are starting to ask for reports within a few weeks!), so I will answer based on my own experience.
The main problem here is that your expert opinion has been called for on something on which you do not feel like a fully fledged expert, and (as is most often the case!) it is not practical for you to fix this by substantially increasing your expertise.
So I think your first task must be to decide whether to referee the paper at all. If, for instance, you didn't understand the paper at all, then certainly you should not be a referee. If you've already agreed to do it, that makes backing out more socially awkward, but it could still be the right decision, as it may take some reading of the paper to find the stuff you don't understand. If you vaguely understand it but really not enough to evaluate it in any meaningful way, again I think you should not referee it. If you like, you can explicitly offer to referee something else instead, and I'll bet the chances are good that a different paper could be found.
If you feel that you can or must referee the paper but really feel shaky about evaluating it, then you can write a report in such a way that your evaluation will be minimized, e.g. by explicitly writing something like
>
> I was asked to review this paper, and so I will. But I want to be clear that it lies outside of my core area of expertise, so my recommendations about its suitability for the conference are tentative. I hope that more weight will be put on the other recommendations.
>
>
>
And then you can give a "weak" recommendation, which should then be drowned out by the others.
However, your question suggests that you are sufficiently qualified to referee the paper that the first alternative is not appropriate and the second one may not be either: you write
>
> The paper is so so, not too strong, not too weak. It would be a straight reject in some top-tier conferences, but might be accepted/weak accepted at some other lower rank conferences. How do I make a decision about this paper for this particular conference?
>
>
>
[Point of order: a paper can't be "weak accepted" by a conference. It must either be accepted or rejected. So here you are doing a bit of what other people have brought up: conflating the referee's recommendation with the editor's decision.]
Rather importantly, you don't say *why* the paper would be a clear reject at top conferences. (Not that the assertion is surprising: presumably the vast majority of submitted papers in your field fall into this category.) Knowing that means that you have some information and insight about the paper (and more than you've told us). In fact it might be helpful for you to tell us how you came to this conclusion.
Probably I don't need to tell you that if the paper does not contain any correct, novel work that is of interest to someone, it should not be published anywhere. Conversely, if it does meet these requirements it should be published somewhere. If you don't know enough about the standards of this particular conference, you can work around that by explaining you would recommend rejection in venue A but recommend acceptance in venue B (this is rather common information in referee reports, at least in my field). Then the editor can decide where the conference lies with respect to the data points you've provided.
If you feel qualified enough, then at a certain point you do have to impose whatever standard you feel is most reasonable. If you don't know the field very well and the paper is not interesting to you, then [assuming you've decided to go ahead with the referee job] you should recommend it for rejection: what else? On the other hand, if you find the paper to be at least somewhat interesting you should act so as to leave the door open for the paper to be accepted, in particular by writing that you would recommend it for acceptance at journal or conference X. Ultimately you're leaving a decision to the editorial board that they would have anyway: among papers that are publishable in absolute terms, do they want to publish *these* papers or *those* papers? If they get a whole bunch of reports of the form "The paper is okay; it could be published somewhere in between heaven and hell; I don't really have strong feelings about it" then they're going to have a problem....but who is to blame for the problem? Them, of course: they did not find the right referees and were not clear enough in their standards.
Let me end by saying that I have found myself in similar situations more than once: namely, I get asked to referee a paper by a journal I've never heard of, in a subfield of mathematics different from any of the ones I've thought deeply about. And in fact I have usually done more or less the above: sometimes I turn it down (and I have learned to be decisive about this; if I gave some choice to the editor, it always turns out that they want me to do it anyway), but if I can do it I often take the job. A paper should certainly look novel to the relative outsider if it will look so to the expert, and in some cases I have rejected the paper for not making clear progress over (even) its own citations. More often the papers have been a bit interesting. Sometimes I have found significant mistakes: I can't remember a situation where the mistake was so bad that I outright recommended rejection, but there have been situations in which revisions have been necessary in order for the principal results to look correct. (I don't know how this plays out for the shorter timeframe of a conference: presumably outright rejection becomes more likely.) In fact the most common outcome is that I understand the paper well enough and think it's somewhat interesting and novel, though certainly not the kind of breakthrough to be published in a higher tier journal. In these cases I have recommended the paper for acceptance and included in my referee report an honest depiction of the situation: e.g. if I am unfamiliar with the journal, I say so. I believe in every such case the paper has been accepted. This has been fine for me, since having been an author many times and an editor never, fundamentally I am more sympathetic to the situation of a solid paper being rejected than to the plight of a journal that publishes a good paper rather than a great one. "A tie goes to the author," I feel. Given the number of papers I've refereed, this attitude seems to be okay with the editors, who do in fact once in a (great) while cheerfully reverse my recommendations.
Upvotes: 4 [selected_answer]<issue_comment>username_5: First, some conferences will have two (or more) reviewers per abstract/paper, so the conference planners might consider consensus. Your review alone may not be the deciding factor.
Second, the goal of conference presentations is completely different from that of journals. They are meant for the discussion and generation of new ideas. So the stakes are fairly low if you have a paper that is on the fence (especially if you cannot evaluate the paper within its specific field). If you can give qualitative feedback, that would be helpful for the conference committee. However, when I review conference papers I always consider that the author may be a graduate student or junior faculty member. This is a learning experience for them. So, my advice? If the topic is relevant and would be interesting to the conference audience, there is fairly little harm in accepting. Every conference - even prestigious ones - has some poor presentations (either poor topics or poor presentation content/delivery). If the one you accept ends up being not that great, the conference's reputation is not necessarily harmed.
Upvotes: 0 <issue_comment>username_6: Why should the ranking of the conference matter? Presumably your decision should and will be based on the contents of the submission, not on the prestige of the event.
Selective events will attract a larger number of "good" papers, so you might have to reject a larger share of submissions; but otherwise, let the science decide for you. Good is good and bad is bad in any context.
Upvotes: 0
|
2017/03/23
| 426
| 1,741
|
<issue_start>username_0: In a paper I want to state that a certain course of action increases the "chances of success" - with success defined earlier in the work.
In my field, words like "probability", "odds", and "likelihood" (and the associated variants) have certain connotations, and are likely to elicit semantic arguments, so I'd like to avoid those words.
The word "chances" seems colloquial.
Are there any other phrases that connote the idea that a certain course of action is more likely to lead to optimal outcomes, while avoiding any words that have mathematical or statistical connotations?<issue_comment>username_1: My guess is that "optimal outcomes" may have the same problem as "likelihood" etc.
One possibility would be to restructure the sentence a bit, for example
>
> Such-and-so procedures promote ... (perhaps "success").
>
>
>
It's much easier to help with a question like this if you give us some of the actual language you're going to use.
Upvotes: 2 <issue_comment>username_2: When I am drafting a paper, I always keep a tab open in my web browser with the wordreference.com English thesaurus. You can find synonyms, antonyms, etc. fairly easily just by entering a word which you know roughly expresses the idea you have in mind. This is particularly helpful to me as a non-native speaker, but I can see how everyone could benefit from a similar strategy.
For your particular case, I suggest that you start by looking up "probability" and "success" until you end up with the noun or combination of adjective+noun that works best for you.
Upvotes: 1 <issue_comment>username_3: "Propensity" is a useful term; similarly, you can refer to an increase/decrease in the "tendency" toward an event or success.
Upvotes: -1
|
2017/03/23
| 857
| 3,561
|
<issue_start>username_0: I'm currently working at a research institution as an RA, and I'm thinking of asking my professor for a recommendation letter for PhD applications. But because of mandatory national service in my home country, I have to go back and serve for 2 years before going into a PhD. During those 2 years, I'll have minimal contact with my professor, if any.
Should I ask for a recommendation now, given that his memory of me is much fresher, or should I ask him after I'm done with my mandatory national service, which is when I will be applying?
I've heard arguments for both options:
**Pros of asking earlier**
1. The professor will forget the minute details of my projects and my skills (two years isn't that long, but I don't think it's short either, given that he interacts with many other RAs and students).
2. It will give him plenty (perhaps too much?) of time to write the recommendation letter when he is free.
3. It is more likely that he will write a letter for someone who is working for him now than for someone who worked for him in the past, since he would not have to see me if he rejected my request later, whereas it would be awkward for him to reject my request now, while I still work for him.
**Pros of asking later**
1. The professor may fear that I will decide not to pursue a PhD after that period has passed, and his effort on the letter might be in vain. While I'm confident that I'll apply once the 2 years are up, I cannot prove that to him.
2. If I accomplish anything during the time I am away, he would want to include it in the letter. In my case, I doubt I can do any academic work while serving my country, but who knows.
I'd like to hear if there are any other reasons to favor one option over the other, and ultimately to know which option to take.
I've seen this related question: [Letter of Rec. for future University application](https://academia.stackexchange.com/questions/41013/letter-of-rec-for-future-university-application), but it is not very relevant since most answers deal with the question of asking the police chief for a letter.
[This](https://academia.stackexchange.com/questions/66567/when-is-the-best-time-to-ask-for-letters-of-reference) is also related, but my circumstance is different from the OP in this post.<issue_comment>username_1: I would suggest explaining your situation to the professor and asking him to write the letter now.
Later on, if you do manage to do some extra work you can ask him to revise the reference letter. The revisions will be quick and easy, but writing a reference letter from scratch after two years might be difficult.
The professor should not be disappointed if he writes the letter but you decide not to continue in academia. A good reference letter will also be very useful when applying for other jobs.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Agreed! In the past I have asked faculty to write a recommendation letter because I got to know them in class, even though I wasn't planning to attend grad school for a few years. They may be willing to keep a draft on file that can be printed out in two years, or to give you a number of sealed copies with generic salutations (Dear Reviewers). Many professors would not mind doing this, and would actually prefer to write a letter with ease while their memory is fresh rather than face the difficult task of trying to remember the student's performance years later. The letter will also be of higher quality if written soon!
Good luck!
Upvotes: 1
|
2017/03/23
| 537
| 2,096
|
<issue_start>username_0: I am supervising a bachelor's thesis and find the respective student attractive. Would it be ethical/acceptable to get into a relationship with the student after the thesis is completed?<issue_comment>username_1: It is neither ethical nor acceptable for you to do this. After the supervision is technically over, you still have power over the student, for example through reference letters.
Upvotes: -1 <issue_comment>username_2: The ethical, and safe, thing to do -- now, actually -- would be to look for another faculty member to pass this thesis supervision responsibility to, because your objectivity has been, or could be seen to be, compromised by your feelings.
I was given some very good advice once by a senior professor: when contemplating a particular action, imagine what it would look like if described in a newspaper headline.
>
> "Emcor Involved in Romantic Relationship with Student S/He Was Providing Direct Thesis Supervision to"
>
>
>
Doesn't look so good. Compare:
>
> "Emcor Hands Off Thesis Supervision to Colleague before Embarking on Romantic Relationship with Student"
>
>
>
Better, no?
This doesn't mean you can't read a draft of the thesis, or that you can't be a sounding board for the work. It just means that you shouldn't be the thesis supervisor.
That's how I see it. However, if you are in any doubt, do check with a department administrator.
Upvotes: 4 <issue_comment>username_3: It is neither ethical nor safe to get into any kind of relationship with a current student. Depending on your institution, it may be prohibited conduct. Brown's rule: "You can lust after girls, or you can lust after boys, but you can *never* lust after students."
Even former students aren't safe. Some years ago a graduate whose senior project I had supervised showed up in my office and announced that she had come to take me to dinner. We began seeing each other, *and then* she decided to return to our institution for a master's degree. That resulted in a self-initiated, but very uncomfortable, meeting with dean and VPAA.
Upvotes: 4 [selected_answer]
|
2017/03/23
| 3,577
| 15,071
|
<issue_start>username_0: Suppose I'm going to teach course X next semester. Should I have a full understanding of the subject before I teach it, or can I learn the topics shortly before I go to class and then teach them?
In other words: is it acceptable to study the subject during the same semester, or should I study it beforehand?<issue_comment>username_1: >
> . . . the first time I was in a statistics course, I was there to
> teach it.
> - [<NAME>](https://www.stat.berkeley.edu/~brill/Papers/life.pdf)
>
>
>
In my experience, it's definitely better to learn about it beforehand. If you are really pressed for time, then start reading the textbook or course material *from the back*, because you really need to know the direction in which the course is going before you teach the material.
Too many times, I have done the opposite and tried to learn the subject at the same time as teaching it. This is really fun and certainly helps you to see things from the perspective of the students, but is not ideal from a pedagogical point of view, as you are too likely to be blindsided by something. The same goes for courses which are part of a series. For example, I once taught a linear algebra class and left out determinants because we were running out of time and we didn't "need" them for the exam. Next term, I got an irate email from the instructor of Linear Algebra 2...
The best possible preparation for teaching course X is having taught it before.
Upvotes: 7 [selected_answer]<issue_comment>username_2: A real scholar is a lifelong student.
But you should know the material that you are teaching sufficiently broadly (so that you won't be caught ignorant of some relevant factoid) and sufficiently deeply (so that you won't be caught not **understanding** some relevant factoid) that your authority on the subject is not questioned and you won't be embarrassed. (And the students will not think they're getting screwed with regard to their tuition dollar.)
Upvotes: 5 <issue_comment>username_3: It's possible (barely). When I started adjunct teaching, among the courses I was offered was a networking class, including a lab component, of which I had absolutely zero knowledge. I told the dean that, and he said, "Don't worry; you can just stay ahead of the students by a few pages in the book." And I accepted it.
Now, that was pretty terrible/scammy advice from that dean. First of all, I had to make a determination in the first week to switch the book he'd previously picked for the course because it was terrible (way over our students' heads). Also, I was working furiously for hours every day all semester to learn the material and write presentations. I didn't have time for any activities that semester except just teaching.
I wouldn't recommend going *directly* from reading the book right to class; I was rushing to get as far ahead as I could, so that I could prep future quizzes, tests, assignments, labs, lectures, etc. in a scaffolded manner, and know what was coming in the future so as to lay the groundwork for it. I can't imagine I would have succeeded at that if I had simultaneously been taking classes myself, or been expected to work on papers or research, etc.
But I did wind up knowing an incredible amount about networking, pretty much all in one thunderclap three-month period.
Upvotes: 3 <issue_comment>username_4: After finishing university, I was offered a teaching job at an education center. The course was web application development. At that time, I didn't have much knowledge in this field; I was still an elementary learner and had created only a few websites so far. Still, I said yes without hesitating, since it was a good chance to improve my knowledge. I worked hard the day before every class. I did well, and the next year I got the course again and improved my teaching ability as well. So, teaching is a good way to improve your knowledge of a specific field; however, you should enjoy teaching as well. If you don't like teaching, it's difficult.
Upvotes: 1 <issue_comment>username_5: **TL;DR: You owe it to your students to familiarize yourself well enough with the subject matter before teaching it.**
>
> Should I have full understanding of the subject before I teach them or can I learn about the topics before I go to the class and then teach them?
>
>
>
While it is usually not impossible to teach a subject as you study it yourself, or immediately after you've done so, it is highly inappropriate, both ethically as an academic and in terms of pedagogical effectiveness:
* You risk passing on to your students the weaknesses of an understanding acquired from an initial, brief study.
* You will not be able to provide your students with the perspective of someone who has dealt with the material in more than one context.
* You are likely not to be able to answer some of the students' questions about the subject matter.
* You will likely not be able to plan how to get the finer points, the ones which are more difficult to understand, across to the students - you will be expending most of your effort merely remembering what you've just learned.
et cetera. So - don't slack off.
Upvotes: 5 <issue_comment>username_6: I had three experiences as a student with similar situations:
1. A professor without much subject knowledge teaches an internet programming course while attempting to create an entirely new syllabus. Result: the worst course I've ever suffered through. Absolutely no learning.
2. A professor with a strong emphasis in language design teaches a language design course in a language they started learning last semester. Result: not a smooth experience, but rewarding.
3. A TA with some experience teaches a well-worn 300-level course on distributed design. Professor is technically supposed to teach but is only present ~half of the time, so TA ends up doing much of the teaching and was actually a better lecturer (less rambling.) Result: a good course. One of the most informative I had that year.
My conclusion is that it depends on how close your relevant experience is and on whether or not a quality (or at least decent) syllabus is available. Having veteran TA's can greatly help, although that's independently true.
My recommendation is that you should study and prepare aggressively for this course. Try to learn as much as you can in advance and then also make sure to refresh your knowledge before specific lectures and study sessions.
Upvotes: 3 <issue_comment>username_7: It's *best* if you study the materials first, and that would generally be expected in contemporary training scenarios, but it's not *strictly* necessary.
More important is that you be able to spot where students are having trouble with *learning* the materials, and set them straight.
Just because you know something yourself doesn't mean you can teach it. A lot of expert doctors are not expert *teachers;* the same is true of expert programmers, mathematicians and other experts.
**The skill of a teacher *as a teacher* is more important than the teacher's skill as a practitioner of the subject taught.** The teacher's skill is in identifying the blocks his students encounter in assimilating the subject, and handling them so the student successfully masters the subject.
---
At the very least **you should have some idea of the questions that students are likely to ask and know where they can find the correct answers**—even if you don't know all those correct answers at the tip of your tongue.
I teach a class on Git, the popular version control software. Git has options and commands for literally *every* scenario imaginable, the result of an open-source tool being used by (and improved by!) open source developers for more than a decade. Any given student will never need more than 15% (at most!) of the available options and commands.
When a student asks me a complex and highly specific use case for Git, it is sufficient that I point him in the correct direction: what commands are relevant to his question and what options will help him accomplish his purpose. **I don't have to show off *my* mastery of those advanced options to help him learn them.**
---
Finally, here is a relevant excerpt from one of my favorite Science Fiction novels:
>
> I liked Prof. He would teach *anything*. Wouldn't matter that he knew nothing about it; if pupil wanted it, he would smile and set a price, locate materials, stay a few lessons ahead. Or barely even if he found it tough—never pretended to know more than he did. Took algebra from him and by time we reached cubics I corrected his probs as often as he did mine—but he charged into each lesson gaily.
>
>
> I started electronics under him, soon was teaching him. So he stopped charging and we went along together until he dug up an engineer willing to daylight for extra money—whereupon we both paid new teacher and Prof tried to stick with me, thumb-fingered and slow, but happy to be stretching his mind.
>
>
> *—<NAME>, The Moon is a Harsh Mistress*
>
>
>
Upvotes: 2 <issue_comment>username_8: username_3's dean advising "you can just stay ahead of the students by a few pages in the book" is clearly an exaggeration, but there is some truth to it.
I must admit that I have taught several courses where, if I had been given the final test of the course on my first day of teaching, I probably would have failed it. Anyway, those courses were the most challenging but also the most rewarding I've ever taught.
The key point is that - as others have pointed out - you must be able to teach every lesson properly when you teach it. That means not making (big) mistakes, addressing the key points and the students' likely blocks, and being able to answer clever students' questions that might depart from the syllabus. In summary, your classes must outperform what the students could learn by themselves, which nowadays includes reading the book but also watching classes on YouTube about the same topics. If a lecturer couldn't manage that, there would be no point in having the lecture.
There are a few key factors that allowed me to succeed, which I think are prerequisites for trying to teach a subject you don't already know well enough:
* Your own self-teaching of the subject must outperform the self-teaching ability of your students by several orders of magnitude. Even if your students are intelligent and well prepared, you can usually outperform them because you are more mature, have a lot of experience with related subjects and know better how to study complicated material. Once a fellow lecturer who had a PhD in mathematics was worried because she had hardly ever studied statistics, yet she had been assigned a basic statistics course for freshman engineers. My advice to her was that if she took the course book with her while commuting (she commuted by train), she would know the subject before arriving home that evening. I think she did, and the course went very well.
* The course syllabus and material should be prepared beforehand by somebody more knowledgeable. Changing the syllabus or the book, or even preparing exercises and slides, is far more difficult than teaching lessons. If there is an established path, I strongly recommend following it the first time you teach a course, unless you are very sure about what you are doing.
* And last but not least, time. You must be willing to invest a lot of time studying the subject - a lot more than your students do. Of course, that means that if you teach to earn a living, teaching a subject you need to learn is a very inefficient way to invest your time to make money. For me that's not a big problem because being a part time lecturer in my country is such an inefficient way to make money that it can't be noticeably worsened, but other people might see it in a different way.
In summary: it is possible to teach such a course if you can outperform the students' learning, if the course has been prepared, and if you are willing to invest a lot of time in it. It's a real challenge, and you will only enjoy it if you are willing to take the challenge; otherwise it could be a real nightmare.
And an end note: I've done this in both on-line and off-line courses. In on-line courses it is a bit easier, because you don't need to answer difficult student questions on the spot, and you can research the answer for hours if needed.
Upvotes: 2 <issue_comment>username_9: You'll need to learn as much as you can before you start, *and* then work hard on it during the whole semester.
I've been dumped into new (as in never before taught) subjects at the last minute where nothing beyond the course description existed a few weeks before the first class (as in "Hey, would you mind teaching this? Starts in ... oh, actually 23 days from now. You might want to think about what textbook you use. What do you think about this one? By the way, you have to submit your draft unit guide for review by Thursday." ... how you might possibly organize it so the students can actually *obtain* the text in time for the class is left as an exercise for the reader).
This sort of thing has happened to me several times over the years. I wasn't completely unfamiliar with the material, but each time there were some substantial sections of the course I didn't really know nearly well enough to teach at the time I was asked if I would teach it.
To teach it well, you'll have to know the material pretty well by the time you cover it in class (and usually a good deal earlier if you're writing assignments/tests/exams) -- in the most recent case, I learned it well enough to identify many of the errors in the text for the students... and there were quite a few mistakes in some chapters.
Expect to work at least 4 to 5 times as hard as the better students do on the parts of the subject you don't know really well (after all, they have someone explaining it to them, and they only have to answer questions, while you have to figure out both what to ask and how to answer it, and provide ideal solutions -- that's a whole different level of knowledge).
If you get to teach it a second time, that will go much easier.
Upvotes: 2 <issue_comment>username_10: Having done both I can only recommend learning it before teaching. You don't need to master everything, but you also should be able to answer spontaneous questions about the subject. And if you're borrowing slides from someone else, you really should know what you're doing! Otherwise you might bump into these problems:
* You don't know where you should pause to give examples instead of rushing through slides, and suddenly you've flashed every slide of the course in one lecture - simply because the author wrote one slide for every 30 minutes while you're used to showing one slide per 5 minutes.
* You may misinterpret something because you're not familiar with the subject.
* You cannot go deeper into the subject than the few bullets in the slides.
* You end up reading the bullets aloud, something you should never do.
I always learn new things when I teach, so you really can't master everything beforehand, but that's the fun in it.
Upvotes: 1
|
2017/03/24
| 458
| 1,944
|
<issue_start>username_0: In my university, which is in Canada, TAs are part-time positions for graduate students as far as I know. So do full-time TAs exist, like other faculty members?<issue_comment>username_1: To my knowledge, no. As part of your graduate student funding package in Canada, the TA fellowship is a part-time position for the semester or semesters in which you are appointed.
Note: You can TA several courses.
Note: In some cases, you may be given a lecturer position (if you adequately prove your capabilities) during your postgraduate studies. I believe the pay is twice that of the TA position.
Upvotes: 1 <issue_comment>username_2: I'm aware of at least one R1 University in the US that allows some graduate students to be full time (100% equivalent appointment as a TA) during the summer, if they have no classes or other position, and if teaching needs require it. But in any case this was considered quite rare. My understanding was that the typical case was when the student was to be an instructor of record for a course, particularly one they designed or won a grant to develop, and usually they were also part of a teaching-preparation/certificate program. Even then the person would generally only hold such an appointment for one summer only.
At least in the US, part of the reason why this isn't more common is employment rules - a full-time teaching assistant would be a de facto employee of the University, and entitled to full rights and compensation for that time period, unlike the odd "not exactly just a student, not exactly an employee" category that graduate students possess in the US.
As GEdgar noted in comments, there are full-time people who are not full faculty in the US, but they are not called a TA (which is a title reserved for graduate students doing work part-time while also being a student); these people hold titles such as Instructor, Adjunct, etc.
Upvotes: 2
|
2017/03/24
| 663
| 2,743
|
<issue_start>username_0: I had an academic job interview for a tenure-track faculty position in January but have not heard anything yet (March now). Should I send an email asking for the status of the application? Can anyone please tell me what a proper email could be?
Thanks very much for the help.<issue_comment>username_1: Ideally, towards the end of your visit in January the search chair would have let you know the projected timeline. Unfortunately that doesn't always happen.
In any case, I don't think it's unreasonable to drop them a quick email to ask if there are any updates. Before you do, however, be aware that you are likely to get an evasive answer or a non-answer. There are three likely reasons that you haven't heard anything:
1. They don't want to hire you, but they forgot to tell you.
2. They have offered the job to someone else, but are keeping you around as a backup in case the first person turns them down. They may be currently negotiating with the first person.
3. They are still interviewing candidates, and won't decide for a while.
In case 1, you'll get a definite response. In cases 2 and 3, you won't learn anything, although they might be able to give you a rough estimate of when to expect to hear from them.
**NB:** If you have been offered a job at another institution, but you are still interested in this position, do definitely tell them. They may be able to speed up the process on their end to give you some more information.
Upvotes: 3 <issue_comment>username_2: I don't think there is a downside to emailing, although if they are not communicating with you, it is not a good sign for your prospects. Of my 5 faculty rejections, 3 arrived by email or phone call, and for 2 I had to email the search chair myself.
A possible email could use the language below. Parts of the email that are optional/circumstance-specific are in brackets.
Dear Chair,
Good afternoon! Thank you again for having me out to visit SCHOOL in January. I am writing to inquire about your decision-making timeline. [OR I am writing to let you know that I have received an offer from another institution.]
[My timeline for accepting this other offer is approximately X. I wonder if I could receive a response regarding your search within that time frame.]
[Is there anything you can tell me about the status of the search? From the timeline that you mentioned during my visit, I surmise that the school has made an offer to another candidate. Has that candidate accepted, or is the search still ongoing?]
Thank you once again for considering me for the position. I look forward to hearing from you.
Sincerely,
NAME
Thanks to <NAME>'s book *The Professor is In* for some of this language.
Upvotes: 3 [selected_answer]
|
2017/03/24
| 517
| 2,226
|
<issue_start>username_0: I am currently working on my thesis and I am submitting a part of my thesis as a paper to a journal (same text no rephrasing in some parts). I believe the journal review takes about four to six months before it is published and my thesis will be published on my university website before that. When they check for plagiarism and they find a match between my paper and my thesis, will they reject it?<issue_comment>username_1: It is only (self-)plagiarism when you do not indicate that some text is being reused.
So there is a simple remedy: Add to your thesis some text that indicates which parts have been submitted where. This could be formulated for example as:
>
> This chapter has been submitted to *journal* as *authors*, *title*
>
>
>
Upvotes: 2 <issue_comment>username_2: The best course of action here is to disclose that part of the work is being submitted for publication in the thesis. In fact, this is encouraged at my institution. It will make your thesis stronger if findings have been peer reviewed and the examiners will recognise that it has been deemed publication quality. One of their goals is to assess whether it is an original finding which a publication would support.
As for self-citation and self-plagiarism, that is a very difficult issue and some fields take presenting the same work without attributing the prior source very seriously. Consult your advisor for what is acceptable in your field. However, generally what you need to avoid is:
1. Submitting the same work for publication in different journals
2. Submitting the same work to fulfil the requirements of different degrees (e.g., Masters and PhD).
3. Submitting the work of others (even coauthors) for your thesis.
I don't think including sections in both an article and a thesis necessarily breaches these, and it may be acceptable in some fields. Thesis by publication is also becoming more widely accepted. However, if work with coauthors is included verbatim in your thesis, you do need to make clear what your own contribution to the publication was (and acknowledge and disclose the coauthors' contributions). After all, it is your work that is being assessed to fulfil your degree.
Upvotes: 2
|
2017/03/24
| 4,314
| 18,549
|
<issue_start>username_0: In the next academic year,
I'll be teaching a class on how to use a statistical software package
([the R programming language](https://en.wikipedia.org/wiki/R_(programming_language))).
I foresee that each class will have two parts:
a lecture and software demonstration,
and time for the students to get their hands dirty
by trying the software themselves.
I now have to decide what type of classroom to request.
**Option 1: Computer laboratory**
If I hold the class in a computer laboratory,
the advantage is that every student will have a computer
with the software already installed.
The disadvantage is that the classroom layout
is optimized for students to use the computers,
so it is difficult to lecture.
I foresee that it would be difficult for students sitting in the corners
to see the projector screen when I am speaking.
**Option 2: Lecture theater**
If I hold the class in a lecture theater,
the advantage is that every student will be able to see the projector screen when I am lecturing and demonstrating the software.
The disadvantage is that students would have to
bring their own laptops once a week.
Although I estimate that at least 98% of students own a laptop,
I don't want to embarrass or cause difficulties to any students who do not.
So bearing these two options in mind,
which type of classroom should I request?
Is it reasonable for me to request the lecture theater,
knowing that this will be inconvenient for students
who don't normally bring their laptops to university?
### Edits
There will be around 90 students in the class.
My plan is to ask the students to form groups of 3-4 students,
so that they will help one another as they do coding and data analysis.
I will probably have a TA to help to run the lab session
given that there are many students in the class.
I've attached a floor plan of the only computer lab in my university
which is able to accommodate 90 students.
As you can see, if I am trying to give a lecture in this lab,
it can be very hard for me to have eye contact with the students,
and to perceive any confusion or uncertainty in their minds.
[](https://i.stack.imgur.com/EDNIv.png)
### What I decided to do
After considering the input provided by the various answers and comments,
I decided to adopt a **hybrid** approach,
i.e., 1 hour of lecture followed immediately by 2 hours of computer lab time.
During the lecture time,
I will be showing the students how to use the software
and how to estimate and interpret statistical models.
During the computer lab time,
students will apply what I taught in the lecture to a different data set.<issue_comment>username_1: This is not a direct answer, but I'd suggest an alternative.
I've found that if all students work on their computers I have problems judging their progress and can also only help one student at a time. Thus, when I teach R, I typically don't require computers for the students. Instead, I connect a laptop to a projector and ask one or two students per lecture to solve some practical problems on it and explain their approach. If they get stuck (they usually do), I ask for suggestions from the other students or explain how to find a solution (e.g., by explaining documentation). That way I don't only teach the solution but also ways of deriving the solution.
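To make this concrete, a live exercise of the kind described above might look like the sketch below (a hypothetical example; `mtcars` is just a small data set that ships with R, and any other built-in data set would do):

```r
# Explore a built-in data set together at the projector.
data(mtcars)      # load a small data set bundled with R
str(mtcars)       # inspect its structure (32 cars, 11 variables)
mean(mtcars$mpg)  # a simple summary statistic

# A step up in difficulty: average fuel economy by cylinder count.
aggregate(mpg ~ cyl, data = mtcars, FUN = mean)

# A basic plot the class can then suggest improvements to.
plot(mtcars$wt, mtcars$mpg,
     xlab = "Weight (1000 lbs)", ylab = "Miles per gallon")
```

The value is less in the code itself than in having the student at the keyboard narrate their approach, get stuck, and work their way out with suggestions from the room.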
It's important with this approach to create an environment where nobody gets embarrassed (it helps that everyone can be in front next time), but I believe it is more conducive to learning. Of course, the students also get homework.
I typically teach smaller classes of graduate students.
Upvotes: 5 <issue_comment>username_2: Would it be possible to split the classes between them? Say, two lecture sessions a week in the classroom, with one session working as a tutorial in the computer lab. This was how most of my own coding courses were split: the theory was given in a classroom environment, with examples done by the lecturer on the in-class computer or the lecturer's laptop.
The tutorial was then essentially a designated practice session where students worked on preassigned problems/class projects. A lecturer or a grad student would walk around the lab, occasionally checking on how people were doing and helping anyone who wanted it one-on-one. The lecturer only really spoke to the entire class if something came up that they thought might be an issue for the majority of the class (e.g., if several people were making the same mistake).
Personally I would not have them figure out solutions in front of the entire class, the lab style class where everyone has their own computer to pay attention to works better for supervised practice I have found.
Upvotes: 4 <issue_comment>username_3: I only taught one course with heavy computer use in a computer lab. Luckily, in this lab there were hexagonal tables with 6 computers each, so after a short time people started to work together, which was really beneficial. In most labs the computers are in long rows next to each other, discouraging collaboration. So if you allow groups of 2-3 students working together, they might learn faster (apart from the few who just let themselves get dragged along), you have an easy way of monitoring their work (just stand nearby and listen to them), and the chance that among 3 people nobody can get hold of a laptop is probably negligible.
Upvotes: 2 <issue_comment>username_4: I would go with the computer room option. There are several issues with getting everyone to bring a laptop:
* As you mention, a few students may not have one, and this could cause embarrassment.
* People are using different operating systems, and may have different versions of the software installed. This increases the chance of code working on some computers but not others, which is always tricky to deal with as the lecturer.
* High chance that at least a few students will forget their laptops, or their power supplies.
To deal with the issue of the computer room being poorly laid out for lecturing purposes, I would suggest providing all the materials (powerpoint, code scripts, etc) to the students electronically. This way they can have everything displayed on their screens and can follow along even if they don't have a good view of the projector screen.
Upvotes: 7 <issue_comment>username_5: **It is only reasonable to require a laptop if it is specified in the class or course requirements**
It is not reasonable to expect students to have their own laptops unless this was specified at the time they signed up to the class or, alternatively, it is required for the course overall (or the course provides them). Otherwise you are springing an additional, and expensive, requirement on students who may not expect this.
Even then, this is a necessary condition, not a sufficient one. You cannot avoid excluding students who cannot afford a laptop, for example, and you are requiring students to lug heavy equipment around with them for a single class.
Upvotes: 5 <issue_comment>username_6: I actually think you've got an [XY problem](https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem) here. From my experience of learning software languages, this doesn't feel like the most practical way of running it.
In general for learning a programming language, there should be a lot of practical time and a relatively small amount of lecture time. More than that, the practical time needs to be rather free-form and allow scope for better students to complete exercises quickly and slower students to take much longer over it. Simply dividing your class into two halves cannot ever cope with this - either it wastes time or holds back quicker students, or it prevents slower students completing exercises.
When I was at uni, the better solution was one lecture in a classroom, followed (same day, different day, doesn't matter) with an entire afternoon in a computer lab. For the computer lab session, the lecturer often wasn't there, but one or more teaching assistants were always on hand to answer questions, review code, and generally point people in the right direction. After that fixed time, the lecturer and teaching assistants left, but students who hadn't finished could naturally keep working on their own. Most students didn't have laptops at the time (this was the mid 90s!) but there was nothing in principle to stop people bringing in their own machines.
**Edit based on your changes to the question:** With your new information, it makes it even clearer that your original proposal simply won't work, and two separate exercises is the way to solve it. Getting information to 90 people is a lecture theatre job. Even dividing this into 4 sessions, you cannot make that work in a computer lab, and the lecturing component will take four times as much of your time. Anything other than a lecture theatre *will* fail, waste your time, and waste your students' time. Conversely, on the practical side there's no other option than dividing the group up so that they all get a guaranteed seat in the computer lab at some point.
Upvotes: 7 [selected_answer]<issue_comment>username_7: I'm going to post the IT answer here. Go with the computer lab so you can spend class time getting things done and not supporting installation issues.
Forget displaying your slides on a projector; use some type of collaboration software to display your slides on the computer sitting in front of each student.
Don't have them use their own computers, because those probably have chat programs and Facebook to distract them. Provide the gear, lessen the distractions, and leave tech support to the lab staff.
All that said, the answer by @username_1 has some real promise in getting the group working as a group instead of as a bunch of individuals. The only problem I see is that the slower ones are going to seriously frustrate the best and brightest, and conversely the slower kids are going to feel like they've been dragged behind the bus if one of the smarter kids is doing the typing.
Never underestimate pair programming in this case where the brightest can help the slowest without it becoming too excruciating.
Upvotes: 2 <issue_comment>username_8: Ideally the classroom should already be set up with everything needed for the lectures, however this is not always possible. If the classroom isn't already set up for laptop usage (as with most campuses built before about y2k), often times these classes are split into a lecture and a lab. For scheduling purposes, this way can be extremely beneficial to students trying to graduate within a fixed time frame and minimize their student debt because they can usually fit one of the labs into a packed schedule. This works well in large departments with plenty of graduate assistants to run the labs, so if that option is available, take advantage of it.
That being said, there are [pros and cons to having laptops in the classroom](https://teachingcenter.wustl.edu/2015/08/laptop-use-effects-learning-attention/). If their usage is limited to course related material (not including note taking), then it is generally good for learning.
So long as you set aside a time for in-class work (it may take several minutes for some systems to come up) and allow for partnering or group collaboration (unless laptops are mandatory for the class or department) then it shouldn't be a problem. Even if laptops are required, the group interaction can help shy or new students meet the person/people who will be their future study partner(s).
I hinted that [note taking on a device](http://www.npr.org/2016/04/17/474525392/attention-students-put-your-laptops-away) can be [counterproductive](https://www.scientificamerican.com/article/a-learning-secret-don-t-take-notes-with-a-laptop/), so if you can make lectures and notes available, it will allow greater concentration. If the lectures are not already available, they can be divided among the class members to transcribe. When I was an undergrad in nuclear engineering, we organized this on our own and it worked so well that the group list became a sort of reply-all mailing list to correct any misconceptions.
Upvotes: 1 <issue_comment>username_9: I have supported colleagues teaching R and a requirement was that students would bring their own laptops to use. These were postgraduate students studying for professional development (ecologists and environmental managers etc).
This was under the following conditions:
* The requirement for a laptop was made clear upfront prior to enrolment on the course.
* Minimum IT requirements for the course (including links to the R community site) were given upfront.
* No technical support was available from the university (other than supporting students in accessing Wi-Fi networks). This too was made clear upfront.
* Prior to teaching, guidance for installing the R programming environment was provided, together with some orientation activities (I vaguely recall the community site having this - but might be wrong).
* The teaching room was booked nearby to a PC room (which was booked as well) in case issues meant we needed to relocate.
* Two or three university laptops were booked just in case some students had issues (we didn't tell them this upfront).
The pedagogical advantage of this approach was that we were teaching students how to use software *on their own systems,* which would be the systems they'd use at home/work. Hence, users were empowered to troubleshoot issues in an environment that they would encounter again and again.
They were also adults paying their own fees, so were grown-up enough to turn off distractions. Also, as professionals they'd want to check email during breaks.
The class size was up to 25 students with one instructor and two facilitators (lucky them!).
In practice there were invariably students who didn't do the pre-work, brought sub-spec machines (I recall a beaten-up XP laptop from 2005 raising some eyebrows), etc. This was mitigated by the university laptops and PCs.
My advice is:
* Be very clear about all requirements upfront.
* Have a rationale for students to use their own laptops.
* Have a back-up plan.
Ultimately, if a student fails to take responsibility for ensuring they meet the IT requirements as specified by the course, it is likely they won't be able to learn and the tutors shouldn't be expected to make special provision for them.
Upvotes: 2 <issue_comment>username_10: Generally speaking, this request does not sound like a good idea. Laptops can be a significant cost for a student. Some may not have a laptop. Some may have a device that they already use for portable computing needs, but which is unsuitable for development (like a tablet). These people will then be forced to purchase a laptop just for your class - students already balk at paying $100-200 for "required" textbooks, how do you think $1000 for a laptop will go over?
The exception is if your school provides laptops to the students. But consider that cost is not the only factor. When I started college, I was very excited that I could finally bring my laptop to class and take notes more effectively, as we were not allowed to use devices in high school. After two days of carrying the heavy laptop around campus, my back was in excruciating pain. I never brought a laptop to school ever again unless I really had to. Keep in mind that I am healthy and have a strong back - what about students who have injury or chronic back pain and cannot carry heavy bags at all?
You could of course demand laptops regardless, and I doubt there would be repercussions even if it is unreasonable. But it is virtually guaranteed that with 90 people, you will have some sort of issue like those I describe above; it will not be as simple as asking everyone to bring one.
---
By way of suggesting a solution, I think Option 1 sounds great. You have astutely stated the key advantages. I will add that some lecture halls have small tables which make working with a laptop clumsy and unergonomic. Also, not only do students have to set up the software, it has to be compatible with their system - by using lab computers you get around this problem.
Your main objection to this appears to be the projector screen. First of all, most students will see it to begin with. For those who can't see and can't sit at a better location, you can distribute a PDF of your slides. You can also distribute paper handouts. If you are worried about paper wastage, you can print only a few copies for those who ask for them. If you print them one sided students can also recycle the handouts and use the back as scratch paper or for note taking.
If you decide to do Option 2, you should modify your policy and say laptops are strongly encouraged but not required, and explain why you believe having the laptop is helpful. However your lecture should be designed such that it is possible for anyone to follow without a laptop. If the demonstrations are essential, it should be possible for the student to take notes and then do the demonstrations at home.
Most of your students will probably be able and willing to bring the laptop. The rest can take notes and follow up in their own time at home or a library computer.
---
Another suggestion, since you intend to teach R: RStudio has a Server version, where the R code lives on a central machine and users interact with it through a webpage. This way users need only a JavaScript-capable browser; no installation is necessary. The web interface looks very similar to the RStudio desktop application, so when the students are done with the class, they can figure out how to get R set up on their own machines, and practically everything they learned from the web app will transfer.
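As a rough sketch of what that looks like for the students (assuming your IT department has set up such a server; the address below is hypothetical, though 8787 is RStudio Server's default port), they open the URL in a browser, log in, and run ordinary R code that executes on the server:

```r
# After logging in at, e.g., http://teaching-server.example.edu:8787
# (hypothetical address), students see an RStudio-like IDE in the
# browser. Everything below runs server-side, so the whole class
# shares one R version and one set of installed packages:
R.version.string  # confirm which R version the class is using
sessionInfo()     # platform details and attached packages
.libPaths()       # where packages live - on the server, not the laptop
```

This also sidesteps the cross-platform and version-mismatch issues raised in other answers, since no code ever runs on the students' own operating systems.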
Upvotes: 4 <issue_comment>username_11: We solved this problem by creating hybrids.
You can remove some workstations from a lab and create room for laptops. You can add workstations to a lecture hall. Then the students can do what works for them.
The best answer is to give students options not dictate their needs.
Upvotes: 1 <issue_comment>username_12: Go for the lab.
It is undeniable that a *computer laboratory* was built to *teach stuff on computers*. I'd recommend that everyone bring their own USB drive to store files - or even their own computer, in which case you can at least be sure of proper power outlets for everyone.
The lab is only a nuisance if your IT department can't keep the stations in working order or if they are too old even for a simple class.
Upvotes: 1
|
2017/03/24
| 696
| 3,034
|
<issue_start>username_0: I am in an MBA program and last week had a two part test for a marketing class. I was under the impression the first part was just analysis, essentially creating models and writing notes on trends so you can quickly and easily answer the questions on the test (take home, given 3 hours to complete during the regularly scheduled class time).
The first part was much more extensive and actually had 6 questions with 3 sub-questions each, and I would have liked to spend 12-15 hours on it versus the 8 I allotted myself. Because of *my* poor planning, I produced incomplete work, and the completed work was NOT up to the quality I usually produce and expect of myself. I did email my professor to ask for a possible extension and explain my situation. He would take off 25% for turning it in the next day; I did not take this option, as I felt I could do better than that by turning it in by 11 pm that night.
I am embarrassed and humbled. My professor is a very nice guy and a great teacher.
**Would it be appropriate to schedule a time to meet with him to discuss the quality of my exam and how I would have approached it differently?**
I am not looking for brownie points or extra credit or anything of the like. I just can't stop thinking about how I would have approached some of the questions differently. My ego took a hit and I'm essentially looking for redemption in a way.<issue_comment>username_1: Yes. If you would like to discuss the exam, schedule a meeting and talk with your professor about it. It seems that you have already discovered the core issues impacting your performance here - a lack of sufficient planning and the resulting lack of time - so it may be helpful to consider what you hope to learn from such a meeting. Are you hoping your professor knows some way to accomplish more work in less time? Are you looking for some confirmation that you really couldn't have done better? Your professor may have useful feedback but will likely agree with your assessments about planning.
Upvotes: 2 <issue_comment>username_2: There is one issue that might be worth discussing with the professor: Was the test expected to take as long as you estimate it would have taken you?
If so, you just need to improve your project planning, to allow for the possibility that an important task with a hard deadline will take longer than expected.
If the test should have only taken 8 hours for a properly prepared student, you have a more serious problem.
Upvotes: 2 <issue_comment>username_3: Well, your prof is not your therapist, but s/he might consider it all in a day's work anyway. The marks available for every part of a question were always indicated clearly on my exams: the 100% to be earned was visibly chopped up into fragments no larger than 5%, so students could and would manage their time accordingly. If your exam paper was not laid out that way, suggesting this to the prof would be an objectively sensible thing to do (and might soothe your ego in the process).
Upvotes: -1
|
2017/03/24
| 405
| 1,774
|
<issue_start>username_0: What would be a typical cost for a donor to start up an endowed chair for a professor (an MD) in a medical school? Are there ongoing costs?
Upvotes: 2 <issue_comment>username_2: There is one issue that might be worth discussing with the professor: Was the test expected to take as long as you estimate it would have taken you?
If so, you just need to improve your project planning, to allow for the possibility that an important task with a hard deadline will take longer than expected.
If the test should have only taken 8 hours for a properly prepared student, you have a more serious problem.
Upvotes: 2 <issue_comment>username_3: Well your prof is not your therapist but s/he might consider it all in a day's work anyway. Marks to be deserved for every part of a question were always indicated clearly on my exam. The 100% to be earned was thus visibly chopped up in fragments no larger than 5%. Students could and would time-manage accordingly. If your exam paper was not laid out that way, suggesting this to the prof would be an objectively sensible thing to do (and might balm your ego in the process).
Upvotes: -1
|
2017/03/24
| 1,089
| 4,561
|
<issue_start>username_0: Background: Just completed my PhD, have to teach one part of a graduate course in EE. Remaining course is taught by another senior prof. Course has two classes, totalling 3 hours per week, and one assignment every week which involves one or more of simulation (SPICE), some programming (MATLAB) and some hand-calculation / circuit design. The assignments are worth 40% of the course eval, the remaining being in-class exams.
How long should each assignment be? For instance, I can design the assignment such that it takes the average student (who has attended the lectures) X hours of effort. How do I choose X?<issue_comment>username_1: I use a rule of thumb that out-of-class work, including assigned reading, should be about twice the in-class hours. So, a class with 2-1/2 hours of classroom work per week should have about five hours of outside work per week.
For a 15-hour load with 2-1/2 contact hours per three credit course, that works out to 37-1/2 hours a week, a fair approximation of full time.
For any class, some students will find the work to be easy and it'll take less time than you estimate. Others will find it difficult and it will take *much* more time. You're looking for the median and a fairly low standard deviation.
Early in my teaching career, I asked some classes to keep a journal in which they recorded the time spent out of class. The responses seemed highly suspect, so now I just talk to some of the better students from time to time and ask about how long they spend.
Upvotes: 3 <issue_comment>username_2: For undergraduate courses, the usual rule of thumb is 2 hours of work per hour of lecture. As a (US) undergraduate is considered full-time if they are taking 12 credit hours, this jives well: 12 hrs lecture + 2\*12hrs work = 36 hrs /week, which is roughly equivalent to a full-time job.
On this basis, consider that a (US) graduate student is generally considered a full-time student if they are enrolled in 9 credit hours. If we consider full-time to equate to ~40hrs of involvement per week, we find that the multiplier should be closer to 3: 9 hrs lecture + 3\*9 hrs work = 36 hrs/week. Given that you are placing 40% of the grade on assignments, the extra time seems reasonable.
As always, however, feedback from your trusted students should steer the length of assignments. Several other considerations worth factoring in:
* Are your students primarily TAs / RAs / employed in industry? Are they actively working on dissertations? What sort of time demands are involved by these other activities?
* Some graduate level problems can be difficult to solve in a consistent amount of time---a typical assignment is harder to break apart as "solve for x in these 20 problems." From my personal experience as a graduate student, the time I took to complete a graduate level assignment was much more variable.
* A useful way of gauging the length of an assignment is to honestly complete it yourself (writing out complete sentences, commenting code written from scratch, etc.), preferably a few days after writing up the assignment so the problems aren't fresh in your mind, and time how long it takes you to produce solutions. This is a baseline for how fast a student could ever possibly complete the assignment; multiply it by an appropriate factor to account for thinking, debugging, erasing, etc. I have personally found that a multiplier of 5 to 6 works well for me.
Upvotes: 2 <issue_comment>username_3: I have heard before that two to three hours of homework per hour of class is appropriate. However, I think a more effective way of looking at homework is how meaningful and thought-provoking the assignment is.
Students today are very busy - many have families and jobs outside of class. If they view their weekly 6-9 hours of homework as busy work, they won't do it or they will do it with minimal attention. My suggestion is to make specific goals for each week's assignment ("I want them to be able to do X after today's lecture" or "I want them to know A, B, and C before the next lecture.") and then build the homework assignments around that. Research shows that particularly millennial students value assignments that "get to the point." So, I design reading and writing assignments that convey the goals I want in the least amount of time. So, my students only have about 2-3 hours of reading and writing each week outside of their 3-hour course. However, they always show up prepared and ready to apply their home assignments in-class!
Upvotes: 1
|
2017/03/25
| 339
| 1,584
|
<issue_start>username_0: I am working at a company after completing my PhD. When I have time, I carry out some research related to my PhD work, but not to what I do at the company. I am looking to publish this work in journals/conferences as an independent author. I assume I should use my present affiliation in the article. Will this affect the chances of getting the article published?<issue_comment>username_1: I also work at a company (and have a PhD too). My company's rules state that any invention or discovery I make while employed there becomes shared intellectual property with the company. I think you should talk with your immediate supervisor or intellectual property department (if you have one) before you embark on publishing your work.
As for the possibility of the paper being published: I personally think journal articles are accepted based on their originality and importance. Your current affiliation could be slightly helpful, but a host of other factors determine the final outcome, i.e., publication of the article.
Upvotes: 0 <issue_comment>username_2: I used "Independent Researcher" and a personal email address twice for my papers; they were published at good venues without any affiliation-related problems.
And depending on your local laws, the company's policies, the nature of your research, and how you conducted it (normal working hours or weekends, use of company equipment), you may not own the research results and may need to work with the company on publishing it under the company's affiliation.
Upvotes: 2
|
2017/03/25
| 2,654
| 10,905
|
<issue_start>username_0: What is the best way to establish your authority as the instructor in a classroom?
I am asking from the perspective of a graduate student who will be TA-ing a large class for the first time. But I think similar advice could be said for new assistant professors who still look like students.<issue_comment>username_1: The position of TA automatically comes with a certain amount of authority, which is referred to as legitimate power. Having said that, you can increase your perceived amount of authority by demonstrating expertise in the subject area (expert power). Expertise needs to be combined with the ability to actually communicate the complex material as well. It is much harder to communicate complex information than it is to be an expert in complex information.
In addition to these skills, authority can be further enhanced through softer skills such as demonstrating care for the students and providing additional support when necessary (referent power).
Upvotes: 7 [selected_answer]<issue_comment>username_2: I believe the question is directed more at establishing authority at the onset of a single class session (e.g., right before class begins, a chaotic scene with everyone talking amongst themselves). I believe authority can be established quickly by the instructor/teacher noticeably raising his/her voice above the conversational volume and addressing the group in a louder, deeper, authoritative tone.
I'm not certain I remember it correctly, but I recall reading the works of Lakov, which define the common audio boundaries we have and set; they make for very interesting reading nonetheless.
Upvotes: 2 <issue_comment>username_3: Don't allow people to interrupt you when you are being instructive. Other than that, being good at explaining things helps a lot. While the students might not seem like it, they do want to learn the material. At least insofar as is necessary for the assignments.
Upvotes: 1 <issue_comment>username_4: Make it clear to the students that attendance is not factored into their grade. If students are being disruptive, remind them that they can leave if they don't want to be there, but if they stay, you expect not to have to talk over them. This speech has solved every behavior problem I've ever had to deal with, but if the problem persists after that, you can make the implicit threat explicit by docking points from whoever is causing problems. (Make sure you have authority to do this.)
Upvotes: 2 <issue_comment>username_5: For the large majority of students who are in the class because they *want* to learn, you don't need to "establish" your authority. You already have it at the start of the course, simply because you are the TA. The thing you have to do is to avoid *losing* that established authority.
There are two important ways not to lose it. The first (which may only be obvious to you after somebody has stated the obvious!) is *to be aware that you already have it*, in exactly the same way that if somebody walks into a room wearing a police uniform, they already have authority simply because of "that badge." Of course TA's don't usually wear any distinctive uniform, but it should be clear enough to most people that you *are* the TA, and not just another student taking the course!
The second way is basically common sense: don't ask the students to do stupid stuff, and don't behave in generally unpredictable or irrational ways.
If you are the person "in charge", other people will *expect* you to take charge, and *tell them* what you want them to do - they aren't mind readers! If you have progressed through the education system as far as becoming a TA, you already have a lot of experience of other teachers and lecturers demonstrating that type of behavior (some more competently than others, of course). The only unfamiliar part of the scenario is that *you* are now the person in charge, not somebody else!
Upvotes: 5 <issue_comment>username_6: Acting is a big part of teaching. Thus, I strongly recommend the following book
* <NAME>, [Impro: Improvisation and the Theatre](https://books.google.com/books/about/Impro.html?id=EVmminvaWDQC), Routledge, November 2012.
which contains gems such as these:
---
>
> We've all observed different kinds of teachers, so if I describe three
> types of status players commonly found in the teaching profession you
> may find that you already know exactly what I mean.
>
>
> I remember **one teacher**, **whom we liked but who couldn't keep**
> **discipline**. The Headmaster made it obvious that he wanted to fire
> him, and we decided we'd better behave. Next lesson we sat in a
> spooky silence for about five minutes, and then one by one we began to
> fool about — boys jumping from table to table, acetylene-gas exploding
> in the sink, and so on. Finally, our teacher was given an excellent
> reference just to get rid of him, and he landed a headmastership at
> the other end of the county. We were left with the paradox that our
> behaviour had nothing to do with our conscious intention.
>
>
> Another teacher, who was generally disliked, never punished and yet exerted a
> ruthless discipline. In the street he walked with fixity of purpose,
> striding along and stabbing people with his eyes. Without punishing,
> or making threats, he filled us with terror. We discussed with awe
> how terrible life must be for his own children.
>
>
> A third teacher, who was much loved, never punished but kept
> excellent discipline, while remaining very human. He would joke with
> us, and then impose a mysterious stillness. In the street he looked
> upright, but relaxed, and he smiled easily.
>
>
> I thought about these teachers a lot, but I couldn't understand the
> forces operating on us. I would now say that the incompetent teacher
> was a low-status player: **he twitched**, **he made many unnecessary**
> **movements**, **he went red at the slightest annoyance**, and **he always seemed like an intruder in the classroom**. The one who filled us with terror was a compulsive high-status player. The third was a status expert,
> raising and lowering his status with great skill. The pleasure
> attached to misbehaving comes partly from the status changes you make
> in your teacher. All those jokes on teacher are to make him drop in
> status. The third teacher could cope easily with any situation by
> changing his status first.
>
>
>
---
>
> Again I change my behaviour and become authoritative. I ask them what
> I've done to create this change in my relation with them, and
> whatever they guess to be the reason — 'You're holding eye contact',
> 'You're sitting straighter' — I stop doing, yet the effect continues.
> Finally I explain that I'm keeping my head still whenever I speak, and
> that this produces great changes in the way I perceive myself and am
> perceived by others. I suggest you try it now with anyone you're with.
> Some people find it impossible to speak with a still head, and more
> curiously, some students maintain that it's still while they're
> actually jerking it about. I let such students practise in front of a
> mirror, or I use videotape. **Actors needing authority — tragic heroes
> and so on — have to learn this still head trick**. You can talk and
> waggle your head about if you play the gravedigger, but not if you
> play Hamlet. Officers are trained not to move the head while issuing
> commands.
>
>
>
---
Upvotes: 3 <issue_comment>username_4: I think a mix of confidence and knowledge/competence in one's field of study is key.
You want to avoid appearing incompetent, to keep your initial authority. You want to bring up new and interesting aspects of the subject to motivate and inspire, which is only possible with sufficient knowledge of the subject.
As an example: if asked a question, admitting you do not know is better than BS-ing. But knowing the answer is still better.
Leading by example is best; certain things are permissible and go unsaid. But if you are being interrupted, you need enough authority to lay down some ground rules. Speak loudly and clearly to command attention, so that the back of the class can hear you, without being dictatorial.
Relatability can help. Ask how their day is going and aim for natural human connections more than a business relationship. This can give you respect without "asking" to command it.
Lastly, pay attention to their reactions. Are they paying attention or distracted? Do they understand and follow what you are saying? It's best to think of how you were in a similar class as an undergrad student.
Upvotes: 2 <issue_comment>username_7: Know how the class was taught in the past and what students may be expecting. If the class has been taught in a specific way in the past, there will be expectations about how it will run and be taught. These may be good (if the class was taught well and run well) or they may be bad (if the class was not previously taught or run very well). Taking into account what the students expect or hope for will help to establish you as an effective (and thus authoritative) teacher.
Also, as others have mentioned: know how to use the teaching materials. If you use a whiteboard/blackboard, start at one side and use it however you want, but then erase the whole thing (or a panel at a time) and continue on. Don't empty a small space in the middle and keep writing in that small space...
This is an unusual example, but it was the downfall of one of my professors:
We had a 2-semester project as part of the final degree requirement and there was a class that went along with this. In the first semester it was only actually held about once a month and only for about 15-30 minutes out of its hour slot. This was sufficient for the instructor to communicate what was needed for the project deadlines and to answer questions. In the second semester we had a new professor teaching it and he really tried so hard but due to the expectation that it would be an infrequent and short class... he failed hard and no one really put effort into the things he assigned past getting the "tick marks" to get the grade. Students felt as if the extended classes were not useful and wasteful of their project-time... I feel bad for him as students were actually quite rude to him in communicating this.
(The relevance of this is that he was NOT respected by the students in this class because the students felt as if the class was wasting their time.)
Upvotes: 0 <issue_comment>username_8: I can give you some (humorous but valid) advice on what *not* to do, based on our teaching workshops. These are from decades ago, but they have stuck with me:
1. If you are short in stature, do not jump to erase writing on the higher bits of the board. It "undermines your authority".
2. Always erase using vertical (up and down) strokes. Horizontal strokes "make your butt wiggle".
Upvotes: 0
|
2017/03/25
| 982
| 4,222
|
<issue_start>username_0: Does a literature survey/review include relevant theory (from textbooks/old references), or should it only include recent publications? Should I place the theory in a separate chapter?<issue_comment>username_1: There is a difference between previous empirical findings and theory. In my experience, papers that put these in separate sections tend to be a lot clearer.
However, conventions differ a lot between disciplines. I suspect that at some point in your study program the appropriate structure of a paper was discussed. Look at that again and stick to that format.
Upvotes: 1 <issue_comment>username_2: The age of theory really does not matter. If there is a relevant contribution from the 17th century that still matters today, then it is includable. It doesn't mean you should include it, but there is no reason at all to exclude it. Because of the pressure of "publish or perish," many current articles are of poor quality. Indeed, I just wrote to authors of an article because I felt they missed a very obvious explanation.
Conversely, there were fewer scholars the further you go back in time and they were often the pinnacle of the profession. While there was less knowledge to work with, there tended to be greater rigor. There are two important reasons to quote older works. The first is when the older work implies a broader range of options than the profession has taken or unintentionally foreclosed a path of research. The second is when an important element of prior theory is wrong. A quotation from a prior author can illustrate how incorrect thinking developed. For prior theory to be wrong, it has to make perfect sense as to how it is correct. The error has to be subtle.
Your literature review should include prior theory. The scope of that review should be deep enough and broad enough to cover the discussion of your research project. You should be aware of controversies that could impact you, the timing of that discussion is irrelevant. There is no virtue to an article written in 2017 and no reason to not quote older works, even including the ancient Greek mathematical writings. There are good reasons to quote Euclid's *Elements*.
If it is in print, ever, it can be used. Relevance is up to you and your advisor. As to whether it should be in a separate chapter, we can't really help you. You are writing a book. It needs to make sense. How can you best make sense.
Upvotes: 3 [selected_answer]<issue_comment>username_3: In many disciplines there's a strong tendency to prefer quoting more recent work. This has some legitimate reasons:
* New findings/developments even in the fundamental theoretical underpinnings of the field
* Modern restatements of fundamental theory which do not dwell on issues which are no longer relevant today (e.g. disputing the existence of the Ether and Phlogiston if you're a physicist)
but there are also motivations/reasons which I believe are inappropriate:
* "I quote the book/papers I was taught from" - people who don't bother to read original statements of theories to which they only read mentions elsewhere.
* Fashion: In Sociology I know that at some point, neo-Marxism was more popular and lots of people cited that, then post-Colonialism became all the rage etc. So you would see people quoting a bunch of articles and books from 10 years back or so, almost exclusively, as though that was the alpha and the omega of social theory. If I'm mixing up my examples then never mind, it's the principle that matters.
* Timidity in quoting non-academic / primary sources: Some people seem to have been inculcated with the notion that they're only supposed to quote from respectable academic work. Not so. It is often better to quote a 19th-century rabble-rousing speech than the sedate and sometimes misrepresentative article that mentioned it in passing a century later.
So don't shy from quoting any source which is materially relevant.
---
PS - Always provide detailed references of course, especially to sources which are potentially obscure. If the description of your source doesn't fit typical fields, don't hesitate to add a comment to your bibliography explaining what that source is and how to locate it.
Upvotes: 1
|
2017/03/25
| 1,040
| 4,167
|
<issue_start>username_0: I am an undergraduate student in China, majoring in chemistry. I am now considering which advisors would be most suitable for my future research.
Actually, I have been working in a professor's lab at my university for over a year, and I am interested in the group's research topics. What concerns me is that he recruits many students. He is the only professor in our group, which has over 10 graduate students and is expected to grow to 14–16 students next year (depending on whether I join). I am hesitant because I am unsure whether he would have the time to guide me with so many students.
How many students are typical for a professor?
[EDIT]
Our group typically hosts 1-2 undergraduates per year.<issue_comment>username_1: It depends on many factors:
* Does the professor have any intense other duties like teaching, committee work, own research, being an editor, writing grants, etc. or can they focus on supervising?
* Does the professor delegate supervisory work, e.g., do PhD students have postdocs or similar as a first person to go to?
* Is there a healthy communication and collaboration within the group? Is everybody roughly informed what everybody else’s areas of expertise are so they can seek help (e.g., from the resident programming expert) directly without going through the supervisor? Is this encouraged by the supervisor?
* How much supervision does a typical student in your field require?
* How efficient is the professor in supervising?
Due to these factors, the group size you mention can work very well, but may also be a total disaster. As it’s impossible to judge these factors as an outsider, the best way is to talk to actual supervisees and look at the graduation statistics of the group (if available). If you are actually working in a professor’s group, you should be able to assess this better than anybody else.
Upvotes: 4 <issue_comment>username_2: I have worked as a postdoctoral fellow in China for two years. My experience was mainly with two institutions -- CAS/Beijing and SCAU/Guangzhou --, but I have visited a few others informally. Thus I write here from my experience as a **visiting scholar**.
My impression is that academia in China is strongly pressed to produce papers, and that all sorts of strategies are being employed by opportunistic PIs in the face of the prospect of large short-term gains and fast career ascension. Most labs I have visited were crowded and working 12/7, managed by 1-2 PIs who were only intermittently present. There is a growing interest in hiring external postdocs, with institutions competing on salary conditions (often [not met](https://academia.stackexchange.com/questions/104541/persistent-issues-with-salary-pay-as-a-postdoc-in-china-what-can-i-do)).
At the CAS I worked in a lab with many students, each working on a different project in diverse topics. Whilst the local PI had a strong physical presence and held constant meetings, all students complained that they were being pressed for results without any actual support or apprenticeship. The numerous students were also required to manage the PI's paperwork, post deliveries (most *not* work-related), peer reviews, and paper submission steps - often on Sundays. At one point that lab had 18 students sharing the same 2-room apartment, paid for out of their salary... to the PI.
I heard that such lab situations are currently quite common, and that fits with the crowded labs I witnessed. I do not think this local practice is academically or scientifically healthy.
Therefore my main advice is to search for a more balanced laboratory, perhaps considering a different culture/location, towards a more solid, sane training. If you find yourself in a bad place, don't play their game and **leave it at once**.
Good luck!
UPDATE:
I just realised I did not answer your question objectively. The best labs I have dealt with had ca. 3-5 students per professor, most of them typically Master's students. Postdocs are independent and may assist in tutoring students and writing papers; I have seen well-reputed labs with up to 4 postdocs under the same supervisor.
Upvotes: 4 [selected_answer]
|
2017/03/25
| 506
| 1,966
|
<issue_start>username_0: I'm using a PhD student as a reference for an application, and I wonder what title I should use for her. She hasn't yet got her PhD title but is still writing her thesis. **Is there a name for that position?**<issue_comment>username_1: The title would likely be "Mr." / "Mrs." / "Ms.". There is no prepended academic title that means "will likely have a doctorate at some point".
In other news, a PhD student is probably not a good reference in the first place. You should look for somebody who has been in the game a bit longer and provide a reference that more plausibly compares you against a wider range of previous students.
Upvotes: 3 <issue_comment>username_2: You can put "ABD" after her name. From [Wikipedia](https://en.wikipedia.org/wiki/All_but_dissertation):
>
> "All but dissertation" (ABD) is a mostly unofficial term identifying a stage in the process of obtaining a research doctorate in the United States and Canada.
>
>
>
You could also say
>
> <NAME>, PhD candidate.
>
>
>
Upvotes: -1 <issue_comment>username_3: Another alternative that sounds slightly more formal and hasn't been mentioned yet is *doctoral student*.
Upvotes: 2 <issue_comment>username_4: I would use <NAME>, Ph.D. Candidate
In my social sciences field (in the USA), Ph.D. Candidate is the accepted title once you defend your dissertation prospectus. Since the process is formal and sometimes arduous, we are very careful not to refer to a Ph.D. Candidate as a Ph.D. Student. Everyone distinguishes between these two ranks in their email signatures and on their websites.
Also, the OP doesn't note the type of position for which this person is being used as a reference. If it is an academic application, I would recommend a more senior reference. If you need a character or skills reference for some sort of non-academic position (where references are only checked as a formality by the HR department), this is less of a problem.
Upvotes: 2
|
2017/03/25
| 495
| 1,949
|
<issue_start>username_0: I would like to organize a work-in-progress workshop for PhD students at my university. How can I make sure that the participants really profit from the workshop and receive valuable feedback?
|
2017/03/26
| 400
| 1,806
|
<issue_start>username_0: Recently my article was accepted for a future issue of a prestigious journal published by <NAME> and Sons. After acceptance of the article, I received an email from another publisher, based in Germany, saying they are interested in publishing my work as a chapter of a book. They also offer royalty money once the book starts selling.
The copyright statement of the journal where my article is going to appear states that the author can publish a pre-print version in public repositories and can self-archive it on their own website. Moreover, it has a 12-month embargo period.
Under these circumstances, can I publish my work as a book chapter as well?<issue_comment>username_1: I am not a lawyer, but <NAME> owns the copyright of your current article, and if you just copy larger parts of it into the book chapter, the German publisher will need to obtain permission from <NAME> (that would already be the case for individual figures), and this may well involve a transfer of money. That they allow you to publish a pre-print or put it on your personal website is a concession made to you as a person, but not to other (commercial) publishers.
To avoid any of that you would need to re-write (and ideally expand) the article significantly.
Upvotes: 3 <issue_comment>username_2: Apart from the legal concerns, publishing the same article twice (even when rewritten) is obviously considered bad academic practice and self-plagiarism, unless the book is conceived as a collection of works published elsewhere. In the latter case, however, the publisher of the book may legally obtain republication rights from the publisher of the journal. Many publishers have a policy that allows issuing republication rights to third parties for a certain fee.
Upvotes: 1
|
2017/03/26
| 1,684
| 7,467
|
<issue_start>username_0: I work full time as a programmer. Since I'm an experienced programmer and have some teaching aptitude, I've been asked to give a 40-hour course in C# programming to a computer science group of 70 students.
I've given two lessons so far (4 hours total), mixing lecture and practice sessions. I think I'm doing just fine but I'd like to reduce the number of students I'll lose along the way.
I thought I would make each student more engaged if I established a conversation with them, even if that might prove time consuming. I already have an ongoing conversation with (roughly) a fourth of them, since they reached out to me to ask questions and get advice on practical matters. Plus, they did the exercises I assigned to them.
How should I act with the rest of the students? Today I thought about sending a message to each of them (we're in a Slack team) just to check whether they've understood the lessons and to offer advice in case they are stumped by simple matters they might be too shy/afraid to report.
On the other hand, I understand they are adults and that I should not babysit them. What would you do?
Thank you in advance.<issue_comment>username_1: Establishing cordial relations with students is almost always beneficial in education. By making connections with them you are providing a form of social support that helps with retention and academic performance.
Your concern about babysitting relates more to the students' expertise in the subject matter than to their age. Novice students normally require much more support than advanced students, regardless of their age.
Upvotes: 2 <issue_comment>username_2: Most instructors don't ask for feedback about the teaching or the material at this stage. However, that doesn't mean you can't or shouldn't, if you wish to.
Think about exactly what you wish to know, and then try to put that into a very short form that you can link to or include in an email or a post in your classroom management software.
For example, you might be interested in asking
* How does this course fit into your academic and career goals?
* How much time did it take you to do Hw # n?
* What is your preferred way of getting help when you have difficulties understanding some course material or completing some homework (email, course management software, office hours, special appointment)?
Upvotes: 1 <issue_comment>username_3: **TL;DR:** Instructors have many responsibilities that they need to balance in order to be successful in their careers, and it is often not feasible to reach out to each student individually. So, rather than reaching out to each individual student, I suggest you make your students aware, collectively, of additional resources, and **really be available** for those individual students who want the additional help. The lower-performing students who don't want the help or don't seek it out will fall behind and will need to do some much-needed soul searching about why they are in the course in the first place.
---
To answer your question, we need to answer another question first: "acceptable" with respect to what?
Assuming the answer to the above question is "with respect to my other responsibilities," then I would say that, **no, it is not acceptable** to reach out to every student individually (especially in your case, having 70 students at present).
When instructors need to balance their teaching and other responsibilities (such as research, service, or, in your case, your full-time job), reaching out to all students collectively is the more efficient approach; this is accomplished by making students aware of additional resources that are available (e.g., office hours or additional interaction times, as needed, practice problems, etc.). I list all of the resources available to my students in my syllabus, and I make it a point to reiterate what the available resources are verbally throughout the semester. I also send emails to groups of lower-performing students to suggest that they make some time to come in and talk to me about things that are unclear.
Now, the individual students who really want additional help will seek it out. So, what to do about the students that don't seek out the desperately-needed help? Again, from time to time, I will remind students that help is available, but it is up to them to seek it out.
I suppose your stance on how to handle this scenario may be field dependent. In my field (engineering), I think that engineers-to-be should be "trained" in such a way that encourages them to take responsibility for learning and getting to the bottom of things that are unclear to them. (Perhaps in other fields, such a stance is not as important.)
I have especially noted that, in the follow-on courses, where the topical coverage is more advanced and at a deeper level, students who don't get the memo on how to take their studies seriously and take full responsibility for their learning are really not able to keep up; so much so, in fact, that the course pace is slowed down to the point that I am not able to cover all of the topics that one would nominally expect to cover. This is especially troublesome for the students ***who do*** put their best foot forward and are having to suffer the consequences because some of their classmates cannot keep up.
Thus the free ride of having the instructor do every little thing for the lower-performing students has to end at some point, so that only those students who are willing to put in the effort are allowed to advance, while those who don't must do some self-reflection and try again.
Upvotes: 1 <issue_comment>username_4: Having taught online for many years, I can say that you will get a large proportion of students who will not engage with you. Some of these are good students; some are low-performing students. What matters is that the students know you are available to talk.
For instance, for in-person classes, I have office hours and only a small number of students will visit me during them. However, I regularly announce reminders of my office hours and when particular students are falling behind, I will reach out to them to see if they would like to visit me in my office. I do not schedule an individual meeting with all of the students in my class.
My advice is to reach out individually to students who are falling behind, and to send full-class reminders that you are available to talk or video chat about their questions or concerns. I actually think that having a quarter of your students actively engage with you without it being required is pretty good. Good luck!
Upvotes: 2 <issue_comment>username_5: No. Do some simple math: if you have 70 students and "reaching out" to each of them takes 5 minutes, that is 70 × 5 = 350 minutes, or about 5 hours and 50 minutes -- the better part of a work day. Moreover, "reaching out" via e-mail could mean getting replies at different times, meaning it would take even more time.
Your intentions are admirable. However, not everyone is going to grow up to be a programmer. Set expectations. Be humorous. Some students will find it easy; others will be completely incapable (you are doing them a favor by showing them this is not a career path for them). Invest your time identifying the ones that are struggling but putting in effort, and put your effort there. Differentiate between those and the students who want to politick for better grades.
Upvotes: 2
|
2017/03/26
| 1,306
| 5,636
|
<issue_start>username_0: As a part of my PhD project, I wrote a paper. When my supervisor read it, she said “it is perfect and complete, go ahead and publish it yourself because I cannot contribute more to it”. She really meant it and tried to do me a favor.
However, I still think it could be more beneficial for my future to have her name on my paper. Should I insist on having her as a coauthor, or might it be more useful in the future to have all the credit for a good paper to myself?<issue_comment>username_1: You should be the sole author, for two reasons. First, your supervisor did not contribute to the paper. Some journals actually make you write out what each author accomplished, so you would be at a loss there anyway. Second, a sole-authored publication will demonstrate that you can work independently, which looks great to faculty search committees! When you are on the market, they want to hire people who will become independent researchers. You demonstrate that by sole authorship! If you want to publish with your supervisor because she is well known in the field, my suggestion is that you talk with her about an additional paper project that both of you can work on.
On a side note, your supervisor might have described your paper as "perfect and complete," but this is only one reviewer's opinion. Though the paper may be at a stage suitable for journal submission, be prepared for the 2-5 blind reviewers to have different opinions about what is "perfect and complete." You still may get A LOT of revision requests from them. I'm sure you did a great job; I just don't want you to get discouraged if the reviewers end up being more critical.
Best of luck!!!!
Upvotes: 5 <issue_comment>username_2: **TL;DR: Don't list her as an author, because: 1. She isn't one. 2. It's ok for you to be the sole author.**
First it must be said that the question of what benefits you is the minor consideration in listing the authors of a paper. A scientific paper needs to be attributed to the people who performed the research and/or writeup, and that's that. While there are many cases in which it's not clear whether a person should or shouldn't count as an author because the weight of the contribution is debatable, this is not one of them. So even if it were to help you somehow to list your advisor as an author - it would be inappropriate. Unethical even.
Irrespective of that fact, it's perfectly common and acceptable, and often appreciated, for PhD candidates to write papers of their own. So you're not even risking anything.
Upvotes: 4 <issue_comment>username_3: My advice is to ask your supervisor. If she really does not want to go on the paper, ask other colleagues who might have helped you. If she hints otherwise, include her. Being alone on that paper might raise your reputation in a way, but it might also give the impression that you are grandiose, ungrateful, and not a team player -- and that you don't recognize a situation where you can indebt other people to you at no cost. If in your group everybody writes their papers with ten co-authors, chances are that being the sole author on a well-published paper will invite bad feelings from your peers.
Upvotes: -1 <issue_comment>username_4: I advise clarifying with the supervisor whether "go ahead and publish it yourself because I cannot contribute more to it" meant that you should submit it with only your name in the author list, or whether they meant that in their opinion it was ready for submission and that they had no suggestions for improvement.
At the university where I completed my PhD, and at other universities of which I have direct knowledge, it was the norm (i.e., expected) to include supervisors as co-authors independent of their direct contribution to a paper.
In the case where you are the (otherwise) sole author, the author list would read <self>, <supervisor1>, <supervisor2>, .... And in the case where you were the lead author and had contributions from others, the author list would read <self>, <co-contributor1>, <co-contributor2>, ..., <supervisor1>, <supervisor2>, ....
Additional commentary:
Originally, I was not able to find anything which states this explicitly. I have since found the following, which to me indicates that supervisors should only be listed as coauthors if they directly contribute, contradicting what was communicated verbally to me:
>
> List the authors on the title page by full names whenever possible. Please be absolutely sure you have spelled your coauthors’ names correctly. Be sure also to use the form of the names that your coauthors prefer. ***Include only those who take intellectual responsibility for the work being reported, and exclude those who have been involved only peripherally. The author list should not be used in lieu of an acknowledgments section.***
> ([Additional Information for BSc Honours Dissertation and MSc Research Projects](http://geophysics.curtin.edu.au/local/docs/unit_outlines/AdditionalNotes.pdf "ADDITIONAL INFORMATION for BSc Honours Dissertation and MSc Research Projects"))
>
>
>
On reflection, this suggests that early papers will normally have supervisors as coauthors (assuming intellectual or tangible contributions to the papers) as the PhD candidate develops as a researcher, but that later papers would only list supervisors if the supervisor has clearly contributed in the same way as would be considered for any other person to be listed as a coauthor.
(Independent of this, I have also seen academic environments where it is politically expedient to list supervisors as coauthors - to keep them happy, so to speak...)
Upvotes: 0
|
2017/03/26
| 1,825
| 8,287
|
<issue_start>username_0: For my thesis, I did some programming for a project. My supervisor seemed to be very happy with the results when I was there and promised I was going to be included in the publication. Problems started when I graduated and moved abroad. My supervisor said they couldn't replicate my results and started pushing me to continue working on the project. I left all my scripts properly organized with README files and was available to answer emails and connect remotely to do minor fixes, but I was already fully dedicated to my masters and didn't have time to do extensive extra programming. Indeed, I was doing field work and didn't have access to computers most of the time. After almost a month in which I was out of reach, I checked my email and found a communication claiming that I had stalled the whole project and caused its cancellation. I let things pass for almost two years, not having much contact with my then coworkers or supervisor, and just yesterday found that the paper was indeed submitted around six months after that final communication and accepted for publication! And I'm in the acknowledgements, albeit for a menial task that anyone could do in a couple of hours!
I can't be sure whether there was misappropriation of my work, since the code of the project hasn't been published. They might have rewritten the code, but they might also indeed have used MY code or parts of it. Unfortunately, I don't have a copy of my work, since I was doing everything remotely on the server, and I now fear I wouldn't be able to substantiate my allegation to the editors in case they ask for proof of my claims (I know, I should have kept a copy for myself, but I never expected this to happen at all...).
I noticed some red flags before leaving but didn't think they were of real concern. My supervisor seemed to resent my leaving instead of staying in his lab. Although my supervisor is very good in his field, I didn't like the overall way in which he treated us, his students -- he is the kind of supervisor who would yell and call you stupid in front of your colleagues. This is one of the reasons why I decided to leave the lab and even change fields. I don't know if this affected his intention to include me as an author, but in any case, I think they should have informed me that I was going to be mentioned in the paper, and they didn't.
Is this among the normal things that happen in academia? Should I tell the editors to either include me as an author or drop any mention of me in the acknowledgements? Also, among the authors are other students for whom I have real appreciation, and I don't want to spoil their names on their first publication. Should I just let things pass and be forgotten? (The paper has been published for more than a year already and hasn't been cited once. In my opinion it's a good paper, but not a breakthrough.) What would be the normal thing to do in academia?<issue_comment>username_1: Unless you outright had years of valuable work stolen out from under you in a way that imperils your future career in your field, and you have strong proof of what occurred that would establish to an outsider that something very seriously wrong was committed (way, way past a snub), I'd "count your blessings". From the sounds of it, you have made a clean break from the field, the lab, and most (or all) of the people. There are no huge ethical issues with the paper itself (no fraud, abuse, etc.), so you don't have to worry about guilt by association.
You don't need letters from them, you don't need to collaborate in the future, you don't need to even see them passing down the hall. It really sounds like an amazingly good outcome, with the worst-case scenario being that you didn't get co-authorship on a paper no one particularly cares that much about. And demanding to be removed from the acknowledgements would make you look like a bit of a weirdo from an outsider's perspective - "give me authorship or don't acknowledge me at all" honestly will not give anyone a good impression of you. No upside to be had.
As of right now, from an outsider's perspective, it sounds like you did reasonable work as an undergrad and moved on. You were a bit unresponsive (not checking email for a month tends to be uncommon), but quite understandably so given the big changes - though it would not have been cool to be aware of a pending collaboration and not even tell your collaborators you were not going to be "online" or reachable for weeks on end. People in the lab may have been disappointed that you weren't more interested in continuing the project to publication, so it sounds like there was some miscommunication, misaligned expectations, and you weren't all on the same page as to where you stood on the project once you graduated and moved on to something not strictly related. I don't know what conversations you had together, so we can't determine whether someone was just being demanding or whether you had indicated interest in continued involvement but then effectively didn't follow through. Not our business, but something you might consider for yourself.
Given the lack of major downside, it would be a great opportunity to learn how you might avoid a situation like this in the future, because you'll of course eventually move on from your current position and most people don't perfectly have all work tidied up and permanently finished when they leave. Reflect on how to set clear boundaries, avoid making promises (expressed or implied) you aren't dedicated to following through on, set clear limits to what you are agreeing to, communicate your availability to collaborators, have direct conversations with people to ensure everyone has matching expectations, etc. These skills will be invaluable to your future career, so no time like the present to improve on them, learn from past experience, and build up to a more happy and productive future.
Upvotes: 3 <issue_comment>username_2: As a BSc student in an academic environment, *you* are the one who is expected to learn and benefit from doing your BSc project (thesis). Academic research is not only about writing the code; in fact, mathematical ideas, numerical results, and impact on areas of application play a much more important role. Consequently, the code is usually just a means of achieving the results, not the main result of the project itself. This is particularly so for partially completed programming projects.
You mentioned that, although your code includes README files, your colleagues were not able to use it to reproduce the results and had to abandon all (or most) of it. People would not do this just to annoy or humiliate a colleague who decided to move on to another place. We have to assume that the main reason for them to drop your code was that they were genuinely unable to use it in your absence, even with sporadic support over email. This could happen for the following reasons: maybe the code is simply not good enough, or maybe it is actually a nice implementation of an algorithm that is not able to solve the problem at hand (i.e., the problem is not your coding, but the mathematical ideas behind the method).
Despite the limitations of the code/method you developed, the process itself definitely provided you with a lot of experience to reflect on and to learn from. You have successfully completed your BSc project and graduated. Congratulations on this achievement!
However, as far as the publication is concerned, it does not seem to me that your contribution is sufficient to deserve authorship. You contributed only to the programming aspect, not to the development of the mathematical ideas or the interpretation of results. The code you developed was not used to produce any results included in the paper. You did not actually contribute to writing the paper. If you believe that some of the results included in the paper are yours, you could ask to be included as a co-author. But if you merely suspect that some part of the code you developed could have been reused in a new project to produce the results, that is not a valid reason to include you as a co-author. A lot of academic research is based on using someone else's code (open-source code produced by the community / colleagues), which deserves acknowledgement but not co-authorship.
Upvotes: 2
|
2017/03/26
| 2,291
| 9,670
|
<issue_start>username_0: I realize there is a somewhat similar question posted, but my question is different in that I found a pretty big mistake in my thesis. It's such a big problem that it changes my results. I almost wish I hadn't noticed, and hadn't been so careful to go back and check everything, because I really don't know what to do about it except fix the errors, the interpretation, and the discussion (today). I defend very soon (in days). It's possible that my committee won't notice it, but I feel like the guilt might drive me insane. I'm worried that my chair noticed it in the past but didn't correct me because they feel I'm incompetent (which in some ways I am when it comes to stats). Should I rewrite it, let my chair know, and ask what to do? I need to defend soon in any case. I wonder if I could just present as though I'd already corrected it and e-mail the revised manuscript beforehand. I'm just not sure how they would perceive someone who overlooked such an obvious mistake. Any advice would be appreciated.<issue_comment>username_1: Get off of Stack Exchange and contact your advisor, right now. He/she is the most qualified person to help you understand what is going on and what to do about it.
It's possible you might have to delay your defense in order to fix it. That would be unfortunate, but not the end of the world.
On the other hand, if you know about a serious error and defend anyway in the hopes that your committee doesn't notice, that is deeply unethical. We are talking "kick you out of grad school", "revoke your degree years later" unethical. That is not an option. Forget about it.
It seems to me unlikely that your committee knows about the error but is intentionally ignoring it in order to "trap" you. That would be very inappropriate behavior on their part, and I've never heard of it happening. There would be nothing for them to gain by doing so. And even if this were the case, pretending you don't know about the error would only make things worse.
Upvotes: 7 <issue_comment>username_2: The thesis and thesis defense are less about having the results you wanted to have, and more about demonstrating that you know how to do good-quality research and can work on it somewhat independently. It's about figuring out what questions to ask and what methods can be used to find the answers, and then applying those methods to come up with answers. What the answers actually are is not as important [for the purposes of passing a thesis] as the process you took to get those answers. Your discovering this issue and taking prompt action to fix it shows attention to important details and integrity in the knowledge-discovery process. Sure, it would've been better if you'd caught it earlier, but you've caught it now, before your defense, and you're rewriting the discussion and conclusions to reflect your best analysis of the data.
In my opinion, your having found this and your efforts now to promptly fix it say more [positively] about what a thesis is supposed to evaluate than most completed theses.
Don't panic. Talk to your supervisor and committee. Tell them what you found. Revise your document to reflect the new understanding. Maybe you'll have to delay the defense a bit, but more likely you'll present at the same time and talk about what you found; the committee might require you to deliver a revised document [some weeks after the presentation] reflecting that before they sign off. That might take you some time to do but it should be OK, and will leave you with work you can feel is more solid.
Kudos to you for finding the issue and having the integrity to stand up for it. This should help you in the long run and the core evaluation at issue here, at the cost of some extra work to revise and maybe some scrambling to re-practice your revised presentation.
*Edit: Congratulations on passing!*
Upvotes: 8 [selected_answer]<issue_comment>username_3: This happened to me. I found an error in a complicated mathematical proof in the appendix to one of my papers. As I flew home to defend it I was fixing the mistake on the plane. In the defence, I told them about the error. They told me to fix it and awarded me the degree. This was a formal political theory paper; I doubt if they even read the appendix, but I'm glad I told them.
Unless this mistake blows away your results, it's best to be honest. (Actually, it's best to be honest in any case. There are lots of bullshitter academics out there; don't be one of them.)
Upvotes: 5 <issue_comment>username_4: If you could fix the thesis in under a day, it wasn't a major error. In the words of <NAME> "When in doubt, tell the truth. It will amaze your friends and confound your enemies." Just be glad for word processors. Back when I was in college, I had to type out my papers by hand on a typewriter, and created the graphs using rulers, triangles, and Rapidograph pens.
Upvotes: 4 <issue_comment>username_5: I'll add a suggestion to the excellent answers provided already:
When we find an error in our proof or even in our claims - especially in something as significant in our lives as a thesis (even if it's an M.Sc. thesis) - we tend to believe that everything is ruined and the research is useless. It really isn't, even if you can't fix it in a day. I could go into detail about why that is, but it doesn't matter now.
The important thing to remember is: **Don't make the reporting of the mistake the focus of your thesis presentation.** You should definitely be fair and open: When you get to the part which directly relies on the error, tell the committee that as you were preparing for your defense, you found a mistake in that claim / in the proof of that claim. **Don't start going on and on about the mistake and how you made it and how it invalidates everything**; make your presentation like you would if you hadn't found your mistake, and when you get to where the mistake actually happens, that's when you say what the mistake was instead of presenting the erroneous argument. Let the committee decide if they want to focus on it or whether they would rather hear the rest.
Upvotes: 4 <issue_comment>username_6: <NAME>' famous proof about elliptic curves also proves Fermat's Last Theorem, 356 years after Fermat proposed it. The 1993 proof contained an error that took Wiles over a year to fix, with the help of his former student <NAME>. Even the greats make mistakes or overlook things. This is why papers are peer reviewed in the first place. Even with the mistake, the original proof was valuable, as it showed innovative approaches to the problem.
Upvotes: 4 <issue_comment>username_7: First, speak to your advisor immediately. That person is in the best position to guide you. Second, realize that the most valuable stock-in-trade in academic life, along with competency, is integrity. Research mistakes WILL happen; that's a fact of life in the imperfect world we live in. [<NAME> did not get General Relativity right the first time he published it, and save for WWI, experimental data would have disproved the incorrect version of GR.]
Needing or waiting for someone else to point your error out (especially since you discovered it already) speaks poorly of both one's competence AND integrity and surely you don't want that outcome.
If it's too late to amend the thesis prior to its defense, it's better that they learn from you of the error than having someone else point it out. That would demonstrate both your competence AND integrity.
Upvotes: 2 <issue_comment>username_8: A good academic committee will recognise the positive significance of a candidate who proactively checks their work *and takes errors seriously*. They would see it as a plus for you, not a minus, whatever the effect on your thesis, because the thesis itself is a tool to assess you as a researcher in the field, and this action will indeed reflect well on you. (And you won't have to worry about someone else noticing it in future!)
Write them a formal note explaining what you found, and your initial assessment of its impact - extra time, any changes to the paper, etc - and tell them what you'd like (an extra week or month to rewrite that section, or update it to fix the issue, or to consider if any other part is affected and needs changing as a result).
Be matter-of-fact and cordial - and have a verbal chat with your supervisor in which you show him/her the note you have drafted and check it looks OK to them. That also gives your supervisor a heads up so they don't look foolish or caught by surprise, and a chance to give any other advice.
Upvotes: 2 <issue_comment>username_9: In independent study as an undergrad, I reviewed draft MS and PhD papers for math errors. Every paper had some error or errors; some were minor, and some were major. It was never a big deal unless the grad student got defensive. Accepting the criticism and moving on was part of the process. The students who engaged positively with my adviser to understand the flaws and gain a deeper understanding were really appreciated by their adviser and the committee.
Upvotes: 2 <issue_comment>username_10: Are you (1) a life scientist and (2) worried about the stats?
I worked as a statistical advisor for several high-profile journals in the life sciences. Your hunch that your stats are wrong is probably right. Your hunch that the examiners may not have noticed is probably also... well, I should not say too much.
Be frank about your misgivings. If you are able to make coherent arguments during your defence, that will in itself be a positive thing.
And learn stats properly, for crying out loud.
Upvotes: 0
|
2017/03/27
| 837
| 3,746
|
<issue_start>username_0: I'm going to apply for a Ph.D. position in a Nordic country, and as you know, I need a recommendation letter from my current supervisor.
I asked my supervisor to provide me with a recommendation letter covering all the research I have done in his lab. He responded that, although he is more than happy to provide one, he never gives a recommendation letter directly to a student. Instead, he will only send such a letter to a professor/university that formally asks for one.
The whole process looks problematic to me for many reasons:
1. I can't ask my new supervisor to ask my old supervisor to *ask for* a recommendation for me. No supervisor would do such a thing, because there is really tough competition between candidates; most of them already have strong recommendation letters. The only scenario I can think of is when the university itself sends such an email, and at least in Sweden there is no such procedure.
2. I have to apply via a web form inside the university's website where the application form asks for at least 2 recommendation letters (in PDF format). There is no place in that form for me to add my professor's email.
3. I haven't seen such a process anywhere, at least in the Nordic countries. Almost all my friends received their recommendation letters directly from their previous professors.
What's the best approach in such a situation?
**Edit**
I'm going to apply for a position in Sweden.<issue_comment>username_1: Having the letter of recommendation (LoR) writer send the LoR directly to the institution to which you are applying is indeed the standard procedure in the English-speaking world and increasingly beyond. In this procedure, the LoR is confidential; only the issuer and the recipient(s) should normally see it.
The process works as follows:
* Online forms usually have two "sides". The first side is for you, the applicant. You enter the name and email address of your LoR writer, after you have obtained their approval. An email will be sent to the LoR writer with a link to "their" side of the website, to which they can then upload the letter and perhaps additional information.
* For the offline process, you write your application, and the LoR writer sends in their LoR independently from you before the deadline.
However, in some countries (until recently e.g. in Germany and Austria) a different procedure is used, in which the LoR is handed to the applicant who then submits it together with their application (via website or offline).
I would contact the HR or admissions (whichever is applicable) department at the institution where you are applying and ask for clarification, then talk to your LoR writer again. If the institution explicitly requests that *you* hand in the letter, you can just relay this information to your supervisor. You may also want to make your supervisor aware of different traditions of LoR writing in different countries. Alternatively, ask the HR/admissions department whether they can accept a LoR directly from your supervisor.
Upvotes: 4 [selected_answer]<issue_comment>username_2: While uncommon, some applications do ask the applicant to submit the letter directly, as a PDF. This is very rare, and I have had many faculty simply refuse to submit a letter that way. I have also received notably redacted letters this way. Neither is good for the applicant. What I tend to do is upload a PDF with the writer's contact information and a note that the letter will be sent directly to the institution (usually to the dept. secretary, the dept. chair, or the advisor). I would then follow up within a day or so after the writer confirms that the letter was sent, to make sure the letter got where it needed to be.
Upvotes: 0
|
2017/03/27
| 408
| 1,785
|
<issue_start>username_0: I am an early-stage researcher (I just progressed into my first postdoc) and I have made some effort to keep my online presence (e.g., Google Scholar) correct and up to date. I have recently noticed that an author with a highly similar name (identical first and last name) has my publications listed on his Google Scholar page alongside his own.
I have tried contacting the author directly, as I have heard that someone who was interested in my work actually sent an e-mail to the other author. However, the other author has not responded.
Should I contact Google directly, or am I overthinking this whole situation and should I just ignore it?<issue_comment>username_1: I suggest you create an ORCID iD, as it is designed precisely to solve the problem of author homonymy.
Upvotes: 2 <issue_comment>username_2: How strange of a problem! I never run into people with the same last name as mine, so I haven't run into this before.
First, if Google Scholar is no help with removing the citation from the other author's list, does this mean that you cannot link the paper to your own list? Link it to your own page and when others cannot get a hold of that author for questions about your paper, they may try you.
Second, do you have a middle name? You may distinguish yourself between you and the other author by consistently using a middle initial.
Third, if you have your contact information on the articles in question, I would trust that a good scholar would look up the Corresponding Author information on the article itself.
Google Scholar is just Google Scholar. This must be frustrating for you, but remember that when you apply for academic jobs or tenure, this doppelgänger will not be able to take credit for your work. It is yours and will reflect your effort.
Upvotes: 0
|
2017/03/27
| 579
| 2,435
|
<issue_start>username_0: I have applied to a number of Ph.D. programs in mathematics, keeping in mind up until this point that the generally accepted wisdom is: "If you aren't fully funded, don't go."
Now, luckily, I have received full funding (TA position and full tuition waiver) from one university, and I still have five more places I'm waiting to hear back from, but the one funding offer I've gotten so far is from one of my safety schools.
I got accepted into another university that I would much better prefer attending, but without much of a chance of a full funding offer.
Now, I have already taken out significant (~$40,000) student loans for my undergraduate degree, and I by no means plan on significantly increasing that to get my Ph.D. But I came to thinking: "What if I got accepted into a really good school, got decent, but not 100%, funding, and decided to take out a small loan to attend?" So I decided I should fill out the FAFSA "just in case".
However, now that I am filling out the form, I see that it requires you to list the universities to send your information to before you receive information on how much funding you are eligible for. So I wanted to know: does supplying this information to the schools I haven't heard back from decrease my chances of getting a tuition waiver or TA position from them, since they would see that I already have funding? (Which of course I don't want.)
So is this a legitimate concern? Or should I go ahead and fill out the FAFSA?<issue_comment>username_1: You should not worry about this. First of all, you have already applied, and usually you are not supposed to add any additional documents to your application. Secondly, this information is separate, so I doubt one will influence the other. And I wouldn't want to go to a program that makes you pay more just to save itself money.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Let's imagine you attend a program that has offered full funding. They may not require you to fill out the FAFSA, but if there is any chance you'd be eligible for any external funding of any kind that would require you to fill out some form or other, you can depend upon it: the department will ask you to fill out the appropriate form.
If you have not been asked to fill out the FAFSA by a particular school, you may omit that school's name from the list. You can always have a copy sent to an additional school later.
Upvotes: 0
|
2017/03/27
| 2,798
| 10,406
|
<issue_start>username_0: I’ve frequently heard people claim that individuals who hold PhDs are not “real” doctors. These people assert that only physicians can rightfully claim this title, and that it’s inappropriate for PhD-holders to use this term.
For some reason, many also think that the MD is much more difficult to attain than a PhD in, for example, computer science.
**So: should Ph.D. holders be referred to as 'Doctor'?**
PS: I am currently a PhD student and don't know why the question is being downvoted!<issue_comment>username_1: In the modern USA the title of doctor is *valid* for both medical doctors and holders of PhDs, but *particular customs* may vary by institution. The general rule of thumb for etiquette is to refer to someone however they wish to be referred to. If a PhD holder insists on being referred to as doctor, it would be very impolite not to do so. Likewise, if an MD insists that you do not use their title, it would be similarly impolite.
In situations where it is important to avoid confusion it is common to spell it out explicitly. Rather than using the honorific use the explicit degree, for example it is very common for email signatures to look like:
<NAME>, Ph.D. in Computer Science
instead of
Dr. <NAME>
Similarly, an MD would tend to say:
<NAME>, MD, Cardiologist
or even
<NAME>, MD, Ph.D., Cardiology
I suspect that your question has another component, which is essentially whether or not it is "fair" for a Ph.D. holder to refer to themselves as doctor. This requires an assumption that the MD is more challenging to attain than a Ph.D., and that calling oneself a doctor is somehow illegitimately taking the status of a medical doctor. Let me just say that the people who have earned these degrees are generally less concerned about this than those who have not, and that the title someone puts after their name doesn't tell you very much about their individual ability, dedication, or experience.
Upvotes: 5 <issue_comment>username_2: One of the original meanings of the word "doctor" is teacher or scholar. It is literally derived from the Latin verb docēre, which means "to teach". As such, a medical doctor is literally a teacher or scholar of medicine. A computer science doctor is a teacher or scholar of computer science. The title "Dr" is just a recognition of the level of knowledge that a person has obtained in a given field through recognized academic challenges.
Upvotes: 4 <issue_comment>username_3: I've encountered this argument before.
Remember, it's not as if the term 'doctor' is protected. Two cases in point:
1. A law degree (typically three years in the US) is called a 'juris doctor'. Newly minted JDs will be quick to remind you that they, too, are doctors.
2. In parts of the U.K., calling a surgeon a 'doctor' can be taken as an insult, as historically surgeons were barbers (who'd 'doctor you up'), since barbers had the sharp tools necessary for surgery. Many U.K. surgeons go by 'Mr.'
My advice - relying on titles is pointless. Use your intellectual prowess to impress.
If all else fails, insist you go by 'Professor'... or, if in or from Germany, '<NAME>'.
Upvotes: -1 <issue_comment>username_4: Only Ph.D. holders must be referred to as doctors. Physicians have only bachelor's degrees; although the medical degree is divided into two stages in U.S. universities, physicians still only have a bachelor's degree. Doctors are researchers who have finished their dissertations and become scholars in their fields. Physicians don't write a dissertation, and all they do is "treat" people for illness, not "teach" students in universities.
Note: The word "doctor" is a Latin word (from docēre, "to teach"), and it has nothing to do with treatment or medicine.
Upvotes: -1 <issue_comment>username_5: I would like to refer to a dictionary to answer <https://en.wiktionary.org/wiki/doctor>
The word doctor (in English) can refer to
>
> 1. A physician; a member of the medical profession; one who is trained and licensed to heal the sick or injured. The final examination and qualification may award a doctor degree, in which case the post-nominal letters are D.O., DPM, M.D., DMD, DDS, DPT, DC, Pharm.D. in the US, or MBBS in the UK. *If you still feel unwell tomorrow, see your doctor.*
> 2. A person who has attained a doctorate, such as a Ph.D. or Th.D. or one of many other terminal degrees conferred by a college or university.
>
>
Outside of academic circles, the former is the commonly used definition, so without context, "doctor" will be understood as "physician". Thus a PhD holder who isn't a physician appears to be a "doctor (PhD) who isn't a doctor (physician)", and this contradiction is commonly referred to as being "not a real doctor" or "not that kind of doctor".
So I would say referring to a PhD as doctor is technically correct (and might be unambiguous with some context as in "doctor in computer science") but without context you do risk being misunderstood.
For languages other than English I don't have a good overview, but the same overloading of meanings occurs, e.g., in German ("<NAME>" is probably a male physician), while in Italian it is common to refer to yourself as "dottore" already after the master's (and then, AFAIK, the upper-case/lower-case spelling disambiguates the master's from the PhD).
Upvotes: 1 <issue_comment>username_6: In France the situation is somewhat complex. The overall answer is "yes". But hear me out.
---
Let me first spell out the theory. It is important to make the distinction between the *diploma*, the *degree*, and the *title*.
* At the end of a "*doctorat*" (PhD), you are awarded a PhD *diploma*, which confers you the university *degree* of doctor. For this you must write a *research thesis*. This is the fourth and highest university degree. (The other three degrees are, in order, *baccalauréat* = high school degree, *licence* = bachelor, and *master*, none of which grant a title).
* At the end of studies of medicine, you are awarded a State *diploma* of "doctor of medicine" (MD). However, this diploma does not confer the university *degree* of doctor. To obtain the diploma, you must write a "practice thesis" (*thèse d'exercice*), which is not at all like a PhD thesis (there is no requirement of originality, and it takes much less time -- writing a bibliographical survey is sufficient to obtain it, for example). This means that someone who "only" has a diploma of doctor must do an actual PhD in medicine before teaching in university or doing medical research, and write an actual research thesis. (Hence some people are "double doctors", a title I just made up.)
On a PhD diploma it is explicitly written "The national diploma of doctor is awarded to XXX *and confers the degree of doctor*, to enjoy the associated rights and prerogatives". The part in italics is not written on diplomas for medical doctors.
Both diplomas give you the *title* of "doctor". By law, only these diplomas give you the right of using this title. So yes, certainly, a PhD holder has the right to be called "*docteur*". MD too. But no one else.
In fact, there is a famous story here. Someone got a "*chargé de recherche*" ("scientist") position at CNRS. This is somewhat prestigious in French academia, and very competitive. It is essentially a rank of "research-only associate professor". Then he wrote an article in a magazine, signing his name "Docteur XXX". A regional journal called him out, saying he was not a real doctor, but only a "mere scientist" (an inane statement once you know that a PhD is required to get this "scientist" position\*). This eventually went to the approximate equivalent of the Supreme Court (*Cour de cassation*), and the regional journal was condemned for defamation of character in 2009. [You can read more about it here (in French).](https://www.hervecausse.info/Le-titre-de-docteur-n-appartient-pas-aux-medecins--N-est-pas-docteur-qui-veut-et-qui-l-est-a-son-honneur--Mise-au_a249.html) In 2013, the law was changed to explicitly state that PhD holders have the right to call themselves and be called "doctor" in professional settings.
So unless you want to get sued and lose (and we don't do plea deals here), you better call PhD holders "doctor" if they ask for it in France.
---
Now there is the practice. As you know, in theory, practice and theory are the same, but in practice, they differ :)
In ordinary situations, *only medical doctors are called "docteur"*. It is extremely rare for PhD holders to actually use the title, and then only in writing (usually in very formal documents). I cannot recall ever hearing someone call a PhD holder "docteur", while I have heard it numerous times for medical doctors. I have had a PhD for a few months now, and only foreigners have called me "doctor". On doors, in faculty directories, on websites... nobody ever writes "Dr X". It just doesn't happen.
So it is extremely unlikely that someone would insist that you call them "docteur" if they are not a medical doctor. (In fact even for a medical doctor it would be in bad taste for them to ask... anyway.) But if they do ask, you should oblige.
---
\* Honesty makes me want to amend this a little. The French name for the position, "*chargé de recherches*", literally means "someone who has been tasked with research". It sounds a bit bad, because it makes it sound like the person in question is a mere subordinate who does as they are told and nothing else. As I said, it's actually a permanent, research-only position, and a very competitive one at that. It's the same kind of deal as "assistant professor", who are not the assistant of anyone nowadays but still have this somewhat bad-sounding title. (In the private sector, someone with the level of responsibility of an assistant professor would certainly have a grandiose title like "Team manager"... but I digress.)
Upvotes: 3 <issue_comment>username_7: For Germany, the situation should be as follows; IANAL.
If you have your PhD degree from any university as listed in the Carnegie list (find the list here: <https://carnegieclassifications.iu.edu/>), then generally you can use the `Dr.` prefix instead of the PhD abbreviation.
(See FAQ item #18 here: <https://www.berlin.de/sen/wissenschaft/studium/abschluesse-und-titelfuehrung/haeufige-fragen/>)
In my opinion, this should generalize to the whole country.
Upvotes: 2
|
2017/03/27
| 1,046
| 4,563
|
<issue_start>username_0: I have been contacted out of the blue by a seemingly highly motivated high school student who has requested the opportunity to do a research project under my supervision (in the field of stellar astrophysics). I am not opposed to supervising such a young student, but I fear that they will lack all of the prerequisites to even get started on any kind of project.
How should I approach this?<issue_comment>username_1: Give them a minor take-home assignment -- nothing you wouldn't give as a course project. Let them solve it on their own; one of two things will happen:
1.) They see what the subject is about and study a few more years before approaching you again.
2.) You just got proof that:
    a.) this person is driven enough to work through at least a minor problem, and
    b.) this person can be taught what is needed to do the work.
Upvotes: 1 <issue_comment>username_2: I supervise high school research students every summer. It is very rewarding. But, it *is* a lot of work, so keep that in mind when deciding whether this is something you are interested in.
Besides the obvious issues (lacking background knowledge, field-specific skills, and other preparation), I find that high school students tend to need more emotional support from me than other students I work with, so that is something to consider as well.
I interview the students and ask them questions to assess their personalities. I look for the following qualities specifically in would-be high school researchers:
* Good attitude. Younger students can be a little bit immature, and this often manifests as a bad attitude. Working with a high school student who does not have a good attitude, does not take criticism well, and is just generally unpleasant to deal with, is just not worth it.
* Willingness to fail. The high school students who do research are used to being very successful at everything they do. Sometimes, research can be paralyzing to them because they aren't going to be successful at it immediately (or even after lots of work).
* Works hard. (Obvious prerequisite. Without hard work, they will not be successful and working with them will be a waste of my time.)
The high school students I supervise come to me through a K-12 outreach program run by my university. (When I am contacted by a student "out of the blue", I tell them to apply to that program.) As part of that, they commit to five weeks of full-time research in my lab in the summer. After this, some continue to work with me after school during the academic year. I would not agree to work with a high school student who wasn't prepared to make a specific commitment and identify in advance exactly when and for how long (at a minimum) they'll work on the project, because of the high probability that it would end up being a waste of my time.
Designing projects for high school students is actually similar to designing projects for undergrads, because the very strong high school students have a level of preparation that is similar to a typical undergrad sophomore or junior. The basic characteristics of such a project are:
* It should be appealing to the student. (I want to encourage them to get into science, not scare them off by giving them a niche project with very little general appeal.)
* It should be something that I am 100% confident I could do myself, if I had to. I should be able to plan out (roughly) in advance what they will do each week that they are with me, and even prepare general instructions for every week in advance (so that they are never waiting uselessly for me to tell them what to do next).
* They should be able to start doing *some* useful work after only a few weeks of topic-specific preparation. I give them very focused homework - lecture videos to watch, things to read and answer questions about, tutorials to go through - and expect them to do that mostly independently before they start.
* It should be a project where they will be able to make *some* progress after about five weeks of work.
* The project should have potential to turn into something bigger, in case the student turns out to be amazing!
I also have a bunch of material about doing research in my field, and about expectations I have of students I supervise. I've prepared these for my undergrad students, so I already have them on hand for the high school students. It is very helpful for them to have these as reference.
P.S. I have co-authored two papers with my high school students. Don't underestimate them :)
Upvotes: 4 [selected_answer]
|
2017/03/27
| 836
| 3,506
|
<issue_start>username_0: I am currently in the final semester of my master's. I have some background in Algebraic Number Theory. I have one PhD offer from a US university.
When I started getting into algebraic number theory, I did not have much trouble, as most of the prerequisites came from Galois Theory and Commutative Algebra. But through my study over the last two or three semesters, I have realized that Number Theory is a subject which requires knowledge of many different areas, like Complex Analysis, Algebraic Geometry, and Topology (subjects which I have not learned properly, or not learned at all). This has put me under quite a lot of stress: am I good enough to study Number Theory, or should I shift to something else, something more self-contained?
Is it possible (and advisable) to change my research interest to something else? What should I do?
Edit: To add to my troubles, I don't know anything about Modular Forms, Automorphic Forms, or Galois Representations.
Before voting to close this question down, I would like you to know this: I originally asked this question on Math SE. But people told me that it was more of an Academia SE question.<issue_comment>username_1: You haven't mentioned any alternatives to Number Theory. Hmm. Maybe it would be reassuring for you to have a small list in your hip pocket of possible areas.
If you don't have to decide right away, then I would suggest that you hold off on deciding, and meantime start remedying these gaps as soon as possible. Without panic, but without any shilly-shallying. Obviously, don't work on them all at once, but do start somewhere, and make a tentative map through this subject matter -- in what order, what books will you use, what topics could be skipped or skimmed over, which ones will you do in a class, which as self-study, and which in an independent study.
Even if you end up working on something other than Number Theory, this study will be helpful for you in the long run, either directly or indirectly.
If it feels daunting, think *one step at a time*, or *Rome wasn't built in a day*, or some similar phrase that helps you.
In the U.S., one is generally not expected to hit the ground already doing research, right from the first semester. It's okay to take some time and get the solid foundation one needs.
Do you have an advisor, potential advisor, temporary advisor, or mentor of some sort? If not, look around for one, and if necessary ask the department to assign you one temporarily.
Upvotes: 1 <issue_comment>username_2: If you are an intellectually-oriented person, I would advise you to first and foremost follow your own genuine interests.
Another relevant concern is career planning: what do you want to work on the rest of your life? Your education should enable your desired career.
There is also the trade-off between money (wage), type of work, and general lifestyle.
In fact, there are so many concerns to consider that you are likely to change plans as you go. Therefore, aparante's suggestion of taking it one step at a time is an excellent one.
If you have a strong and genuine interest in the program of study that has been offered to you, I'd suggest you try it out. Don't underestimate yourself or overestimate others. Just have a go and see what comes of it. That way you'll have no regrets later.
However, if this offer is more of a coincidence, or something someone else has pushed on you, examine your own interest in it carefully and let that be your guide.
Upvotes: 0
|
2017/03/27
| 1,055
| 4,059
|
<issue_start>username_0: One of the potential upsides of open access papers is that they can be accessed by anyone for free, including people who don't live or work in a university. As a result, one might think that non-"professional" researchers may access open access papers more than paywalled papers.
Is there any research/study/survey that tried to quantify to what extent open access papers are read by a larger readership than paywalled papers?<issue_comment>username_1: This answer is not exactly what you ask for — you ask about readership, but most of the research has focused on citations. The two are, of course, related, and the answer seems to be a pretty clear yes. (In my field, open access is practically the only way I find work from overseas, as many online repositories don't seem to have much international coverage beyond a handful of big general journals, leaving out the niche ones.)
* Gargouri, Yassine. “Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research.” *Public Library of Science ONE* 5.10 (2010): n.p. Web.
This article indicates that citations are more common for open access articles, and that this is a result of higher availability and access (=likely readership) by researchers. [[Link](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0013636)]
* <NAME>. “Free online availability substantially increases a paper’s impact.” *Nature* 411 (2001): 521. Print.
This comes to the same conclusion — way back in 2001. The effect is found to be quite massive: “Averaging the percentage increase across 1,494 venues containing at least five offline and five online articles results in an average of 336% (median 158%) more citations to online computer-science articles compared with offline articles published in the same venue.” [[Link](http://www.nature.com/nature/journal/v411/n6837/full/411521a0.html)]
Upvotes: 5 <issue_comment>username_2: Paywalls are very porous. Depending on your field, you might find many paywalled papers available for free through Google Scholar. I know there are major universities, at least in Europe, that encourage their staff and students to use proxies to log into libraries like ScienceDirect for free.
In my experience I googled up shared PDFs of thousands of paywalled papers in the field of economics. I only encountered a handful of papers that weren't available for free somewhere online so I contacted the authors and they sent me copies of their papers for free.
I'm not aware of any data out there that would suggest that open access papers have a larger audience nowadays.
My intuition is that top paywalled journals have more readers than top open access journals, but average paywalled journals have lower readership than average open access journals. Mediocre paywalled papers don't seem to generate enough interest for someone to share a copy online, so they are the ones that end up truly paywalled.
Sorry for the lack of data references. I just wanted to point out that any data on this question should take into account the field, the rating of the journals, and readers who google up paywalled papers for free.
Upvotes: 3 <issue_comment>username_3: The answer is 'yes'.
[Relevant section on Wikipedia](https://en.wikipedia.org/wiki/Open_access#Readership)
>
> OA articles are generally viewed online and downloaded more often than paywalled articles and that readership continues for longer. Readership is especially higher in demographics that typically lack access to subscription journals (in addition to the general population, this includes many medical practitioners, patient groups, policymakers, non-profit sector workers, industry researchers, and independent researchers). OA articles are more read on publication management programs such as Mendeley.
>
Several sources are cited; I'll just link the [first](https://doi.org/10.1096%2Ffj.11-183988) [two](https://link.springer.com/article/10.1007/s11192-015-1547-0).
You can find more effects of OA publishing in that article.
Upvotes: 1
|
2017/03/28
| 1,891
| 8,143
|
<issue_start>username_0: A few days ago there was an article on the LSE blog about scientific reproducibility which made little sense to me until I realised they were equating 'low reproducibility' with 'scientific fraud':
<http://blogs.lse.ac.uk/impactofsocialsciences/2016/07/21/could-blockchain-provide-the-technical-fix-to-solve-sciences-reproducibility-crisis/>
I've tended to assume that most reproducibility issues are poor reporting of experiments, poor recording of external factors, poor statistical analysis inflating p-values, etc., but I realise that I don't have much evidence to back that up apart from personal experience. Are there any studies that report the relative proportion of irreproducible experimental results resulting from fraud versus poor experimental/statistical/reporting standards?<issue_comment>username_1: I consider advanced science as it is done today — a few separate teams tackling a particular issue, often with very expensive equipment and/or materials — to have great potential for fraud by its very nature. Consider, for example, science in the times of Copernicus and Galileo versus science today. Back then it did not cost much to build a primitive telescope or laboratory, and anyone of even modest means could be a scientist at relatively low cost. Can you say the same for science today? How many people can afford to build and **equip** (the costly part) a modern laboratory in any field, compared with the costs a century or two ago, or even a few decades ago? (Millionaires excluded — and consider the legal issues that come with modern, advanced laboratories.)
The more expensive and more specialized research gets, the fewer people are available to tell intentional fraud from real sloppiness. As research becomes ever more specialized and more expensive to reproduce, even the people who can argue the validity of a given result grow fewer and fewer, so how can anybody *judge* whether an error was intentional or not? It's next to impossible, and it will probably get worse with time.
So the study you suggest amounts to more than impossible. High research costs, a shortage of sufficiently specialized personnel, an ever-greater abundance of claims to be checked, and — not least — doubtful interest in verifying them all make it infeasible. The more science becomes a highly specialized and costly activity, the less plausible the idea of checking every single claim made in a paper becomes. And if you cannot even spare the resources to identify all the mistakes, how can you determine whether they are intentional? There may be cases here and there where enough resources are spent to settle a particular issue, but you cannot build a ***serious*** study out of those, right? Consider the number of cases you would actually be studying against the number of *possible* cases of fraud. How can you get statistically meaningful results if you are just picking cases where fraud could be established versus cases where there *may* have been fraud? What standards would you use to separate that group from the group of errors made by sloppy research methods? What about a control group of "impeccable research" (is there even such a thing in any modern science)? Nobody can devise "firm" criteria for these categories when the effort needed to check a single claim is overwhelming relative to the number of possible claims. Just as nobody can pursue the validity of every claim, given the high costs and the lack of experienced personnel, nobody can produce a statistic of claims versus cases of fraud that has any chance of being reliable (beyond a highly case-specific, narrow-field one).
P.S. As far as the issue of reproducibility is concerned, I like to give one particular example. It's a bit hilarious, but I believe it is on point here :)
Consider the possibility that there was some "large-scale scientific conspiracy" (i.e., a case of fraud) concerning the results from the Large Hadron Collider at CERN. Say every single participant in the experiments done there was part of some "grand scientific cabal" designed to mislead humanity about the very nature of our Universe. (I have met people on the Internet who actually believe this. :) How could one be certain there is no fraud in the claims made by the scientists participating in the LHC experiments, given that no other such device exists in the whole world? I tried to argue with the "scientific cabal" believers that such a thing is impossible — there are simply too many people from too many countries doing research there, and too many watching the facility, for it to serve as some "fraud facility" lying to the poor public — but I kept running into the argument that until one can build an LHC identical to the one at CERN in one's backyard, no one can be **certain** its results aren't fraud! Hilarious, isn't it? But I was dismayed: how can one argue with someone putting up arguments like that? The funny thing is that I couldn't. How can you convince a skeptic like that? And if you can't convince him for an experiment as big and visible as the LHC, how can you convince him that other, much less prominent experiments aren't fraud too? And if you can see fraud everywhere, how can you discern it from mere sloppiness? What if you doubt the status of the very experts who have to do the discerning? What are your options then?
The way I see it, one can doubt anything and everyone endlessly; how can such an extreme skeptic ever achieve *firm* standards for anything they cannot get their hands on? Given the state current science is in, this is practically impossible. Hence one will never be able to tell when there is a legitimate case of fraud and when things just weren't thought out well enough.
Upvotes: -1 <issue_comment>username_2: If you are asking what percent of experiments are reported as reproducible when in fact they are not, I would think it is very, very low. The scientific method defines reproducible as an experiment that can be recreated, or performed by others, using the exact same method. If the scientists have any conflicts of interest, these must be reported. After any trials of an experiment/study, it is reviewed by another group of scientists. If an experiment isn't peer reviewed, it isn't a legitimate study to begin with. It is unlikely that you would be able to find a few, let alone a large portion, of the scientific community colluding and conspiring to fake a theory.
Upvotes: -1 <issue_comment>username_3: I very much doubt we "know" this in the empirical sense, as:
1. As a scientific problem, the "reproducibility crisis" is fairly new, and we're still developing the methods to really understand it.
2. It's very hard to distinguish fraud from error except in the most egregious cases. You can say, for example, that there's clearly some bias in *a body* of work in a field, but it's hard to say that any particular paper is clearly biased.
For example, is a convergence error that you chose to ignore because it was "pretty close" to the convergence criteria you set for a statistical model an error, or fraud? How about using Bayesian priors that turn out to be stronger than was probably justified? How do you distinguish fraud in the form of "nudging" a result over the line into significance from a largely insignificant effect being estimated with error that sometimes crosses the null?
Frankly, there's not even, IMO, a solid definition of "irreproducible". Is that "Finds a statistically significant result"? "Effect estimates are on the same side of the null"? "Our clinical/policy conclusions would be the same"?
Lacking even that, I'd assert it would be particularly difficult to attempt to assess fraud vs. error in any systematic fashion.
It would also be an *exceedingly* difficult study to run, as you'd need the cooperation of a bunch of people who committed fraud.
Upvotes: 3
|
2017/03/28
| 1,014
| 3,592
|
<issue_start>username_0: I'm writing a research proposal with a small maximum word count. References are included in this word count. Fortunately, the citation style is not specified.
I was wondering if there is any (widely accepted) citation style that will generally produce the shortest citations. As far as I've found, MLA seems to be the shortest as it switches author names to "et al." when there's four or more authors, whereas other citation styles that I've looked at only switch with a higher number of authors. Is there any style that would consistently produce shorter citations?<issue_comment>username_1: As @<NAME> commented, you will be hard-pressed to find a more compact in-text citation format than [IEEE](https://www.ieee.org/conferences_events/conferences/publishing/templates.html). A single citation is written [1] and several citations [2-5].
The bibliography is then listed in numeric order.
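For LaTeX users, here is a minimal sketch of one way to get such compact numeric citations (the `natbib` options shown are standard; the citation keys and `refs.bib` are placeholders):

```
\documentclass{article}
% natbib in numeric mode; sort&compress collapses [2,3,4,5] into a range
\usepackage[numbers,sort&compress]{natbib}

\begin{document}
Prior work \cite{refA,refB,refC,refD} addresses this problem.

\bibliographystyle{unsrtnat} % compact numeric style shipped with natbib
\bibliography{refs}          % assumes a refs.bib next to this file
\end{document}
```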
Upvotes: 4 [selected_answer]<issue_comment>username_2: Vancouver is fairly short. [See this guide](https://www.ncbi.nlm.nih.gov/books/NBK7256/). It is particularly common in medicine.
In terms of in-text citations, it is short.
* It uses numbers as citations.
* There are variations. But the shortest is to put the numbers as superscripts without parentheses. You can also include ranges like `1-5` for five references.
In terms of the actual references, Vancouver is also quite terse:
* It uses journal abbreviations, so the length of each citation is often shorter
* It doesn't require DOIs
* End page numbers often omit the leading number e.g., 258-60 would indicate 258 to 260
* It omits full stops after author initials
* It omits comma between author surname and initial
It is worth noting that many of these features will reduce character counts and page counts but not word counts. E.g., using "J" rather than "Journal" in the journal name will not remove a word. Thus, the benefits of these tweaks in terms of giving you space to add additional content will depend on whether your constraints are defined relative to word count or page count.
As an aside, I think the ease with which you can include heaps of references using Vancouver is a major reason why medical journals have higher impact factors.
Upvotes: 2 <issue_comment>username_3: Coming back to this question while writing a grant with a very strict word limit, I discovered that the most concise citation **AND** bibliography style must be Science (without titles).
Example:
>
> <NAME>, <NAME>, <NAME>, Curr. Biol. 26, 1911–1915 (2016).
>
You can find the citation-style-language `csl` file in the official [csl styles](https://github.com/citation-style-language/styles/blob/master/science-without-titles.csl) github repository, that you can download and use in Zotero, Mendeley, RefWorks, etc.
If you're an EndNote person, see the style file in the most popular answer to this [ResearchGate question](https://www.researchgate.net/post/Which_is_less_space_taking_short_possible_reference_style)
Upvotes: 3 <issue_comment>username_4: This option is hard to beat when the bibliography also counts towards your word/page count (e.g. in a grant application).
<https://anton.cromba.ch/2016/02/07/a-minimal-citation-stylefor-grant-proposals/>
Just in case the above page ever goes down, you can download the raw CSL from [here](https://gist.githubusercontent.com/antoncrombach/aba3782c016a11a69a2fbb407a1eacf8/raw/6d12a6ef12f201c543a4217779ca9e9b68da072c/minimal-grant-proposals.csl), save it on your computer, and load it into Zotero/Mendeley.
Upvotes: 2
|
2017/03/28
| 1,397
| 6,023
|
<issue_start>username_0: I will soon be attending a Ph.D program in mathematics. I do not know how common this practice is, but the department has a requirement that graduate students must learn a foreign language and be able to read and write mathematics in that language. The three choices are French, German, or Russian. I personally have no preference on which to learn, but I was wondering if there were other reasons that would make one language more advantageous over the others in terms of a general mathematical career. Thanks in advance for any response.<issue_comment>username_1: The decision does depend on your field. As <NAME> mentioned above, there may be original, untranslated material that is of interest to you. Also, whether there are more positions in field A in country B is something to consider.
Furthermore, in Germany/Austria/Switzerland you may initially get away with a teaching position without speaking any German, whereas in France this is less likely. I don't know about Russia.
Upvotes: 2 <issue_comment>username_2: While it may be true that, if one had to pick a *single* language in which to read mathematics, English would currently be the best choice, ... it is certainly not the case (as many people do claim) that these days there's an English version of anything worthwhile. True, it is *possible* to "get by" reading only English, but there are little problems now and then.
In particular, for mathematics, perhaps unlike computer science, things don't really become "wrong" or obsolete (although they certainly can become less stylish), and until about WWII the cutting-edge writing in mathematics was mostly in French and German. (Depending on the epoch, Russians often wrote in French or German, as did Japanese in those times. English was not the lingua franca... :)
So, for people in mathematics of roughly my age, it was *necessary* to have some reading knowledge of French and German to be professionally competent in those years. And many of the classics of those times have *not* been supplanted by English-language versions, and are still relevant at least as subtle background to contemporary work.
Also, though I don't make a hobby of tracking this, it is my impression that many French mathematicians in number theory and automorphic forms and representation theory certainly do write in French, and at a level and substance that is not at all automagically transferred to English-language writing. And many pieces in Seminaire Bourbaki, a wonderful resource, continue to be in French.
Upvotes: 4 <issue_comment>username_3: Here is my *opinion*. Of French, German, and Russian, the former two are much more important than the latter, and you will recognize this if you look into the history of mathematics. Hilbert, Riemann, Jacobi, Noether, Weierstrass, <NAME>, Klein and other members of the Göttingen school (arguably the most influential in the history of mathematics) published in German, while Cauchy, Fourier, Poincaré, Hadamard, Euler and others published in French. Associated with those names is the founding of so many fields of mathematics: analysis (rigorous calculus), complex analysis, commutative algebra — essentially all the fields that are the basic cornerstones of mathematics. One cannot say the same about Russian; there are quite a few very decent Russian mathematicians, but none of the same calibre as the mathematicians mentioned above. This makes French and German (by far) the most important languages in the history of mathematics.
In the 1930s Hitler essentially killed German mathematics; Germany's position in mathematics is nowadays merely a shadow of what it used to be. There are still some important German mathematicians, but they are few, and they mainly publish in English (Hopf, Hirzebruch, and Faltings were notable exceptions). This put French mathematics at the front, and there is a countless number of important French publications from the postwar era: Schwartz's work on distribution theory, Serre and Borel's work in algebraic topology, Serre and Grothendieck's work in algebraic geometry and commutative algebra, Serre's work in number theory, etc.
Upvotes: 0 <issue_comment>username_4: I took an oral exam to satisfy that requirement in my applied math department. I was psyched. I dusted off my French, which I hadn't used in a while. I was looking forward to the exam. To my disappointment, the page I was asked to translate was mostly symbols; the few words on the page could have been machine-translated, they were so straightforward. So in terms of satisfying your department's requirement, if your requirement is anything like mine was (and you can ask your department to give you a sample page ahead of time, to judge the expected level of fluency), pick French or German, whichever seems easier for you. (For most English speakers, that would be French.) (Avoid Russian because of the other alphabet.)
On the other hand, if your goal is to truly learn a new language, then choose from among all the languages in the world, not just those three. Choose it for more than one reason, not just to make more articles in your field accessible to you. And learn it thoroughly -- or don't bother. Speaking 200 words won't give you the benefits of knowing another language.
The benefits of speaking another language well include:
- being able to communicate with a bunch of people you wouldn't be able to otherwise
- understanding another culture, especially its humor, in a way you wouldn't be able to otherwise
- being able to get nuanced meanings from articles and books in your field (note, these nuanced meanings will not be accessible to you without a less-than-superficial study of the language, which typically takes several years to achieve)
If you want to be able to read papers in your field, and you're not sure if knowing another language will be necessary for that, then do a thorough literature search in your area, to find out which other language(s), if any, will be needed.
Upvotes: 2
|
2017/03/28
| 772
| 3,255
|
<issue_start>username_0: I got PhD admission offers from 2 Universities (say A and B) and yesterday decided to accept one of them (University A).
Today I decided to let University B know that I would not be able to join them. After they acknowledged my refusal to join, they asked for the name of University A since they keep track of the same for statistical purposes.
Is there any harm in letting them know? Or is it better to keep it private?
**Edit**
I guess I was fretting over nothing. I sent B a brief reply saying that I was going to accept the offer from A (I've already signed documents with A so I'm reasonably confident they won't leave me in the lurch). B replied wishing me the best of luck at A.
Thanks for the comments! I wasn't sure this was a common thing for universities to ask.<issue_comment>username_1: There is no harm in letting them know. Universities and departments like to know their competition — if Univ B finds out that many of their admitted students are choosing Univ A over them, they can think of ways to compete better next year.
Upvotes: 5 <issue_comment>username_2: I can't think of any harm in telling them, and in the medium term, this isn't a secret, because it will certainly be clear by the fall where you've gone since your name will probably be put on the department website. If you're uncomfortable, it's certainly fine to wait a bit until you've finalized things at the school you accepted; you don't even need to tell them you're doing that---just wait a bit before replying to the request.
It may help to know that it's standard for schools to want to know where students who reject them are going, so there's nothing unusual or suspicion raising about the question. It's useful information because it helps them assess their program and admissions practices.
Upvotes: 7 [selected_answer]<issue_comment>username_3: I work in a university's market research department. We send out surveys to people who decline an offer, with the aim of understanding:
* Who the university's closest competitors are
* Why applicants declined an offer
* How we can improve
The surveys are anonymised and there are no negative consequences from filling in the survey. Results are presented in the aggregate, i.e. charts of all anonymised respondents are presented in a document to senior managers.
Disclosing this information is optional, has no negative consequences and is a benefit to the university you declined.
Upvotes: 5 <issue_comment>username_4: No, they can figure it out themselves -- and probably do. Their Institutional Research office will periodically submit your Name and Birthday to the National Student Clearinghouse (NSC). Actually, they will probably batch-upload the names and birthdates of all new students and potential applicants to the NSC once a quarter.
The NSC will return a file listing your enrollment dates and degrees earned from any college in the US. This is a common practice at most colleges.
Upvotes: 1 <issue_comment>username_5: If you were rejected *by* those universities trying to get in, would they tell you whom they admitted ahead of you?
Of course they would, right down to the last rejected would-be undergrad.
So you should extend them the same courtesy.
Upvotes: 1
|
2017/03/28
| 1,550
| 6,814
|
<issue_start>username_0: Could one potentially get into a PhD program if they don't take college graduation requirements and instead take more classes in their field?<issue_comment>username_1: There's no single answer to whether you can do this: every school sets its own policy, although the overwhelming majority will not allow you to apply for a PhD without a Bachelor's degree.
However, even if it is allowed, you *should not* do this!
* A PhD involves a large amount of reading, writing, and self teaching. Taking a breadth of classes helps to hone your skills in these areas.
* Real world papers are often interdisciplinary, so having a breadth of knowledge will be helpful in understanding (and writing!) these
* If you have a degree and major in your subject, you will likely have enough knowledge to teach yourself things from the classes you didn't take.
* Electives give you a good opportunity to take classes in things you enjoy but aren't related to your future career
Upvotes: 3 <issue_comment>username_2: Almost certainly not. I have only once heard of someone who did not get their undergraduate being accepted. They couldn't pass the university's swimming requirement, but otherwise met the requirements. They then won the Nobel.
There is a practical issue too, and that is filtering. In order to winnow out applications, graduate schools put a variety of filters in place so that faculty do not have to review every application that is submitted. If you are not getting a bachelor's, you will almost certainly be filtered out, and even if you are the best candidate on the planet, no one will read your application to know that.
Still inside the practical, your graduation requirements were almost certainly modified by the non-major requirements at your school. To provide an example, we are preparing to do a program review. Part of reviewing the graduation requirements is making sure that things that are necessary to learn are either taught in the program *or* in the core and gen ed requirements.
No academic field is independent. Within-major requirements are shaped by the courses outside the major. Academic departments are not concerned with courses so much as *skills*. If we know that certain skills are taught in other courses, then we do not need to teach those skills within the major. Core requirements save academic departments effort, in that if some skill is commonly needed, it saves money and resources to teach it broadly in a core course rather than narrowly in a field course.
By avoiding courses outside your major you may be reducing your skills rather than increasing them. My school requires everyone to take statistics, for example, so we do not require field-specific statistical methods, though you could take that course.
Still in the practical, even if you graduate and are qualified you may not get into graduate school. There are only so many spots. You may also hate it.
Outside the immediately practical, it is uncommon to not have to leap across fields in most research. A PhD is a degree that teaches you how to do independent research. If you have no knowledge of other fields, you will not know how they think. If you do not know how they think you will make serious mistakes in your own research.
In my own doctoral work, I had to draw on work from seemingly unrelated fields. None of these are inside my field.
Although it was not taught in my doctoral program, I found myself concerned with computational algebra because of issues related to significant digits and a computer's ability to properly perform operations on very small numbers. I found myself drawn into the debate between the Frequentist, Likelihoodist, and Bayesian schools of thought in statistics, and even ended up doing reading in the now-defunct Fiducial school of statistics. I ended up needing the tools normally taught to historians, because I had to unwind how and why things were done the way they were in the past in order to explain to those in my field how we got to where we are. I found myself in the philosophy of model building and in the general philosophy of science. I ended up needing to learn programming for massive data sets in a supercomputing environment. I found tools in geology that were very useful in my field, though we do not have anything at all to do with rocks. The feldspar jockeys are useful for more than knowing what quartz is. I had to pull literature from psychology, sociology, and biology. I found myself in neuroscience, reading medical journal articles. My work is not in medicine.
If you go through your undergraduate degree with the plan of getting a doctorate, then you should take as many courses in your field as you can. But be sure to take philosophy, biology, chemistry, computer science, and French literature courses too. You may end up stunned that the French literature course turns out to be more useful than a field course.
A PhD program will teach you how to think in a disciplined manner and to learn without taking any more classes. You will want a broad base of knowledge going into a doctoral program.
If it is a choice of graduating or taking field courses, then graduate.
Upvotes: 0 <issue_comment>username_3: I agree with the previous answers that graduation requirements typically have real intellectual value, but there's another reason why failing to complete requirements in other fields looks bad to admissions committees: it suggests an inability to make yourself do things you aren't excited about.
This is a crucial skill for having a successful career. You'll periodically have to do things you find uninteresting or unpleasant, and you'll need to do them well enough that they don't stand in your way. What these things are can vary; they might be perfectly sensible things you just don't happen to enjoy (such as writing or lecturing), or meaningless bureaucratic hoops you are forced to jump through. Either way, sometimes you'll just have to do them. If you're lucky this will amount to only a tiny part of your career, and if you're brilliant people might bend the rules for you, but most people are neither lucky nor brilliant.
Every so often this derails someone's career. They enjoy and do well at 80-90% of the job, but they fall apart completely on the remaining 10-20%. They just can't make themselves do it, and they can't find a way to avoid it. This doesn't end well.
I don't want to invest years in working with a grad student unless I'm confident they can do what they need to do to have a successful career. If an applicant is too reluctant to complete graduation requirements (even requirements that I personally agree are excessive), it's a worrisome sign. I wouldn't consider this factor decisive by itself, but it would make me look at the whole application skeptically for signs of trouble.
Upvotes: 2
|
2017/03/28
| 908
| 3,760
|
<issue_start>username_0: [I apologize if it has been asked, I think it must have been but I couldn't find a similar question.]
I often receive invitations/calls for papers to write papers in journals. 95% of these are obvious spam publications, but in about 5% of cases I see a letter that's not entirely generic; e.g., they mention my previous publications on the subject\*\* and they write something that appears genuine about the publication. They also give the name of the editor-in-chief.
My question is how to assess this new journal's quality. I mean, nice journals pop up once in a while, and just because a journal is new does not mean it's necessarily "fake news".
Here are a few points I thought about:
* Wait a few months/years until I see reputable people publish in it, but then I will have missed my shot, possibly.
* Inspect the editor's reputation. If she/he appears reputable (e.g. have published in journals I know to be good), then the journal is okay.
* Inspect the publisher's "parents", for example if it's ACM, Springer, IEEE, etc., then maybe it's worthwhile (but then, I can't say definitively either way).
I would appreciate any points on the subject.
\*\* yes, these messages can nevertheless be generated automatically.<issue_comment>username_1: When 95% is junk and the other 5% is sufficiently irrelevant that neither you nor your supervisor and colleagues know about them, a safe and efficient strategy is to **ignore all journals that actively request contributions.**
I have also never heard of a legitimate journal using mass emailing to recruit editors (this is more a habit of author-pays open access journals that are looking to increase their market share).
Yes this strategy, if adopted broadly, would prevent some new journals from being known. The good news is: we already have way more journals than we could possibly wish for. It's good content and quality curation we're short of.
Upvotes: 4 <issue_comment>username_2: First, if you're only considering 5% of the invitations to publish in journals you don't know about, then it's probably a small number of them. Let's say it's 3-4 per year. That's sufficiently low to expend some effort in checking them out. So, why don't you...
1. (Assuming it's not open-access) Check with your departmental/university library to see whether they carry/used to carry this journal. If they do, pick up the latest/last copy and leaf through it; or, if it's available online through your library's website, do it that way. If you can spare the time, give it the benefit of the doubt and leaf through a couple of issues.
2. Typically, at least the table of contents will be available online even if you don't have any access privileges. Check the titles and author names of a recent, but not just-out, issue. Do you...
* Recognize titles that were cited in papers you appreciate?
* Recognize authors you hold in high regard?
* Recognize a notion/technology/theory that you find important? (And if so, was not presented thoroughly/at all before this journal was published)
if so, consider buying a single copy of that issue from your research budget or getting the library/department/research group to buy it; then it's back to suggestion 1 above.
3. Ask people you know - especially if you've noticed their names, or names of common collaborators of theirs, in TOCs of journal issues - about the journal.
4. Look for information about the people on the journal's editorial committee.
5. Look for information about the organization publishing the journal. This is mostly a lead for finding other kind of information or for getting an idea of why the journal is not better known.
**Note:** The above are not necessarily in the recommended order of action.
Upvotes: 2
|
2017/03/28
| 1,472
| 6,701
|
<issue_start>username_0: I'm kind of embarrassed to talk about this, but I've attended quite a large number of schools throughout my undergraduate career. 6 schools, in fact. Most of these schools I attended for 12 or fewer credits- I was dealing with an undiagnosed chronic physical illness, which I've now totally overcome, but which was disabling at the time.
I didn't know what I was dealing with then so I kept trying to push through school and having to withdraw from classes/schools and re-enroll somewhere else. I'm doing much better now, have brought my total GPA up to 3.52 (with a strong upward trend), and have a 4.0 GPA in my major (psychology). I'm studying hard for the GRE's and I trust that my recommenders will be able to provide strong recommendations for me. I know that my GPA could be better, but what I'm mostly worried about is the large amount of transfers on my transcript.
How should I deal with this? Should I mention in my personal statement that I was dealing with chronic illness and have overcome it? I'm worried that admissions committees will think I'm oversharing, but I feel that something as weird as 6 undergraduate institutions requires an explanation. I'm hoping this won't disqualify me from PhD programs, since I really love research in psychology and would do anything to pursue it further.
From comment: One of the schools was just somewhere I took a single course at while in high school. I left two of the schools because I lost the merit aid I needed to afford attendance due to withdrawing, and one because it was an online school and I was unhappy with the quality of the education I was getting there. I then went back to my local community college and started over, and transferred to my state university from there. I have research experience, and am planning on getting some more after undergrad. I'm hoping I can overcome this but I'm really scared.<issue_comment>username_1: On first appearance, this does seem like a significant negative of your application. I have a couple suggestions for you.
One, make sure you follow the graduate school's instructions on submitting transcripts from previous schools. Some graduate applications may ask only for transcripts from the degree-granting institution. If some of the previous five schools' credits did not transfer and were not related to your degree, you might not have to mention those at all. Be careful though: not including them when you are supposed to is considered a misrepresentation of yourself, and could invalidate your application (and even, potentially, your later graduation). On the other hand, you may be asked to provide each and every full transcript from each of your previous schools.
If you do need to include the transcripts from each of the six schools (or they all appear as transferred credits), I would highly recommend you include an explanation in your application. Their biggest concerns will be whether this number of school changes indicates an inability to commit, a tendency to give up rather than work hard to learn something difficult, and/or the presence of serious indecision about your direction. These are all serious issues for the successful completion of a PhD, during which you can expect long and difficult work on the same subject. You do not need to give much detail about your medical issues, but be sure that your explanation sufficiently accounts for the information given to them. For example, repeated withdrawals and restarts or a longer completion time make clear sense in the context of a recurring medical problem, but why *six* schools and not one or two? Medical withdrawals and restarts at the same school would seem like a more reasonable option, and this could lead them to doubt your justification.
Emphasizing your continued upwards gpa trend and your commitment to the area of psychology over the years will also help combat some of these concerns. Any out of class research will help you enormously as well as it shows you are willing to spend additional time on the subject, have the potential to succeed at PhD level research in general, and have experience to know you enjoy and are willing to commit to psychology research specifically.
Upvotes: 2 <issue_comment>username_2: When describing the reason for your less traditional background, you are right to avoid "inappropriate personal disclosure", but what you've written now is pretty good. You want most of your application to be strictly positive, so keep it short, simple, and clear.
For example, you might talk about how you came to realize that this field is right for you. You can note that when you were younger you struggled with an undiagnosed chronic medical issue, and that you attended multiple schools in trying to find a way to succeed in spite of the limitations you experienced. You were finally able to obtain successful treatment in year XXXX, and you successfully returned to Your Institution. Then continue telling your success story of all the great work you did and why you are such a great prospect, and you don't really need to return to belabor the point.
Are there some people who will balk at the nontraditional background you have? Sure - but there are plenty of people who respect struggle, persistence, and finding a way to overcome hardship. If your record is clearly much better after the point you name, then the people reviewing your application can see that, "well, they had issues before, but the problem must be solved because they've done great since then". You aren't the only one, and all the rest of the advice for grad school applies to you just fine otherwise.
I believe the key is that you need positive factors that are more recent and outshine the negative of having issues in your earlier undergrad experiences, and you show consistent positives which help to establish that clearly the issues you had before must not be an ongoing concern because you are doing well.
From personal experience, I found that having backup plans was one of the best ways to fight the fear of getting rejected from everywhere, even as you apply broadly to a number of programs to give yourself the best chance of finding the right combination of factors to get that "right fit". But it can be helpful to plan in advance what you'll do if the first round of grad school apps don't go your way, as the unknown is the scariest part. With your history of persistence in the face of adversity, remember that you've seen first hand that there is more than one way to succeed - even if it means you might not have the most direct path, there are always alternatives. Of 1000 possible paths, you only need to find one that works for you.
Upvotes: 2
|
2017/03/28
| 537
| 2,332
|
<issue_start>username_0: I was accepted into a Masters program (MS in Computer Science) at a nice university that has research groups working on the things I want to research (natural language processing).
However, looking through their course catalog, it seems they don't actually have any courses directly focusing on NLP. I like almost everything else about their program except for the lack of courses, though they do have courses in machine learning and related concepts.
Is it a bad idea to go to a school that doesn't have courses related to my exact focus?<issue_comment>username_1: Ask if they have some kind of "joker" courses (i.e., "Special topics in XYZ": can be taken several times for credit, contents to be arranged), and see if you can persuade somebody to teach one (it might take gathering enough interested students, or just asking the professor nicely). Be warned: this will probably turn out to be more a matter of hints dropped in weekly meetings, studying on your own, and being graded on a longish term project than of "regular classes".
Upvotes: 0 <issue_comment>username_2: I don't think it's necessarily a problem. At the graduate level, you're expected to learn things in many other ways besides formal courses: self-study, informal seminars, independent or supervised research, individual discussions with faculty or other students, thesis work, etc. Likewise, you'll have evidence of your learning in other forms besides course grades: published papers, your thesis (if it's a thesis-based program), recommendation letters from supervisors, etc.
So this isn't necessarily a red flag. However, it is worth bringing it up with prospective advisors or others in the program: "I want to work in NLP. What opportunities exist for building skills and doing research in this area?"
One note is that besides being accepted to the program, if you want to work in a particular research group, the faculty leader(s) of that group will have to agree. And there could be obstacles unrelated to your abilities - too many students in the group already, not enough funding, etc. So if you are really tied to working in this particular area, then before deciding to attend the program, you should talk to the group(s) that work in your area of interest, and find out if it is feasible for you to work with them.
Upvotes: 2
|
2017/03/28
| 2,106
| 9,308
|
<issue_start>username_0: I got a postdoc position and I am moving to a new university. I was contacted by my new professor about choosing a laptop that the department would buy for me.
The option they have is roughly the same model as my current private machine, the one I used to finish grad school. Thus I find another laptop unnecessary, and also a hassle (having two computers means installing the same stuff twice, etc.).
My question: is it a bad idea to refuse their offer?
Additional information:
* I don't have problems using my private computer for work, nor mixing personal and work stuff on the same machine. I did that during my graduate school.
* But also I understand the pros of getting a work computer (e.g. if the laptop is broken or stolen it will get taken care of) and that my response of refusing their offer may be strange.<issue_comment>username_1: It surely matters where you are.
In the U.S., at many universities, "privileged information" of various sorts is supposedly never to be kept on "personal" machines, as opposed to "institution-owned and maintained" ones. Or, as noted in some comments, as soon as you do have work-related data on your personal machine, that machine can become subject to Freedom-of-Information requests, and you yourself can get into various sorts of trouble for insufficiently guaranteeing its security.
A similar issue exists (depending on jurisdiction, etc.) with regard to email accounts. My (U.S. R1) university account is subject to search without much probable cause. Some of my colleagues hesitate to have any substantive, sensitive discussion by email because the University's policy is that we are *not* to delete any such email, but preserve it indefinitely. That kind of thing. (No, it's not clear how this would be enforced, nor what "pleading ignorance/technical incompetence" would result in. Maybe it's just CYA policy on the part of the institution.)
Upvotes: 8 [selected_answer]<issue_comment>username_2: First, I would strongly suggest that you have a work computer and a personal computer, and then keep those two separate for legal reasons. Although this is not the place for legal advice, and there are many other factors to consider, you should know that in general:
1. Your employer owns your work computer, and can legally confiscate it at any time and for any reason. Thus, you should consider any personal information you have on your work computer to be accessible by your employer. This includes personal information like tax forms and private correspondence. It also includes information you might not want your employer to have, like criticisms of the administration or job offers from other institutions.
2. Academics tend to have many varied endeavors inside and outside of their academic profession. Your employer probably has a very strong claim to the intellectual property rights of anything you create on their computer, even if the IP does not relate to your university job and even if you're only using generic software such as Microsoft Word.
3. Your position at a university may expose you to FERPA or HIPAA protected information, and your university may have specific expectations about how you access that data. My university insists that all laptops use whole-disk encryption because the loss or theft of unencrypted student records is a major FERPA event that must be disclosed to the government and/or public.
Second, there are some practical and legal problems with retaining your own computer from a software licensing point of view.
1. There's a high probability that a lot of the software on your current computer should no longer be used according to common academic licensing agreements. This is definitely true for certain specific software such as MATLAB, which are generally licensed to the university for use by university students and employees (this is called a "site license"). Since you are no longer a student of that university the license demands that you stop using that software.
This all depends on the specific licenses that your university has negotiated, but the situation above is very common. It almost certainly applies to any proprietary technical software you've got, and there's a very good chance that it also applies to any proprietary productivity software you've got (e.g. the Microsoft Office suite, VPN software, etc.). It may in some cases apply to the operating system itself, though this is less common today than it used to be.
2. The reverse of the above situation is also a problem. Many university licenses stipulate that software may only be installed on university owned computers or on student computers. As a postdoc you're no longer a student, so their IT department might balk at installing any work-related software on your personal computer.
Upvotes: 6 <issue_comment>username_3: It's probably best to write to them saying you would prefer to use your own computer and ask whether that's an option.
If they do require you to use their machine, you could order one with hardware very similar or compatible to yours, image the disk of your old laptop to an external hard drive, and restore it on the lab computer. No installation or configuration necessary.
Upvotes: 3 <issue_comment>username_4: Certainly there is no harm in accepting the laptop/computer from your office/university. Keeping separate machines for office and personal use is good practice. For any software issues or defects you can turn to the university's IT people. Moreover, you need not carry it daily if there is a safe place in the office where you can leave it after work.
Upvotes: 2 <issue_comment>username_5: Try asking them for something else, instead. Perhaps they don't provide you with a desktop computer? You might exchange one for the other, and have IT manage your desktop in case you'd rather avoid the hassle.
Alternatively, ask for some other research budget allowance to be made available.
There's no certainty you'll be obliged but it's worth trying.
Upvotes: 3 <issue_comment>username_6: There are two main benefits of using your personal laptop for work:
1. You have administrative access to the system. This can be really helpful, if you frequently install new software. IT departments often have good reasons for not giving administrative rights to the users.
2. Depending on your field, you may find that research projects last longer than postdoctoral affiliations. Hence it can be unwise to rely too much on employer-provided resources to do work.
On the other hand, extra laptops are rarely harmful, especially if they are similar to the ones you already use. Make your working environment easy to synchronize across multiple systems, and life becomes easier when you lose a laptop or move to the next university.
The above only applies to research that you do more for yourself than for your employer. You should use employer-provided resources for teaching, administration, and other duties, which you do directly for your employer, as well as for handling sensitive data. This remains true regardless of whether you are formally required to do so or not.
Upvotes: 3 <issue_comment>username_7: About using personal computers for work - ignoring any possible legal arguments, I would advise taking a work computer if you have the option.
First, it can save you the hassle of lugging your own around, as you can transfer data and emails between the two machines using cloud/web services (be it a private or public cloud); it also saves both of them from wear and tear.
Talking about wear and tear, your personal computer may also benefit from:
1. Not being used so much;
2. Not being carried around.
Talking from experience: when I entered my last job they did not offer the option of a Mac / MacBook Pro, and I used my own for two years. It ended up with an LCD problem after a small fall, a consequence of the laptop being moved around every day.
I also lived nearby and wanted to walk. Carrying a computer with me was often a hassle and, depending on the time of day, even unsafe.
Nowadays I have one MacBook Pro at home and another at work; one is 5 years old, the other 4.
A top-of-the-line work notebook can be costly, and accepting one can, depending on the part of the world, mean saving at least 2,000 to 5,000 dollars over a few years.
As for the question of reinstalling everything twice: for years one notebook was a clone of the other. Lately I prefer two distinct setups; at home I have more software that I paid for out of my own pocket than at work.
Upvotes: 3 <issue_comment>username_8: You seem to see two choices: Accept their computer, or refuse to accept it. The obvious third way is to tell the professor that you already have a laptop that is practically identical to what they want to buy for you, and ask whether the expense is really necessary.
Often it is: their laptop may be locked down for security reasons, or they may want the right to take it back at any moment with no research material left on your private computers, among other reasons. By asking, you either save them money or you will be told the reasons why you can't use your own laptop.
So "refusing" to take the laptop is a bad idea. Making a suggestion is much better.
Upvotes: 0
|
2017/03/28
| 1,615
| 6,464
|
<issue_start>username_0: I'm a first-year PhD student in a computer science department. As is usual at this school, my first three quarters are spent rotating with different professors, finding an adviser and a research fit that I like. However, unlike previous rotations, I can see myself working in this subfield for my PhD, and I've moved from wondering what subfield I'll be working in, to wondering how to get acclimated in this field.
I've always been a person to run an "outer loop" of self-reflection, advice-asking, and habit-forming. In undergrad, I followed a few blogs about making the most of college. However, it's been a lot harder for me to find useful advice about doing top-quality work in getting a PhD.
I hear that in the first part of your PhD, students are significantly less "productive" than in the fourth year and onward. That makes sense, and I'm becoming comfortable with the banging-head-against-wall feeling that is creeping up on me. **However, I'd like to know what the three years of "unproductive" time teaches you to do, so I know what I should focus my energy on during these years.**
I've heard it's important to read papers, develop a "taste" for useful problems, and hone in on a larger research question. However, each of those has a lot of follow-up questions that few people seem to talk about. Reading papers: how many? what about? what for? Taste: how do I get as many "data points" as possible in learning what's a useful problem?
Edit: I'm at school in the United States, and I'm unsure whether I'd want to go into academia or industry, or run a startup. If I were to bet right now, I'd say 50% academia, 30% startup, 20% industry.<issue_comment>username_1: To get the ball rolling, I will offer an answer. I invite others to edit my answer to improve it, or to write another answer.
* In the early part, spend about 5 - 10% of your time looking at journal articles and other scientific literature (if you are on top of your coursework).
* Form one or more study groups, if for no other reason than to get practice communicating about your subject area.
* Attend seminars.
* Visit office hours.
* Try to keep your notes organized.
* If there are foundational exams in your program, get some old exams and start looking at them, as time permits.
* Find fun ways of getting exercise.
Upvotes: 4 <issue_comment>username_2: In addition to @username_1's excellent advice, I would suggest:
* Establish a 'keyword' list of research terms as you read through the relevant papers. This becomes important as it will help you refine your research as time progresses.
Other advice, based on my own experience:
* Be prepared to collaborate with your supervisor to write/publish papers based on your work (I did this, published 5 articles before graduation).
* Allow some time for other interests (e.g. music)
* Create a physical workspace (or several).
* Make time for family and friends (this is a very important factor as they are often your support).
* Maintain proper sleeping patterns (time and duration).
Upvotes: 3 <issue_comment>username_3: >
> I've heard it's important to read papers, develop a "taste" for useful problems, and hone in on a larger research question. However, each of those has a lot of follow-up questions that few people seem to talk about.
>
>
>
The reason why those questions are not typically answered is because they are (a) very individual and (b) impossible to put a concrete number on. I would argue that focusing on "how much I still need to read" is not the right question to ask anyway.
>
> Reading papers: how many?
>
>
>
Until you know what the common research directions are, and what the state of the art and open problems are in the one(s) that interest you. Frankly, most students stop primarily reading and move on to primarily doing when they get bored with reading, because they feel they already know most of the important stuff.
Generally speaking, I tell my students that it's better to read 1 good paper than 5 mediocre ones. I would also not initially recommend taking the most complicated paper you can find and trying to understand *everything*. Initially, breadth is more important than trying to "get" every little detail.
I also suggest that you start thinking about what you could see yourself doing as early as possible. Do you see any follow-up questions, and do you have an idea how they could be answered? Do you see yourself conducting a similar study to the one explained in the paper? What do you still need to learn to do such a study?
>
> what about?
>
>
>
Initially: very broad. As soon as you get a feeling for what kind of papers are of particular interest to you: those.
>
> what for?
>
>
>
For three reasons: (1) to learn what scientifically the state of the art is, (2) to learn what research methods are commonly used to address which problems in the field (and, implicitly, what the typical expectations in terms of scientific rigour are, e.g., related to sample sizes), and (3) to learn how to write up and sell your research in your community.
>
> Taste: how do I get as many "data points" as possible in learning what's a useful problem?
>
>
>
Primarily by reading broadly. I am not sure what else you can do.
Upvotes: 2 <issue_comment>username_4: I will offer one additional piece of advice that I think is important:
* Set up and tune to your liking several search alerts that notify you of interesting new papers in the literature.
This is essentially the only scalable way of keeping up with research today. There are many flavors of search alerts, including
* Journal TOC alerts: give you the title & authors of every paper in the latest issue of a specific journal. Only useful for the key journals in your field.
* Preprint alerts: e.g. on the arXiv you can set up daily alerts for new papers in the field you are interested in; these are title/authors/abstract lists.
* Topical search alerts: e.g. on ScienceDirect you can set up search alerts for custom keywords. You can use this to track papers on a specific topic, papers by specific authors, etc.
* Non-paper-based: e.g. mailing lists for specific research interests/software/etc. that you want to follow, RSS feeds of blogs by scientists in your field, etc.
You probably will want to start by adding many of these, and then unsubscribe from the ones you find less useful. And your interests will probably change over time.
Upvotes: 2
|
2017/03/29
| 2,357
| 9,561
|
<issue_start>username_0: I'm writing a research paper and there's a large amount of code and data related to the study which may be of use to those who would read the study. They can confirm the correctness of the techniques, modify it for their own purposes, perform new analysis on the data, etc.
All these relevant files exist in a public repo, but I'm uncertain about the best way to reference them. Would it be a citation? Just drop the URL directly into the paper? Other ideas?<issue_comment>username_1: If the rest of your paper uses MLA/APA/Chicago/etc. citations, then these sources should be treated no differently. You should maintain consistency throughout your paper.
Upvotes: 0 <issue_comment>username_2: **Short Recommendation:** Include the URL in-text followed by a full citation and end of text reference. If using someone else's repository, do the same, and probably also include a citation to the accompanying paper, unless the authors request otherwise. Place this citation in a prominent position in the method. If the open repository is a big part of the contribution of the paper, consider also including the URL in the abstract.
### Longer Answer
Researchers in my field of psychology are starting to use the [Open Science Framework](https://osf.io/) as a repository for archiving data, code, and other materials. These repositories include a short URL that is meant to be stable (e.g., <https://osf.io/5krfq/> ).
When referencing these repositories, I have read papers that just include the URL, and others that include a full citation. For example, the full citation for the above using APA style might be:
>
> <NAME>., & <NAME>. (2017). Abrupt Strategy
> Change Underlies Gradual Performance Change: Bayesian Hierarchical
> Models of Component and Aggregate Strategy Use. Retrieved from
> <https://osf.io/5krfq>
>
>
>
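For authors managing references in BibTeX, here is a hedged sketch of how the above reference might be encoded; the entry type and field choices are mine (no formal standard exists yet), and the author initials are omitted because only the surnames appear above:

```latex
% Sketch only: repository citations are not yet standardized, so the
% @misc entry type and the fields below are one reasonable choice.
@misc{wynton_anglim_2017_repo,
  author       = {Wynton and Anglim},
  title        = {Abrupt Strategy Change Underlies Gradual Performance
                  Change: Bayesian Hierarchical Models of Component and
                  Aggregate Strategy Use},
  year         = {2017},
  howpublished = {Open Science Framework repository},
  url          = {https://osf.io/5krfq}
}
```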
So in summary, I don't think the standards for data and repository citation have been formalised sufficiently yet.
I think there are a few considerations:
* If there are any questions about the stability of the repository URL, then you need to provide more information. For example, if the repository is hosted on a standard university website, then it is quite likely that the URL might change over time. Even in the case of something like the OSF, it might still be safer to include additional information. References work well because there is redundancy.
* In general, providing a URL in the main text makes it easier for the reader to see that the repository is readily accessible.
* In many performance evaluation systems, citations of papers are counted, whereas citations of other resources may not be. This may be changing, but it is worth thinking about.
### Referencing existing repositories by other authors
If you are referencing an existing repository that other authors created, then you should look to see what requests those authors made. For example, they may want you to cite a particular paper that is linked to the repository. More generally, there is arguably an ethical and professional obligation to acknowledge those providing the repository by using a full citation.
### Referencing your own repository that accompanies a paper
If you are creating your own repository that is linked to the paper you are writing, I quite like the idea of including the URL in text as well as the end of text reference. E.g.,
>
> Data and code is available at <https://osf.io/5krfq> (Wynton & Anglim, 2017).
>
>
>
And then you include the end of text reference as above.
This has the benefit of making the URL very clear to the reader (e.g., encourages the reader to click the link) but it also has the benefits of full citations (e.g., the reference is more robust, creates a practice of citing data and code as equal to citing papers).
**Note about blind review**: If you are submitting your manuscript to a place that does blind review, you need to do another step during the review process to prevent disclosure of author identities. One approach is to put a black mark over the author names in the in-text citation and end-of-text reference. OSF also has the benefit that you can create a custom-URL that provides a read-only view of the repository with author names removed. When such a feature is not provided by your repository service, you might need to instead attach anonymised versions of the materials and black out the link.
**Where to put this citation/reference?** There is another issue of where to put this reference. E.g., Abstract, author note, first sentence of the method, some other section of the method, etc. There are several considerations:
* More people will see it if you put it in the abstract. The author note is another relatively prominent location. Thus, if you see the repository as fundamental to the contribution of your paper, then you may want to include the url in your abstract, and then include the full citation somewhere in the method.
* Some journals have conventions regarding this. e.g., some journals have badges for open data or open materials, some journals ask for a section in the method with a label like "Open Practices". If so, then it makes sense to follow these conventions.
* If you want to draw some attention to your open data and materials (but putting the url in the abstract feels excessive), then I think the first sentence of the method possibly under a section heading like "Open Practices" is a good option. Thus, it will be clear to any reader that gets to the method that these materials are available.
* If you think the repository is not that important, then it could be placed in whatever section seems most content relevant (e.g., end of the participants/data description section or somewhere in the section discussing the data analytic approach).
Personally, I like the idea of "Open Practices" (or something similar) becoming a standardized section in manuscripts where the authors explain what materials have been made open or are made to justify why they are not open.
**What name to give to repository reference**: Another issue, which I have not yet resolved, is what is the best title for repositories that accompany a paper. Possible titles:
* Identical title as accompanying paper
* Related title to accompanying paper: e.g., "Data and code for [insert paper title here]" or "Supplementary materials for [insert paper here]"
* Something very descriptive: e.g., "Data and code examining ..."
If you use the identical title as the accompanying paper, this creates the potential for ambiguity when people cite the paper or the repository. However, it might make it easier for people to find. And if getting citations to your original paper rather than the repository is particularly important, then it may be the case that citations to the repository will get counted towards the paper depending on how the citation engine (e.g., Google Scholar, Scopus, etc.) matches articles. Such repositories can also often host a pre-print of the article. In that sense the repository becomes almost a landing page for the actual publication that is not behind a publisher's paywall.
From a descriptive perspective, I think that something like "Data and code for [insert title]" seems better. It makes the link with the accompanying paper very clear, but also makes it clear that it is a distinct academic artefact.
Upvotes: 3 <issue_comment>username_3: I typically have a subsection at the end of my methods section that is like this:
>
> ### Reproducibility and open source materials
>
>
> To enable re-use of our materials and improve reproducibility and
> transparency according to the principles outlined in Marwick (2016),
> we include the entire R code used for all the analysis and
> visualizations contained in this paper in our SOM at
> <http://dx.doi.org/10.17605/OSF.IO/RTZTH>. Also in this
> version-controlled compendium are the raw data for all the tests
> reported here, as well as additional regression diagnostics and power
> tests. All of the figures, tables and statistical test results
> presented here can be independently reproduced with the code and data
> in this repository. In our SOM our code is released under the MIT
> licence, our data as CC-0, and our figures as CC-BY, to enable maximum
> re-use (for more details, see Marwick 2016).
>
>
>
That specific paragraph can be found in this published paper:
<NAME>., et al. 2017. Movement of lithics by trampling: An experiment in the Madjedbebe sediments, northern Australia. Journal of Archaeological Science 79:73-85. <http://dx.doi.org/10.1016/j.jas.2017.01.008> preprint: <https://osf.io/preprints/socarxiv/7a6h6/>
And minor variants of that paragraph can be found in most of my recent papers. By including this kind of paragraph in my publications, I am trying to fulfill the recommendations in this paper, which echoes ideas found in many other similar manifesto-like papers on reproducibility:
<NAME>, et al. 2016 Enhancing reproducibility for computational methods. Science 354(6317):1240 <http://science.sciencemag.org/content/354/6317/1240>
The Marwick 2016 citation in the paragraph above is my in-depth discussion of computational reproducibility for archaeology:
>
> <NAME>. (2016). Computational reproducibility in archaeological
> research: Basic principles and a case study of their implementation.
> Journal of Archaeological Method and Theory, doi:
> 10.1007/s10816-015-9272-9, preprint: <https://osf.io/preprints/socarxiv/q4v73>
>
>
>
Upvotes: 1
|
2017/03/29
| 1,380
| 5,948
|
<issue_start>username_0: I am close to an academic who is facing heavy criticism for having used an unorthodox grading scheme in a final exam (a multiplication was involved, to ensure that a good grade would mean decent success in two independent parts).
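To illustrate the kind of scheme I mean -- this is my own hedged sketch, not necessarily the exact formula that was used -- suppose both parts are scored out of 100 and the final mark is their normalized product:

```latex
% Illustration only: one possible multiplicative grading scheme.
\[
  \text{final} \;=\; \frac{s_1 \cdot s_2}{100},
  \qquad s_1, s_2 \in [0, 100].
\]
% Balanced work, s_1 = s_2 = 80, yields a final mark of 64, while a
% lopsided s_1 = 100, s_2 = 40 yields only 40: a high final mark
% requires decent success in *both* parts.
```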
My main question is:
>
> **Q0:** what are the main guiding principles you use when designing a test and a grading scheme? Are there any clear limitations imposed to you (legal, moral, or traditional; internal or external)?
>
>
>
Let us assume that the tests handed out are completely anonymous, so that it is *given* that the grading only depends on the test handed out, and on no other element.
In particular, of course discrimination by gender or race or religious beliefs is not acceptable, but no need to mention it since (proper) anonymity should prevent it.
I would also like to gather a diverse array of ways to test and to grade. This type of "big list" question is sometimes seen at MathOverflow; I don't know if it is welcome here, but I think seeing a variety of grading schemes and exam designs would help inform the answers to Q0.
>
> **Q1:** What kind of exam design and grading scheme did you actually use (or enforce if you did not grade but coordinated graders)? What are the motivation behind this choice, and what did you conclude from the outcome? Multiple answers for various pairs of exam design/grading scheme is better unless they directly benefit from being gathered.
>
>
>
It has been asked in a comment to specify the field, so let me add that I am more interested in scientific fields. The question clearly makes sense more generally, but is already quite broad.<issue_comment>username_1: I've designed a number of assessments, and also have had recent experience of a rather complex grade calculation going wrong for some students, so I would say first of all that it needs to be **kept simple**, so that it is transparent to students how you have arrived at the mark. Of course, there needs to be a close **correspondence to the curriculum/learning objectives**. We also try to think about applying [Bloom's taxonomy](https://en.wikipedia.org/wiki/Bloom%27s_taxonomy), so that questions mix the descriptive and the evaluative.
* Q0: We have some constraints, some of them mandated, but others really an attempt to standardise the rubrics within the department where I work. One of these, which makes a fair amount of sense, is to use an **assessment grid**, so that for each thing the student is graded on (e.g. level of critical analysis), there is an example of the kind of quality level expected within each grade boundary. We also have spent a lot of time thinking about **group work** and how to fairly grade it. A common approach for us now is to allocate an overall mark for a student group but also have an individual element, or allow for an adjustment based on individual performance. This is hard to get right, as you need to avoid a) team members not pulling their weight; but also b) one "stronger" team member taking all the work on themselves.
* Q1 These days I avoid exams in the main, and use mostly assessed presentations, vivas and poster presentations. The grading scheme is typically something like a 70/30 split between content and presentation. As I teach HCI & Information Science, there often needs to be evidence of following a *structured design process*, rather than merely giving this process lip service. This is almost always clear to see in the student work if it has been done well / at all.
Upvotes: 2 <issue_comment>username_2: My experience comes from pure math in the US, with no clear limitations. My approach depends on the course, and I usually try things a little differently each semester, but my main design principles are:
* first, determine the main things I think a student should definitely be able to do upon finishing the course (e.g., for differential calculus: compute limits and derivatives, understand the relation with tangent lines, simple optimization problems), as well as things I would like students to be able to do (e.g., more complicated optimization problems, logarithmic differentiation, conceptual mastery, ...). This gives me a set of benchmarks for students to get passing grades (meet minimum requirements), as well as higher-tiered benchmarks to get B's or A's.
* try to make an exam that tests as many of these things as possible, as independently as possible, and at varying levels of difficulty, with the constraint that a very good student should be able to finish in about half the allotted time
* try to make questions that are straightforward to grade, both for ease of my grading and to improve uniformity of grading
* try to use problems similar to what they have seen before on homework/previous exams/in-class examples, but for the most part slightly different
As for some ways I (attempt to) put this into practice (say for calculus):
* make true/false questions to test conceptual understanding (effect: most students get very close to the same number of true/false questions right, so this has little effect in differentiating grades, except for a few students who do very well or very badly)
* for the other problems (some of which the students may have to show work for, and some of which I allow them to omit work on if they wish), the grading scheme is essentially: full credit (very minor errors allowed), half credit, or no credit (effect: grading is easy, and I think rather fair)
* what I need to give out at the end are letter grades (A, B, C, D, F); based on the exam questions and the benchmarks I have in mind, I set cut-offs for the number of problems a student should get right to earn, say, an A or a C, and convert this into a number of points -- a sketch of this conversion follows the list (in practice I often find there are too many low grades, and end up looking at individual final exams to reevaluate what I think various students deserve, revising my cut-offs after grading)
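A minimal sketch of that cut-off conversion, with invented thresholds (the real ones depend on the exam and on the benchmarks I have in mind):

```python
# Hypothetical cut-offs: the minimum number of problems answered
# correctly for each letter grade. These thresholds are invented
# for illustration; revise them after grading if the distribution
# looks off (e.g., too many low grades).
CUTOFFS = [(18, "A"), (15, "B"), (12, "C"), (9, "D")]

def letter_grade(problems_correct: int) -> str:
    """Map a raw count of correct problems to a letter grade."""
    for threshold, grade in CUTOFFS:
        if problems_correct >= threshold:
            return grade
    return "F"

print(letter_grade(16))  # -> B
```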
Upvotes: 2
|
2017/03/29
| 447
| 2,227
|
<issue_start>username_0: I'm a PhD student about to begin writing my first research paper for a software engineering conference. My research is about an approach to measuring the performance of software systems. I will evaluate this approach against other state-of-the-art approaches that measure performance, comparing both the performance of the approach itself and its accuracy.
First, I wanted to know what the goal of research questions are in research papers. Once that is clarified, I believe I would be able to decide if my paper should include research questions.<issue_comment>username_1: A research question is a very important way to frame the main focus and uniqueness of the research paper. It is essential to make sure the research question is focused and relevant, so that the research presented stays 'on topic'.
This is one of the first things that people read, particularly reviewers, as it clearly and concisely shows the reader the precise, focused purpose of the research presented in the paper. Without it, there can be ambiguity about what the research is about and how the conclusions are reached.
Upvotes: 0 <issue_comment>username_2: I think it is important to understand, especially as a new author of research articles, that unless otherwise specified, the term "research question" does not necessarily need to be a sentence or sentences that end in a question mark and actually ask a specific question. On the contrary, a research question is most often not a question at all, but rather an explanation of why the work you are describing is important and how it will advance your field forward.
For example - and this is particularly true within the scientific community: the entire "Introduction" section of a research article acts as the research question. In other words, this section typically gives a background of the research area, explains what work has already been done to advance this field, focuses on what still has yet to be done, and then concludes with what parts of what has yet to be done will be addressed by this particular paper. As you can see, this presents a research question in a very practical way without actually posing any direct questions at all.
Upvotes: 2
|
2017/03/29
| 540
| 2,193
|
<issue_start>username_0: I work full time as a Chief Operating Officer for a company and would like to pursue a PhD and study the population I am working with (women workers in fisheries).
I have been researching different programs in Gender Studies centers, mostly in English speaking countries.
I want to know: is it possible to pursue a PhD while working full time and living in another country? And are there universities that are more flexible than others?<issue_comment>username_1: I would say that it is almost impossible to pursue a PhD while you are living in another country. It would also be impossible to pursue a PhD while working full-time. PhDs require full-time dedication, because you will be required not only to work on your studies, but also to contribute to the university's research advancements, and perhaps even to serve as a TA for undergraduate courses. I highly doubt there are any accredited universities that would be that flexible about you working full time and living in another country.
Upvotes: 2 <issue_comment>username_2: In regard to living in another country, at least in the US, most universities have a residency requirement (for example, [Cornell's](https://www.cs.cornell.edu/phd/requirements)) that means you have to live there for some amount of time.
As far as working full-time, sure, it *can* be done. It will be extremely difficult and it may be a challenge to convince an advisor to work with you. A PhD is hard enough doing it full-time. Part timers seem to take 7-9 years in my field and finish with fewer publications.
Something that a potential advisor will ask you is **why** you want a PhD if you already have a job and are going to continue working in that position. A PhD trains you to do research, but you aren't really doing research as a COO, are you? If you just want knowledge in the field, then take some classes or study on your own.
Upvotes: 2 <issue_comment>username_3: I’m doing it. It’s rough. I work very long days and am publishing, but it’s quite painful and I’m afraid it may be hard to compete with others with teaching experience. Don’t do it if you can avoid it. Do one or the other and give it your all.
Upvotes: 1
|
2017/03/29
| 927
| 3,422
|
<issue_start>username_0: I recently became aware that, when writing an article, my university's communications office reserves the prefix "Dr" for only those individuals with MD degrees.
For example, if <NAME> holds a PhD, an article published through this office would state, "<NAME> researches memory..." instead of "Dr <NAME> researches memory" or "<NAME>, PhD, researches memory". But if Jane holds an MD, the statement would be written, "Dr <NAME> researches memory."
The communication office's response to this is that "Dr" is most often associated with medical doctors; also, the AP guide recommends this approach.
First, is this a phenomenon in other institutions? I am at a large public university in the US, for reference. Second, the office may be correct that "Dr" is associated most with physicians, but does anyone have a citation for that? Third, I couldn't find this in the AP guide--any hints on where to look?<issue_comment>username_1: I don't have access to the [AP style guide](https://www.apstylebook.com/), but from what I can piece together "Dr" is a reserved, and required, title for medical doctors.
[USD](https://www.sandiego.edu/its/documents/about-documents/brief_ap_style.pdf) says about AP style:
>
> Academic degrees. Use the abbreviation Dr. only before the name of a person who holds a medical degree. Do not use the title Dr. before the names of people who hold other doctorate degrees or honorary doctorate degrees. In those cases, the degrees should be listed after the person’s name. (<NAME>, PhD)
>
>
>
The [Purdue OWL](https://owl.english.purdue.edu/owl/resource/735/02/) says:
>
> For example, Dr., Gov., Lt. Gov., Rep., the Rev. and Sen. are required before a person’s full name when they occur outside a direct quotation. Please note, that medical and political titles only need to be used on first reference when they appear outside of a direct quote.
>
>
>
This implies that "Dr" is a medical title.
This is consistent with what the [AP style guide](https://www.apstylebook.com/ask_the_editor_faq) says:
>
> See "doctor" for an explanation of Dr. as a title for medical doctor on first reference.
>
>
>
Overall I think your university is strictly interpreting the guidelines by only using Dr for MD degrees and leaving out other medical degrees (e.g., DDS and DO, but possibly also DAu, DBH, DC, ND, DNP, DOT, OD, or others on [this list](https://en.wikipedia.org/wiki/List_of_doctoral_degrees_in_the_US)). A quick search reveals that lots of press offices at US schools follow the AP guidelines.
Upvotes: 2 <issue_comment>username_2: The AP style guide does say that "Dr." is used to indicate medical doctor, and that academic degree (<NAME>, Ph.D.) is used to indicate doctorates. That said, if you don't like that policy, you can point out:
1) It is standard *etiquette* to use the title a person prefers, and rude to do otherwise.
2) The AP style guide is just one convention among many. Lots of institutions have their own policies (e.g. NASA's communication office has the exact opposite convention, referring to Ph.D.'s as "Dr." and referring to medical doctors as "<NAME>."). Several high-profile press institutions (including the New York Times and Wall Street Journal) do not follow the AP style guide in this regard as well.
<https://en.wikipedia.org/wiki/Doctor_(title)#United_States>
Upvotes: 3
|
2017/03/29
| 1,742
| 6,736
|
<issue_start>username_0: I'm currently working for a professor in a research institution. The typical interaction with my advisor is as follows:
>
> Me: Hi! How was your weekend/conference/trips?
>
>
> Prof: Good. Do you have any results to report to me?
>
>
> Me: (Frantically take out my report) I've done this, that, and that like what you've told me.
>
>
> Prof: Good. Keep working on it.
>
>
> Me: (Immediately leave the scene and get back to work)
>
>
>
While I understand that I cannot force him to have any casual conversation with me, it's bugging me that we talk about nothing but research everyday. I have almost no clue about who he is or what he does outside research.
Based on other questions that I've read on this site, I have a feeling this is the norm in academia, though my relationship with my undergrad research advisor was much friendlier and more casual than the one with my current advisor.
My question is: **How can I develop a more casual relationship with my advisor? Is this even desirable in academia?**
Perhaps this is just my personal preference, but I'd like to have a more casual relationship with someone that I see and talk to every day. His office is located 5m away from my workspace, and I report to him 1-2 times a day. We also bump into each other numerous times in the corridor and exchange an awkward hi. I don't like working with someone I view as an authority figure who only demands results from me, and I often become fearful of seeing or talking to him, which I don't think is healthy.
He is not a super-introvert that doesn't like talking to people, because I often see him laughing, talking, and joking around with other professors. That has never happened between two of us.
Somewhat related, but not really: [How to deal with an advisor who wants a “friendlier” relationship with me than I do?](https://academia.stackexchange.com/questions/28631/how-to-deal-with-an-advisor-who-wants-a-friendlier-relationship-with-me-than-i)
[How to maintain a good relationship with advisor when there is no need for it but I want it?](https://academia.stackexchange.com/questions/26870/how-to-maintain-a-good-relationship-with-advisor-when-there-is-no-need-for-it-bu)
EDIT: This is in the US, and he does not have any students right now, though he has had students in the past.<issue_comment>username_1: You stated a key point
>
> Perhaps this is just my **personal preference**, but I'd like to have a more casual relationship with someone that I see everyday and talk to everyday.
>
>
>
It could very well be that it is not his personal preference to take the time to foster the kind of relationship that you describe - it could also be a case of him not feeling comfortable having a casual relationship with his employees/advisees. There is no way to really know how long he has known the other professors that he speaks and laughs with.
You can not and should not force it. This is very important, some people recoil when people try and force these kinds of things.
To be honest, based on the conversation snippet, it sounds like he is not being rude at all, but just is focused on the task and on maintaining positive progress.
It is not a bad thing to have a strong *working* relationship - a polite and productive relationship is beneficial to many working relationships. I would suggest to focus on that aspect.
Upvotes: 5 [selected_answer]<issue_comment>username_2: My advisor was rather driven. I suppose if he hadn't been, he wouldn't have accomplished nearly as much as he did.
There were times when he was ready to open up, and times when he wasn't. Once, for example, he asked me to drive him to the airport. I went to the wrong airport because he was telling me how he made it through food shortages in the immediate postwar period and it was fascinating. (It was okay, I realized my mistake and got him to the correct one in time for his flight.)
In contrast, I remember running into him on a Sunday when we were walking up the steps of our building, heading to our respective offices. I was elated about a concert I had been to the night before, but I could see that he hadn't even taken in what I had said. He was completely focused on getting to his office and progressing with whatever he had in his head that day.
My spouse is like that too.
Don't take it personally.
Upvotes: 4 <issue_comment>username_3: Although it is certainly true that many faculty are socially maladroit, it is also true that there is a definite delicacy in the mentor/mentoree relationship. It is not a peer relationship, and it is far from equal in terms of "power". Thus, even with the best of intentions, it cannot truly be a "peer" relationship, even if "friendly". And I think that it is important, just as in good parenting, that as mentor/superior, one be absolutely sure at all times to be squeaky-clean, and adamantly assume the persona that one knows one should.
Sure, this does not directly *preclude* "more genuine" personal interactions, but, actually, it is completely mandatory that one stay within certain boundaries.
Some faculty have a subliminal understanding of this, but/and through nerdiness cannot behave in a way that accommodates this complication-to-life. Others clumsily ignore it, and may "hit on" their students and postdocs...
Seriously, there truly are (in my opinion) several delicacies in personal relationships with people over whom one has great power. That enormous power differential is inevitably an elephant-in-the-room, and to pretend that it's not is, for the more-powerful person, violently irresponsible, in my opinion.
So, there is a range of ways your supervisor may be "impersonal", and they are not at all necessarily negative.
(Also, as I have gradually come to understand, substantial age differences do entail substantially different perceptions, and different implicit interpretations of situations... who knew?)
Upvotes: 2 <issue_comment>username_4: Maybe there are written university/departmental guidelines on staff-student engagement.
Maybe the professor believes that casual banter with a student is a mere overture to a personal relationship unwanted by him/her.
Maybe the professor has been deceived in the past when he/she allowed the supervisor-student conversation become too casual/personal.
Maybe the professor knows that in academia - as in all work situations - *playing the man* can be a lazy subordinate's way of rising through the ranks at minimal expense of effort.
Maybe the professor wants you to go out into the world - away from academia - find a person you can truly merge minds and souls with so that your life (and academic work) may blossom as soon as possible.
Maybe all of the above.
Upvotes: 0
|
2017/03/30
| 639
| 2,691
|
<issue_start>username_0: I'm writing a paper comparing the performance of 2 algorithms (lets call them A and B). A performs better than B, however preliminary data suggested a way to significantly improve the performance of B. I made the changes to B and sure enough now B outperforms A. I've verified that it's not just a fluke of the specific inputs I was testing against, making the changes to B leads to consistently better performance.
What is the best way to write these findings up? It feels an awful lot like cherry picking to me (especially since I didn't know when I started that I could improve B and for all I know there are improvements to A that could make it better than the improved B)
But despite its questionable origins the modified algorithm does perform the task faster and I want to include that information in my paper.
What is the best approach to take?
Call the modified version Algorithm C and just add it to the paper?
Add this information to the appendix?
Something else?<issue_comment>username_1: I would comment instead of answering, but I can't, so here it goes. What I would do is state what has actually been done, which is a project in two steps: (1) enhancing the algorithms as much as possible [you have already done this to B, but not A yet, so do it] and (2) testing for efficiency. This is consistent and clean, and looks perfect even though you started with something else in your mind. (If a research project could predict *everything* one would find during research itself, no research would be needed.)
Upvotes: 2 <issue_comment>username_2: Consider this:
**What question is your paper trying to answer?**
This is the real question to consider. If I were writing a paper about average milk production of cows and my question was "How much milk does the average cow produce in a year?" and then I only selected the 10 best cows, then I am *really* answering the question "How much milk do the 10 best cows in my selection produce in a year?".
If your paper's aim is to compare two or three algorithms, then there's no problem. If you write a paper about `A` and `B`, it's no different from writing a paper about `A` and `C` or `A`, `B`, and `C`. How you arrived at the algorithms does not change the question you are answering.
Upvotes: 3 [selected_answer]<issue_comment>username_3: So you have A, B and B'. If A and B have been already presented in the literature but a comparison has not been made, an article presenting B' and comparing it with A and B seems useful. There is no cherry picking, since you are not ignoring data. It may be helpful to include your thoughts on potential improvements to A or B that you did not pursue.
Upvotes: 1
|
2017/03/30
| 616
| 2,927
|
<issue_start>username_0: I have recently completed my Ph.D. and will be joining industry in an analytics role where publishing is not mandatory. I want to keep publishing to ensure I can return to academia if I feel like it someday. Can I keep sending papers to journals and conferences using my old academic affiliation?<issue_comment>username_1: When you are really not affiliated with the institution anymore, you should not put it on your publications. This is certainly true when the work you are publishing was not initiated during your stay at the institution.
Many institutions can offer you a 'zero hours' contract. In this scheme you remain affiliated with the institution and can use its facilities (e.g., email address, etc.), but do not receive any monetary compensation.
If you agree to such an arrangement with your previous institution, you obviously can still use the affiliation on your publications.
Upvotes: 4 [selected_answer]<issue_comment>username_2: I have a colleague in urban studies who continues to publish research even though she is not currently in academia and runs a non-profit. She lists the non-profit she directs as her affiliation on publications. In her case, it appears impressive that she is a "practitioner-scholar." I don't know what your exact discipline is, but if your current job is okay with you publishing your work while employed there, why not list your employer? That doesn't affect the quality of your work. You can also credit your former institution in the author's note or author's bio, if there is one. Hope this helps!
Upvotes: 2 <issue_comment>username_3: It depends considerably on whether the research you are publishing/presenting was done at your former institution or not.
Case 1: the research was done at your former institution
--------------------------------------------------------
Your former institution supported your research (office, computer, lab, ...). The number of publications is an important metric when research institutions are evaluated. If you did your research at that institution, it is fair to list it as your affiliation or that of one of the co-authors. The publication is then counted as that institution's publication.
However, your former institution (head of department or similar) needs to be informed about the publication and should be asked in advance whether it wants the institution indicated as your affiliation.
Case 2: the research was done after you left your former institution
--------------------------------------------------------------------
In the second case, your former institution should not be mentioned, because there is no connection between your current research and that institution.
However, if you are still using resources of that institution (e.g. a high performance computing cluster), you should discuss it with the responsible person there.
Upvotes: 2
|
2017/03/30
| 1,024
| 4,327
|
<issue_start>username_0: I do experiments on learning and instruction for my dissertation. There are almost no risks in participating in our studies--except maybe boredom, since they tend to run for 30 minutes to 1 hour. We get a lot of unmotivated students who don't really take the experiment tasks seriously. But in spite of this, they never get penalized if they complete the experiment. That means they get their compensation (money or course credit), and we don't check their answers before they walk out of the lab.
However, what should I do if a student withdraws participation? Should they still be given monetary compensation or course credit? Would it be ethical to specify that they will only be compensated if they complete the study? This hasn't happened to me yet, but I am running an online study and it will be harder to keep track of whether participants really complete the study or not.<issue_comment>username_1: I don't see any ethical problem in not compensating people for work they don't do. However, you still have to frame the matter in a clear way in order to avoid frustration/litigation. Just state explicitly that a student will receive compensation only if he/she completes the test.
As for motivating students to give you meaningful answers, that is a completely different story, and a much more complex one.
Upvotes: 3 <issue_comment>username_2: I have served on the IRB at two research institutions. The first question I would ask is: what does your consent form state about how subjects are compensated if they withdraw? If your consent form does not address this, you should compensate the subjects what you promised (whether they withdraw or not) until the end of the study, or else amend your IRB protocol.
If this is a problem with a number of subjects, I suggest amending your IRB protocol. If your study is minimal risk, approval should (typically) not take long. It is not uncommon to prorate compensation. For instance, those who initiate the study but do not complete it can receive half or some other fraction of the compensation. If your IRB approves this, you would still compensate all participants, but give an additional incentive for completing the study.
You could amend so that those who withdraw from the study do not receive any compensation, but IRBs tend to frown upon this, so you may waste time waiting for the IRB to request a change to your amendment.
Whatever you do, you should not break protocol from what your IRB has already approved. You can also talk to a representative of the IRB and ask what they suggest for an amendment - this could get your amendment request approved faster.
Upvotes: 3 [selected_answer]<issue_comment>username_3: Providing course credit and monetary compensation are very different.
In terms of monetary compensation, the IRBs that I am aware of want to make sure that the money does not influence the subject's consent. Paying a subject $1000 for an hour of their time would probably be considered an undue inducement. Paying a subject $1000 for 100 hours of their time would be considered a reasonable reimbursement for their time and inconvenience. If, however, the payment is based on completing the study, then after they complete 99 hours of the study, they would effectively be paid $1000 for that last hour, and the IRB is not going to be happy. For a 1-hour study, an IRB might go either way: $10, or whatever the payment is, is unlikely to be an undue inducement, but $10 is also not an unreasonable inconvenience allowance for 15 minutes. Whatever you decide, you need to get IRB approval.
Class credit is a whole different issue (cf. [Are there ethical guidelines prohibiting penalties against potential research subjects who have not yet provided consent?](https://academia.stackexchange.com/questions/38290/are-there-ethical-guidelines-prohibiting-penalties-against-potential-research-su/38300#38300)). In my experience, students need to be given an alternative that provides the same learning objectives. In my old department we decided that the learning objective was not met until the study was completed. Therefore, no class credit was given until the study, or an alternative assignment, was completed. Again, whatever you decide, check with your IRB.
Upvotes: 1
|
2017/03/30
| 899
| 3,563
|
<issue_start>username_0: I am an undergraduate student. This semester, I'm taking a 1.5 credit course, which is a 1.5 hour lecture once a week from a rotation of different speakers. There will be a final at the end of the semester.
The course coordinator, Mack (Not His Real Name™), cannot come to every lecture. He therefore asked me to give him my notes, both for his review and as a basis for the final exam. Because my notes will be a part of the test, he asked that I 1) do not tell anyone that my notes will be a basis for the test and 2) do not share my notes with anyone else.
A number of students noticed that I take notes during lecture, and have approached me to ask for a copy of my notes. They're not [constantly asking for notes](https://academia.stackexchange.com/q/78650/33256); in fact, at least one of these students (who I know to be a very hard worker in school) *does* take his own notes, but wants to compare notes just in case he missed something (this is something that he does in every course).
I'm having some difficulty convincing them that they don't want my notes. I told them that I write in partial shorthand (which is true), and that the notes are just rough notes (also true), but some students say they want my notes anyway. I suppose I could always "forget" to give them my notes, but this is neither foolproof nor polite.
**How can I politely refuse to share my notes with my classmates in this situation?**
---
I am aware of the [XY problem](https://academia.stackexchange.com/q/78650/33256) -- I can imagine a solution to the situation where I share the notes anyway, or ask Mack for permission to share my notes. However, I'd like to help Mack (his alternative is to listen to 15 x 1.5 hour recordings), while still remaining on good terms with my classmates, and not ruining the final exam.<issue_comment>username_1: Just say, "Sorry, but I'm not comfortable with that." If they pressure you or ask for reasons, focus on how rude it is for them to do that. Be polite and gentle, but if it comes to it there is no harm in saying "What is wrong with you? I said no. Grow up and leave me alone." As for getting along with your classmates, refusing won't seriously damage any relationship with anyone that truly matters. Decent people will not care, because decent people do not pressure others into things they do not wish to do.
Upvotes: 0 <issue_comment>username_2: I think Mack put you in a very difficult situation. You should not have agreed to his terms, and should ask to change them now.
There is no problem sharing your notes with Mack, though he should probably be getting notes from several students.
You should either have agreed not to share your notes with others, or agreed to keep confidential the fact that they will be used in setting the test, but not both. If you were freely sharing your notes, nobody would pay any special attention to them. If you could say "Mack asked me not to share them so that he can use them in setting his test" people would understand the non-sharing.
There are other ways Mack could have constructed his test. For example, he could have asked each lecturer for a question or two with marking criteria.
Upvotes: 4 <issue_comment>username_3: Based on your agreement with Mack, you can't share your notes, and you can't tell people why you can't share your notes. But you can say: "I can't share my notes because I agreed not to. I'm sorry." It's certainly strange, but saying you agreed not to talk about it gives a definitive reason you can't share the notes.
Upvotes: 1
|
2017/03/30
| 445
| 1,978
|
<issue_start>username_0: I have two studies that are not related to each other but are in the same domain (Social Media Analysis), and I have already submitted one of them to a conference with myself as the main author.
Can I submit the second paper, on which I am also the main author, to the same conference event? I have read the conference regulations but did not find any information related to this issue.
EDIT: I also tried to contact them, but after two weeks there is still no answer, and the deadline is within 2 days. Also, if it is OK to submit two papers, will I get two time slots for the presentations, or one slot to manage both?<issue_comment>username_1: If you haven't found anything in the conference regulations, the best solution is to contact the organizers directly and ask.
Since nothing is specified against it directly, you can *try* and submit two papers. However, keep in mind that they may nonetheless decide to disqualify the second submission (for example, if they simply did not think to add this rule to the submission guidelines, which can happen).
Upvotes: 2 <issue_comment>username_2: **Yes,** unless the conference regulation specifies a different policy, it's fully OK to submit multiple papers as main author to the same conference.
Upvotes: 3 <issue_comment>username_3: For many conferences (though, I don't know if it is the norm in your discipline), it is common to submit two papers as a first author/presenter, unless the conference states that it prohibits this. In your case, it does not. When you state that you are the "main" author, there is a difference if you are the sole author or the first author. Some conferences do not want someone to submit multiple papers as a presenter, because it can create difficulties in scheduling the sessions. In these cases, they may allow multiple submissions as a first author as long as it indicates that a different person is presenting each paper.
Good luck!
Upvotes: 3 [selected_answer]
|
2017/03/30
| 521
| 2,117
|
<issue_start>username_0: I am doing a B.Sc. degree project in computer science at KTH Royal Institute of Technology. The work included a fair amount of source code written in C and NuSMV, and the report is somewhat technical even for a CS report. We had a peer review this week, where I was advised to:
* put the problem statement up front and clearly
* adjust the language for a non-technical audience
* put definitions and acronyms in a glossary
* use the shared latex template
* remove some duplicated text
* update the illustrations with text and numbering
* scale down the images so that they look better
* maybe put in an appendix (with the code)
Do you agree that the source code should be in an appendix, or is it better to put it inline, or perhaps not include the code at all?
If you want to, you can [read the draft from this link](https://docs.google.com/document/d/1ow75ewth9CH8EsliX6kcRIZKcVm_vNdwCGc_34jISsg/edit?usp=sharing).<issue_comment>username_1: You should ask your advisor - and I find suggestion #2 bizarre, to say the least.
Upvotes: 1 <issue_comment>username_2: Putting it into an appendix (maybe not in full, but the core parts of it) seems the best solution. This is not only a sort of "proof of work" but also gives examiners an opportunity to assess whether the results generated by the code are indeed correct, so it helps with verifiability. This is particularly important for code that generates the numeric information from which you draw conclusions.
There are only very few cases where code should be put inline in the main body of the thesis, e.g. when you invented a new language or a new library and want to show how much easier / more readable / more concise something can be formulated in your language / library, and when this was indeed the main purpose of the work.
Upvotes: 2 <issue_comment>username_3: You should not include the source code at all; nobody will ever read it.
If the program you developed is indeed useful for other people (which is very unlikely for an undergrad project), put it on github/bitbucket etc and provide a link in your report.
Upvotes: 2 [selected_answer]
|
2017/03/30
| 1,701
| 6,543
|
<issue_start>username_0: I have read some posts regarding academic salaries such as
[In which countries are academic salaries published?](https://academia.stackexchange.com/q/26487/452) but none about retirement pension plans.
What is a typical retirement pension plan for professors in the United States?
I assume there might be some significant differences between public and private universities. Research/studies/surveys that tried to quantify this on a larger sample are welcome.<issue_comment>username_1: AFAIK, all pensions are calculated basically the same way, and I'll use Alabama, Tennessee, and Georgia as examples, since I'm familiar with them. In the public system, these funds are often pooled between all universities and colleges, and often with the K-12 system or all public employees.
You have three variables: the eligible salary, the number of years worked, and the multiplier.
The eligible salary is calculated through a variety of methods, but tends to be the average of the highest X years of work over the last Y years. For example, in Alabama, it's the average of the highest 3 years of the last 10 years. In Tennessee, it's the average of the highest consecutive 5 years. In Georgia, it's the average of the two highest consecutive years.
The multiplier is probably the single most important variable, though. My father, for instance, worked in Alabama, with a 2.0125 multiplier. Let's say his eligible salary was 100k, to make the math easy. If he retired with 10 years of service, he would receive ~20k per year; with 20 years of service, ~40k, etc. For me in Tennessee, my multiplier is only 1.5. If when I retire I have an eligible salary of 100k, then with 10 years of service I'll only make 15k; with 40 years, I'll make 60k. (Yeah, my dad got a *much* better deal, and that's before we talk about the [DROP program](http://www.rsa-al.gov/index.php/members/trs/drop/)). Georgia, for reference, has a 2.0 multiplier.
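To make that arithmetic concrete, here is a minimal sketch (the function name and the 100k salary are mine; the multipliers are the ones quoted above):

```python
def annual_pension(eligible_salary, years_of_service, multiplier_pct):
    """Defined-benefit payout: salary x years x multiplier (in percent)."""
    return eligible_salary * years_of_service * multiplier_pct / 100

# Alabama (2.0125 multiplier) vs. Tennessee (1.5), 100k eligible salary
print(annual_pension(100_000, 10, 2.0125))  # ~20k per year
print(annual_pension(100_000, 20, 2.0125))  # ~40k per year
print(annual_pension(100_000, 10, 1.5))     # 15k per year
print(annual_pension(100_000, 40, 1.5))     # 60k per year
```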
A minor thing to consider is the time to vesting (Tennessee is only 5 years, but Alabama and Georgia are 10), but over an entire career that doesn't make much of a difference to the overall payout.
Because of the ways eligible salaries are calculated (using the highest two years really helps folks in departments with rotating chairs), I'm not sure how well one can generalize about the pensions. One thing for sure is that they tend to be quite decent, since you also collect Social Security benefits. Anecdotally, for instance, between his pension with 30 years worked, Social Security, and differing retirement tax rules, my dad takes home more now than when he worked. Plus, in academia we tend to be able to work well into old age, which really boosts our pension income.
Based on what I've read, private pension funds work the same way, just with ever so slightly different variables.
Upvotes: 4 <issue_comment>username_2: In some states, there are public retirement/pension plans that the university will contribute to on your behalf as a state employee, but there are differences across states. For instance, I used to work at a university in FL and selecting the teacher's retirement fund in Florida was optional. In Alabama, where I am now, it is mandatory.
If a university is a non-profit, it will often offer a 403b or 457b plan, but contributions from the university vary. These work similarly to a 401k plan at a corporation.
Just remember that state governments control a lot of the retirement plan options in many states, and these can change after you take a job. When I took my job in FL, the university contributed 13% of my salary to my 403b without a match requirement. The state government changed a year after I started the job, reduced the employer contribution AND required a mandatory match (talk about bait and switch!). I'm in Alabama now and participate in the mandatory state retirement plan (which I must contribute to), but also take the option of an additional 403b. So, it's all different, but you often have multiple options at a single university.
Upvotes: 3 <issue_comment>username_3: As mentioned in some of the other answers, this depends on the state you're in. In Texas, when I started as a postdoc in 2002, I had the once-in-a-lifetime option of either going with the state Teacher's Retirement System, which is a defined benefit plan similar to the ones discussed by @guifa in one of the other answers, or going with a 401(k) into which I would pay 6.75% of my salary and the state would match with 6.5% (now 6.65%). Given that the terms of the state-run system are subject to what the state legislature feels like doing, I went with the latter. It also has the great advantage that you do not lose your benefits if you later switch to a university in a different state.
Colorado has a similar system, except that there was no choice for university professors to join a state-run system -- only the 401(k). In Colorado, you contribute 8% and the state 11%, which sounds more generous than TX, but the situation is in fact more complicated because, for difficult historical reasons, Colorado does not participate in the Social Security system for its state employees, and so I will not get Social Security benefits for my time here.
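For what it's worth, a small sketch of how those defined-contribution inflows compare (the salary figure is hypothetical; the percentages are the ones stated above):

```python
def annual_contribution(salary, employee_pct, employer_pct):
    """Total yearly inflow into the defined-contribution plan."""
    return salary * (employee_pct + employer_pct) / 100

salary = 100_000  # hypothetical
print(annual_contribution(salary, 6.75, 6.65))  # Texas:    13,400
print(annual_contribution(salary, 8.0, 11.0))   # Colorado: 19,000
```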
In both cases, my universities offered the option of contributing to a 403(b) in addition to the 401(k), but the university did not match anything there.
Upvotes: 2 <issue_comment>username_4: In the U.S., preparing for retirement is often quite different now from how it was 30 or more years ago.
Now, many workplaces, including a lot of universities, have pushed the responsibility to the employee.
I will describe my spouse's plan, in a large university, where my spouse started working in 2000.
The university contributes 10% of my spouse's salary to a retirement investment. A nonprofit financial service called TIAA (Teachers Insurance and Annuity Association) receives the funds every pay period and invests the money. But it is up to the employee to instruct TIAA how to invest it. TIAA has a bunch of different options to choose from, which are kind of like mutual funds -- different mixes of types of investments. The basic ingredients are stocks, bonds and real estate funds.
My spouse can put additional money into the retirement investment if desired. Up to 10% of the annual salary can be invested per year as pre-tax dollars (i.e. not subject to income tax). That's the plan known as a 403(b).
If you want to learn more about how it works, you can look at the TIAA website, and you can also call them up.
Upvotes: 2
|
2017/03/30
| 608
| 2,588
|
<issue_start>username_0: I would like to say thank you to a professor for a conversation we had when I was not in a good place. However, this was over two years ago now. I have also not been in touch since, so I'm not sure he will remember.
Lastly, I will need a reference at some point in the future, and I am going to ask this professor.
I am worried that saying thank you now will look like I am just getting in touch because of the reference (which I will possibly have to ask for in already a month or two).
Could it be interpreted this way, and should I maybe just leave it now? I am worried it will look very fake, as if I am only emailing now to secure that reference. At the same time, I would like to say thank you, which I was not ready to do before.<issue_comment>username_1: How about putting it the other way around? Ask for a reference first, and later add something like "I'd also like to take this opportunity to thank you for ..." and explain your situation. Then it at least seems less suspicious, since you are asking for something first.
Alternatively, ask for the reference in one email, and if you get a reply, thank him/her in a second email.
Upvotes: 2 <issue_comment>username_2: Since you mentioned that you don't immediately need the recommendation letter I would recommend the following:
* Get in touch with the professor in a casual way: perhaps you saw an article online that reminded you of him/her and that you want to share, or perhaps you just want to catch up a little. Don't mention the recommendation letter, but do mention your future plans (let's assume you need a recommendation letter for graduate school).
* Use this opportunity to reminisce about the time you talked with him/her, and use this as a platform for your thanks.
Then, if the professor's answer is positive in tone (and perhaps even inquisitive regarding your graduate school plans) you might use this opportunity to ask if it would be possible to get a recommendation letter. If you sense brevity or negativity, it might not be a good idea to ask for the letter.
Personal Experience:
I found myself in a similar situation not so long ago. I needed a recommendation letter from a manager for whom I had previously worked. Luckily it was around January, so I used the opportunity to send my New Year's wishes. Unfortunately, he was on vacation and replied a month later. At that point I didn't need the letter anymore, but his answer gave me the reassuring feeling that in the future it wouldn't be awkward to come back to him and ask for said letter.
Upvotes: 3 [selected_answer]
|
2017/03/31
| 1,349
| 5,588
|
<issue_start>username_0: Many MOOCs have a system where enrolled students get points for submitting homework and for grading a fixed number of their peers' homework (on a MOOC I took earlier, each student had to evaluate at least three other randomly selected submissions).
I'm wondering if anyone has used this approach in a physical classroom with handwritten homework submissions. Apparently [this is legal in the US](https://academia.stackexchange.com/questions/65761/one-student-grading-other-students-assignments-in-the-same-course), but I'd like to know if anyone has first hand experience with managing the logistics.
I'd keep a dedicated 'tutorial' hour for this, where I could solve and discuss the homework problems on the board while students grade them. Perhaps the most selfish benefit is that I am spared the drudgery of grading and scoring a stack of homework.<issue_comment>username_1: This is not illegal in the UK, but it is not a usual practice. Students pay their fees on the assumption that they will be provided with high-quality feedback that helps them learn and improve their work. If the students end up solving problems provided by some MOOC (which they can access outside of the university) and marking each other's work (which can be arranged without a university), then the role of the university reduces to merely stamping their exit awards, and the value of the students' degrees would be compromised.
Upvotes: -1 <issue_comment>username_2: Peer evaluation (offline) is common in many universities and schools.
It is beneficial for students to see what their peers have written and to learn from their own and their peers' mistakes and strengths.
However, a well-formulated evaluation matrix would be necessary for objective evaluation.
In my university, short papers are almost always evaluated by the students themselves. There is blinding, and we have even attempted three independent, blinded evaluations, awarding the maximum mark obtained if and only if all three evaluations were within a 15% difference range.
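For illustration, a minimal sketch of such an aggregation rule (the exact threshold handling and the escalation step are assumptions about one reasonable reading of the scheme):

```python
def aggregate_marks(marks, tolerance=0.15):
    """Award the maximum of the blinded marks only if all of them
    agree within the tolerance; otherwise flag for instructor review."""
    lo, hi = min(marks), max(marks)
    if hi > 0 and (hi - lo) / hi <= tolerance:
        return max(marks)
    return None  # disagreement: escalate to the instructor

print(aggregate_marks([78, 82, 85]))  # 85 (evaluations agree)
print(aggregate_marks([55, 80, 85]))  # None (too far apart)
```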
Hope this helps.
Upvotes: 2 <issue_comment>username_3: Avoiding a stack of scoring is a completely valid reason to do this.
I have about 80 students this semester. If I spend 10 minutes evaluating each student in a week then that's 13.3 hours per week. If you consider that I'm physically in class for 9 hours per week and reviewing and preparing for those classes is another 9 hours per week, then I'm already at 31 hours before you start thinking about other obligations.
And then you sit back and realize that a 10 minute evaluation isn't time for anything more than marking problems right or wrong and scribbling a few pointers for next time.
Upvotes: -1 <issue_comment>username_4: (In my field, mathematics, mostly graduate-level, in the U.S., at an R1 university.) Although this may be marginally legal, and does have some constructive features, it also has many failings. An even more entrenched situation is that slightly-more-advanced grad students (or, worse, those whose command of English is too poor for them to be in a classroom as Teaching Assistants!) are assigned as "graders" for homework and exams in graduate-level courses... indeed, to "spare" the instructor (whose raises and status are based on "research", not "teaching") the burden.
After some decades of wanting to believe that this system could work, I have eventually come to see that it does not. Thus, if nothing else, to show that I put my effort where my rants are, I give frequent feedback, comment on homework, and return it promptly; ditto exams. Yes, I do also discuss (an anonymized form of) foibles I see in homework and exams. In particular, this system (while much more effort) is nearly-infinitely more informative about the actual state of the grad students' minds than if I merely had numbers reported to me. (And surely vastly more useful to students than much-belated or cryptic grading schemes.)
Also, again based on some decades of observation, the critiques that students could/do give each other's work (even with the best of intentions) are ... duh... quite naive. At best, literal "logical correctness" (already perhaps challenging) is not usually the same as "conceptual optimality", which is really the high-end goal for people who want to be professional mathematicians. That is, it is possible to be "correct" but, nevertheless, so suboptimal as to be "wrong" by any utilitarian criterion.
So, yes, it is good to rhetorically ask students to appraise things their peers have written, but with me (e.g.) as intermediary, not only "blinding" the consideration, but also *guiding* the appraisal in a way that not only accomplishes the immediate task, but is robust and re-usable, and perhaps not completely-obviously-so to the novice.
A related issue is that "prelim problem solutions" are often put on-line by grad students with good intentions, but in many cases (and not easy for beginners to distinguish) these solutions are very awkward and unconvincing... but, ironically, intelligible to other beginners. Meanwhile, often, what we (=faculty) want people to learn is how to do a more sophisticated thing, as opposed to muddling by with a crappy technique.
(And, another irony, often the most assiduous people are the ones who accidentally believe that they can or should resolve all issues with their bare hands, rather than benefitting from a few centuries of experience of many quite able people. And can often-enough muddle through, and feel exhausted, thus reasoning that this must have been a virtuous action.)
Upvotes: 2
|
2017/03/31
| 2,822
| 10,862
|
<issue_start>username_0: Suppose I review someone's paper anonymously, the paper gets accepted, and a year or two later we meet e.g. in a social event and he/she asks me "did you review my paper?". What should I answer? There are several sub-questions here:
1. Suppose the review was a good one, and the paper eventually got accepted, so I do not mind revealing that I was the reviewer. Is there any rule/norm prohibiting me from telling the truth?
2. Suppose the review was not so good, so I do not want to reveal. What can I answer? If I just say "I am not allowed to tell you", this immediately reveals me... On the other hand, I do not want to lie. What options do I have?<issue_comment>username_1: This is a good question...
In general terms, you should not tell him/her: peer review, unless open (e.g. in the [BMJ](http://www.bmj.com/)), is by definition confidential.
If you are willing to, but wish to remain on the safe side, check the peer review policy of the journal for which you reviewed the manuscript. If there is an explicit embargo (e.g. 6 months after publication), then you can probably tell him/her.
Consider also that in the era of Kudos and Publons it will become easier and easier to identify peer reviewers.
Upvotes: 1 <issue_comment>username_2: I think it is inappropriate of someone to ask you if you reviewed their paper. As you point out, if you are in a position where you would only confirm if you were positive, and declining to respond implies a negative review, you are essentially forced to confirm unless you explicitly deny being a reviewer. In fact just having your jaw drop as the question is asked is probably confirmation enough. So I think the right answer to the question is an immediate *Whether I reviewed that or not is confidential, and that's an inappropriate question*.
That being said, I'm not aware of rules that would prevent you from acknowledging the review. As I mentioned in a comment, for conferences I've seen people explicitly unblind themselves in reviews by choice. If you wanted to acknowledge it, I think you could. I've occasionally admitted that to someone in the past, but not because the author asked.
Upvotes: 5 <issue_comment>username_3: You could say something like
>
> "I am not in the habit of telling people whether I did or did not review their papers, sorry"
>
>
>
or
>
> "I don't feel comfortable answering this question."
>
>
>
Or you could defuse the question with a humorous answer, e.g.
>
> "I would tell you but then I'd have to kill you"
>
>
>
or
>
> "I don't remember, I always take an amnesia pill immediately after reviewing a paper"
>
>
>
or some similar kind of obviously nonserious smart-assery. None of these answers provide any useful information to the asker, and all of them convey some level of disapproval on your part at being asked, making it unlikely that the asker would press the case any further.
Upvotes: 8 [selected_answer]<issue_comment>username_4: >
> 1. Suppose the review was a good one, and the paper eventualy got accepted, so I do not mind telling that I was the reviewer. Is there any rule/norm prohibiting me from telling the truth?
>
>
>
I'm not aware of any such rule. You can tell them if you want but I would advise against it. It just feels weird: is the person trying to set up some kind of coterie of friends who write positive reviews for each other?
>
> 2. Suppose the review was not so good, so I do not want to reveal. What can I answer? If I just say "I am not allowed to tell you", this immediately reveals me... On the other hand, I do not want to lie. What options do I have?
>
>
>
The best policy for answering questions where the possible answers are "No" and "Yes but I'm not allowed to tell you that, so I'm going to refuse to answer, but then you'll know I mean yes" is to just refuse to answer in all cases. Security agencies call this policy [neither confirm nor deny](https://en.wikipedia.org/wiki/Glomar_response) (NCND). You don't need to explain why you're refusing to answer; just be polite and say that you never answer questions about what papers you have and have not reviewed. If the person doesn't accept this, they are being rude and you don't have to keep talking to them. If you feel you have to keep talking to them (e.g., they're a senior professor in your field and you don't feel comfortable just walking away), you can explain about NCND but, honestly, they already know that and they're being a jerk.
Upvotes: 4 <issue_comment>username_5: There's a principle I go by now that took me a long time to learn, mainly because it involved a lot of unlearning things I'd internalized since childhood:
**It's always okay to lie when someone asks you an inappropriate question.** And usually it's the best answer, especially if you have it prepared, since the whole point of the inappropriate question is usually to extract information from you involuntarily through your reaction.
As such the best answer is just "no", regardless of whether you did. If you feel comfortable with the power dynamics, you can follow up with something like "You know, really, you shouldn't go around asking people that. If they did happen to be the reviewer, it would put them in a really bad position."
Upvotes: 5 <issue_comment>username_6: There's a Warren Buffett quote (in The Essays of Warren Buffett) that I think is a good answer:
>
> If we deny those reports but say "no comment" on other occasions, the no-comments become confirmation.
>
>
>
In other words, tell them that your **policy** is to neither confirm nor deny.
Upvotes: 3 <issue_comment>username_7: I am aware of at least one paper where a referee came out of cover (after the review process, of course) and was explicitly mentioned in [a later paper](https://scholar.google.com/scholar?hl=en&q=Market%20efficiency%20and%20the%20long-memory%20of%20supply%20and%20demand%3A%20Is%20price%20impact%20variable%20and%20permanent%20or%20fixed%20and%20temporary%3F&btnG=&as_sdt=1%2C5&as_sdtp=):
>
> X and Y thank Z, who as the anonymous referee was kind enough to
> point out the error (and later became non-anonymous).
>
>
>
so it is certainly fine to answer truthfully that yes, you did review - but only if you wish, of course (and most likely if you were helpful and the authors of the paper responsive).
Upvotes: 2 <issue_comment>username_8: How awkward! I once had someone tell me that he was the reviewer of a grant I submitted, which was rejected the year he reviewed it and accepted the following year. Although he told me how happy he was that I resubmitted it, it was very awkward.
Is it possible that the person asking the question is junior and doesn't know that this is inappropriate? I think that a polite response about peer review is appropriate, such as: "In the spirit of peer review, I don't like to discuss whether or not I have reviewed specific papers or grants. However, is there something about your study that you would like to discuss? If so, I'd be happy to talk with you about it!"
This way you can avoid the question but demonstrate interest in their work. If the person does not know that their question is inappropriate, you also politely inform them that they should not ask people who reviewed their work.
Upvotes: 2 <issue_comment>username_9: I suggest using deflection - not answering directly. You could reply to the question with something like....
>
> I tend to find that if a reviewer wants to be identified they request one or more of their papers to be cited, which makes their identity known. If your reviewer didn't do this or come right out and tell you then they probably want to remain anonymous and I'm not sure I want to help you figure out who it was. Did you get a particularly nasty review?
>
>
>
So I am suggesting
* first, that you tell the person that if their reviewer did not reveal their identity then they probably want to remain anonymous.
* secondly, that you say you don't want to help them figure out who it was because if you say 'no, not me' then they will have a smaller group of people to hunt through
* thirdly, you ask if they got a really nasty review and if that is why they are asking.... this may lead on to a more general discussion about reviewing
Please note
-----------
I do *not* reveal my identity in reviews by asking for my papers to be cited - but on several occasions I have seen reviews where people have asked for their papers to be cited.
Upvotes: 0 <issue_comment>username_10: "I believe the referee process works best when referees remain anonymous; as a result I neither ask nor answer this type of question."
Upvotes: 2 <issue_comment>username_11: Perhaps you should follow the example of <NAME> (known as the 'R' in the famous FLRW, or Friedmann-Lemaître-Robertson-Walker metric used in physical cosmology.)
He was the referee of the famous Einstein-Rosen paper, which was rejected by Physical Review, prompting Einstein never to publish in Physical Review again.
Einstein ignored the referee report, but months later, it seems, Robertson had a chance to talk to Einstein and may have helped convince him of the error of his ways. However, as far as we know, he never revealed to Einstein that he was the anonymous referee for Physical Review. It was not until 2005 I believe, long after the death of all participants, that Physical Review chose to disclose the referee's identity (<http://physicstoday.scitation.org/doi/full/10.1063/1.2117822>).
Upvotes: 3 <issue_comment>username_12: Why don't you say something like "a lot of people ask me that, and I don't tell anyone whether or not I did". And if you read their paper and liked it or had questions about it, you might as well tell them that, since it's an easy segue into a more productive conversation.
Upvotes: 0 <issue_comment>username_13: I suppose that in a year or two you review more papers than just one. Also, I suppose it must have been a really good / really bad paper for you to remember it for a year or two.
>
> Honestly, I don't remember what I reviewed year ago.
>
>
>
If you **want** to discuss the article, you can ask what article they are talking about and phrase your answers as "If I were the reviewer, I'd suggest..."
Upvotes: 1
|
2017/03/31
| 563
| 2,485
|
<issue_start>username_0: [Twitter announced](https://twitter.com/Twitter/status/847479110616047616) it would begin distancing itself from the requirement that all tweets could only contain 140 characters by no longer counting some things – like media attachments or @ replies – towards the character count.
I am doing research in that area of social media analysis, and I have submitted a research paper to a conference in which I have mentioned in the introduction a little about the Twitter character count restriction (limited to 140 characters), which has since been updated to let users write more than that.
* What happens now when the reviewer/referee reads my input in the paper and sees that the regulation has been updated? Will they ask me to update it or simply accept or reject it?<issue_comment>username_1: Similar issues arise frequently in fields that change quickly due to political, legal or technological developments. In general, these changes don't affect the veracity of the study; they only affect its scope. So usually reviewers will ask you to address the change in the concluding section, perhaps through an informed speculation on how the change matters in the future. In other cases, where the change does not matter greatly (perhaps this is the case here), they will simply ask you to acknowledge it in a footnote or the like -- if they recognize it at all.
Upvotes: 5 <issue_comment>username_2: Technology and technology-based applications keep changing, and frequently so.
Research methodology requires that you document these changes as they happen and annotate all references with dates. For example, you may add information that acts as a sort of disclaimer:
>
> N.B: As of March 30, 2017, Twitter discontinued its 140 characters
> limit for replies. At the time of submission of this paper, the 140 Characters limit was still in force.
>
>
>
Upvotes: 7 [selected_answer]<issue_comment>username_3: Does the character count affect the importance or relevance of your findings? You cannot help what the reviewer thinks, but a good reviewer should consider that although the technology has changed, the meaningfulness of your findings should be the same and potentially worthy of presentation at the conference. Also, for many conferences papers are distributed to volunteer reviewers and it is possible that whoever your paper is assigned to may not be all that familiar with Twitter anyway. Good luck!
Upvotes: 2
|
2017/03/31
| 924
| 3,872
|
<issue_start>username_0: Is it true that if you get a PhD from an institution, it becomes very hard to obtain a tenure-track faculty position at that same institution?<issue_comment>username_1: The big problem here is that you were trained by your adviser, and your adviser is already a tenure-track professor at your own institution. It's very likely that, from a departmental point of view, you're just going to replicate your adviser's expertise.
Typically a department sees a new hire as a chance to incorporate new skills or new ideas so they can broaden what kinds of courses and research they can do.
I personally know of one person who was hired by their own PhD institution, and it just so happened that he was graduating as his adviser was leaving, so he took his adviser's role in the department.
Upvotes: 3 <issue_comment>username_2: In theory, anyway, academia is very much like human biology: inbreeding and incest ought to be avoided because drawing from such a small pool of genes and backgrounds is likely to cause problems. Mixing it up benefits any area of inquiry: people whose training was from a diversity of institutions/supervisors/influences bring in new ideas and frameworks, and can keep a line of research from unwittingly falling into a rut when it comes to the lines of analysis being used.
Sometimes Ph.D. graduates of an institution are brought in out of necessity. If a department wants to hire a second faculty member in a very, very small subfield, for instance, the pool of applicants is likely to include a lot of Ph.D. graduates who worked under the first. That's pretty inevitable. This is, to extend the analogy, as if the human population has been reduced to 10 or 20 people and most of them are relatives. If there's no other choice, marrying cousins and siblings has to be done.
But those cases are the exception, in my experience. Within my field there are 3 or 4 departments in North America that have become notorious between 2009 and the present day for heavily favoring their own Ph.D. graduates (to the point where if the shortlist gets leaked, you can predict who's going to get the position from that and the job talks have only a small effect). It makes sense that departments want to look out for their own, and it also makes sense that if a department already has a good sense of someone being impressive, that's difficult for an outsider to challenge. But to return to the analogy, it's like marrying a pair of siblings to each other because they're both looking for a partner and already know each other and get along. In both cases, people will talk. In both cases, a reputation will be earned that is not good. The last time one of the 3 or 4 departments in question was looking for a new tenure-track faculty member and someone from the department anonymously shared the shortlist with a bunch of mailing lists, one of my colleagues made an *extremely* cynical, almost mean-spirited prediction that made me shake my head at the time and then ended up being entirely accurate.
So yes, it is absolutely possible for someone to end up working at the institution where they did their Ph.D.; but unless there are extenuating circumstances, people are likely to think less of the department in question for hiring its own. It benefits every department not to default to someone they already like and someone whose ideas are comfortable and familiar to them.
Upvotes: 4 <issue_comment>username_3: It's certainly possible - at my PhD institution, at least two faculty members who got their degree have returned to tenure track positions, and at least one subsequently was tenured. As others have noted however, they both spent time at other institutions as well before returning.
Whether it is *likely* or not will very much depend on the policies of both the university and the department in question.
Upvotes: 1
|
2017/03/31
| 1,076
| 4,483
|
<issue_start>username_0: Assume a student in the United States who has access to Adderall despite not suffering from ADHD. Would it be ethical for them to take the drug during their studies?
Assume that the school is within the United States. When answering, only the academic side of things should be taken into account – I am fully aware that taking a drug without a proper prescription is illegal.<issue_comment>username_1: It is illegal for anyone to take prescription medication not prescribed directly to them by an authorized medical professional. Universities in the US are not shielded from federal, state, and local laws, so if your school found out that you were abusing drugs, it could expel you. You would most likely go through a hearing process where you are offered due process.
If you are looking for the policy at a specific school, just visit the school's website and search for their Alcohol and Drug Policy.
Upvotes: 3 <issue_comment>username_2: Whether it's ethical or not depends on your own system of ethics. But, a few things that come to mind. Full disclosure: I've never used these kinds of drugs and I'm not a medical doctor.
1) Just like steroids are considered unfair sportsmanship, cognition drugs will be viewed as unfair in academics by people who are concerned about fairness.
2) Taking drugs to artificially enhance your performance doesn't help you in the long run. Do you really want to take drugs your entire life to maintain an artificial level of performance? Things like better study skills, time management, and prioritization will enhance your performance more effectively and will help you through your whole life.
3) Are you going to feel OK with yourself knowing that you didn't live your life on your own terms? Later in life you're never really going to know whether you could have met your challenges on your own.
4) [The people who don't want you to use adderall](http://americanaddictioncenters.org/adderall/long-term-effects/) will tell you that there are a range of serious health consequences for abusing prescription drugs, including long-term cardiovascular, neurological, and mental issues. Sacrificing your health for success is viewed as unethical by some systems of reason.
5) Finally, these drugs are referred to as "cognition-enhancers", but [the science is still out](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3489818/#__sec7title) on whether or not they really assist in anything beyond rote memorization in people without ADHD. Other researchers have suggested that drug use might lead to deficiencies like decreased creativity.
Upvotes: 3 <issue_comment>username_3: Academia is not a competition, neither is school. It's about education and knowledge. Neither comes and goes with a few pills.
Applying the anti-doping rules from sports is therefore nonsense. What applies are the common workplace anti-drug rules.
Upvotes: 5 [selected_answer]<issue_comment>username_4: I would point out that a major concern in academics is the *sustainability* of the practice.
I assume you are talking about undergraduate, but most academics are PhDs. This means (in my experience) we have seen numerous students, classmates, peers, and colleagues drop out of school or the industry, hurt themselves physically or mentally, or (very rarely) hurt others. We are always concerned about such things, they make us sad.
Use of Dextroamphetamine or other ADHD medications is not problem free - use of drugs that have side effects such as heart problems, paranoia, mood swings, and a strong tendency towards chemical dependence are not seen as sustainable - particularly for those who do not even have the disorder. Nor are the benefits clear cut (as others have pointed out). Students report themselves as 'feeling' like have learned more, but in actuality, it is not clear they are actually learning more.
As such, these people are engaging in a dangerous practice with unclear benefits. I see that the comparison to performance enhancing drugs are good for reasons other than you anticipated. It is not that such actions are "unsportsmanlike" which is nebulous in a learning environment. Rather, the costs are too great and the benefits so unclear that individuals undertaking such a risk is undesirable. Students hoping such substances work for them (when they don't even have the disorder) are too likely to self-destruct and this is not a great learning environment for those students or their peers.
Upvotes: 1
|
2017/03/31
| 1,916
| 8,211
|
<issue_start>username_0: I was recently asked to review a paper. I feel like there is a conflict of interest as (i) I frequently collaborate with one of the authors, (ii) I discussed the paper with the authors while they were working on it, and (iii) I am currently doing follow-up work (i.e. extending the work of the paper in question).
I told the program committee member who asked me to review the paper all of this. I was surprised when she responded saying that this is not an issue. Specifically, she responded that this is unavoidable as there is a small community of experts who all work together and that this just means I'm the most qualified expert to review the paper.
What should I do? Should I accept the PC member's assertion that this is OK? (I have been honest with her.) Or should I insist that I am not able to give a fair review?<issue_comment>username_1: I've occasionally been in a situation where I was asked to review something I thought would normally be a conflict. I was told it would be ok if I felt I could be unbiased. It sounds like you think you can't be unbiased... so just say no.
Upvotes: 4 <issue_comment>username_2: From an outsider's perspective, both choices are now ethically just. You've declared the potential conflict of interest and been assured that the authority in question doesn't see one ( hopefully you did this through a means which let you keep a record).
So now it pretty much comes down to whether giving the paper a fair review makes you feel uncomfortable. If it does, then get back to the committee **as soon as possible**, and tell them as clearly as possible that you can't do the review. Otherwise, review the paper, making full use of your deep understanding of the context of the work.
If you're wondering whether the committee member's statement is ethical, then consider whether you, as an obviously knowledgeable person in the field, could think of three other possible reviewers. Of course, if you can, then maybe one answer is to say no, but pass on those names instead.
Upvotes: 5 <issue_comment>username_3: **TL;DR:** I think (iii) is your strongest problem.
(i) and (ii) can be mitigated by the editor by having other reviewers, who do not collaborate with the authors, balance your likely deeper expertise; this requires a legitimate judgement call.
However, as for (iii), separating the contribution of your own work from theirs while writing the review cuts into your own flesh. It is similar to being asked to review proposals for a funding call to which one is submitting oneself. One can never win: either one violates ethics by damaging other submissions, or one is generous in giving proper credit to others, thereby endangering one's own submission (killing one's own child). And it looks bad if you were on the committee, no matter what you do.
Your case is probably not quite as extreme, but requires careful handling. If you feel compromised by your current extending work on the topic, you probably should tend to decline.
Upvotes: 3 <issue_comment>username_4: The main requirement with a conflict of interest is that it must be **disclosed to the editors**.
Reviews aren't processed automatically, but by an editor. The editor is supposed to read your review - in particular if you indicate a conflict of interest - and may then decide whether your review is fair, or may need to be ignored.
So just do a very thorough review (in your case, probably a very critical one, because this is also a chance to reconsider the foundations of your own work!) and, in the confidential remarks, repeat your conflicts.
I have also reviewed papers where I had prior collaboration with one of the authors. It turned out the other reviewers liked the paper, but I found flaws in the proofs. My review "won", they had to fix the proof to get it accepted. **Sometimes, a reviewer with a conflict may be the most valuable, because he puts much more effort into the review.** The editors just need to *know* and pay attention to potential bias.
Personally, I would *not* do the review, because you are already working on a follow-up and seem to be quite close to the authors. This most likely makes you too "blind" to the actual problems of the approach. And in the worst case, imagine you find a fatal flaw when doing the review - it may even kill your own current work. Or imagine you find some weaknesses, they need to resubmit, and your own work cannot be published until theirs is, so you end up delaying your own work... by doing a good review, you may hurt your own research. It's probably better to have someone else do the review - then if they miss some weakness, or if they delay publication, neither is your fault. Also, will you be able to remain anonymous to the authors? Imagine you find a weakness, 'kill' their submission, and somehow give away that you reviewed it... will they take that well?
Upvotes: 4 <issue_comment>username_5: I believe that the reason the committee gave you for giving you the go-ahead anyway (small community of experts and therefore any reviewer would potentially have a conflict of interest) is a common one in this situation. However, if you believe that you are unable to give an unbiased review, I suggest recommending a new reviewer to the committee. This will alleviate the burden of the committee to find a new reviewer and they will be more accepting of you declining to review.
Upvotes: 2 <issue_comment>username_6: **TL;DR: You mustn't review the paper; escalate to force the PC to devise an alternative course of action.**
It *would* be unethical for you to review this paper. If your research community is so small that the only person to be found who can review a paper meets your conditions (i) - (iii) (especially (ii) and (iii) ) - then it is not a field in which proper peer review is really possible at all. Or at least, some of the field isn't.
For such (sub)fields, the first thing that has to go is anonymity (which seems to be in effect in your case): the authors would actually know it is you who wrote the review (or perhaps another similarly-interest-conflicted member of the PC). The second thing is the pretense of publication selection by fair peer review. There should be other arrangements: either simply including all papers in a non-reviewed section (since there should really be very few of them), or enacting some kind of collective decision-making process, involving most/all stakeholders within this small community, regarding which papers make the cut.
But I doubt this is actually the case. I'm guessing the PC chair was brushing you off to avoid more hassle - as I'm sure she's swamped with other work. I'm almost certain there are people - perhaps not on the PC - which can review the paper and are less conflicted. The PC needs to reach out to them to ask for help with the review. And a last resort for the PC (again, assuming she doesn't face the same situation for many papers) is to accept this paper without review: The committee accepts, but notifies authors of its failure in the effort to find an impartial reviewer, and the reason for this failure. Such an action should probably be taken after consulting all PC members to form a consensus.
Now, there are other ways to handle this situation, I suppose - those were just a few alternatives; but what the PC chair *cannot* and *must not* do is brush conflicts of interest under the rug. And the same goes for you: her authorization does not justify your actions (it merely partially shifts responsibility for a misdeed).
Upvotes: 1 <issue_comment>username_7: As far as I can tell, there are two potential problems if you review this paper. (1) You might be biased. (2) People might think you're biased (even if you really aren't). The program committee member's saying it's not an issue indicates to me that (2) won't be a problem. But (1) is something only you can decide about. If I were in your position, I would first reflect very carefully on whether I can write a completely fair report. If so, I'd go ahead and review the paper. If not, I'd tell the PC member that I can't review the paper because I'm not confident that I wouldn't be biased.
Upvotes: 1
|
2017/03/31
| 546
| 2,246
|
<issue_start>username_0: I recently verified a degree, and the university reported that the degree was awarded five years after attendance. In other words, the student attended from 1995 to 1998 but the degree was not awarded until 2003, five years later. Why would there be a gap between the time the student attended and the time the Ph.D. was awarded?
(Note that the years above are fictitious; the actual years were at the time of the war in Vietnam, so it is possible that some kind of military service was involved.)<issue_comment>username_1: In many universities, attendance (in the sense of registering for courses and/or paying tuition) is not required of Ph.D. students once they have completed the coursework and residency requirements for the Ph.D. After this period, the Ph.D. student can work independently, with minimal (or no) supervision from the Ph.D. advisor. When the Ph.D. thesis has been written, it is usually submitted first to the advisor, who makes the initial evaluation of whether the research is of adequate quality, and then to the Ph.D. committee (examiners), after which the thesis defense is scheduled. Once the defense has been successfully completed, the thesis (which often undergoes revision based on the Ph.D. committee's comments and questions that arose during the defense) is submitted to the university and the degree is granted at the next commencement/convocation, etc. During this last period, which begins with the thesis being submitted to the Ph.D. advisor, the Ph.D. student might need to be registered at the university and possibly pay some tuition (usually not the entire fee for a full term), but this is usually not regarded as *attending the university* or listed as such on CVs and job applications. In short,
it is quite possible to have a gap between the last date of attendance and the Ph.D. degree award date.
Upvotes: 3 <issue_comment>username_2: To add an alternative answer: until recently, Oxford did not allocate graduation dates; it was up to students to sign up for a graduation ceremony, to graduate either in person or in absentia. For various reasons, there are a small number of people who didn't officially graduate until many years after passing their course.
Upvotes: 1
|
2017/04/01
| 355
| 1,417
|
<issue_start>username_0: I'm not a US citizen, and I was accepted recently for an MS Degree in CS. After asking the Graduate Office, I was told so far they don't have a scholarship for me (only 15 out of the 115 accepted got one).
I'm proceeding with the papers and such, since if someone drops out the scholarship will pass to the next person, but due to currency exchange rates and the high price of US education, I can't go without funding. Is there any way I can prove myself now, or is it better to refuse the offer right now?<issue_comment>username_1: This seems to be a hypothetical question. However, my advice is to look for financial aid if you really believe the degree is worth doing.
Upvotes: -1 <issue_comment>username_2: Ask if you are eligible for a student loan.
Scholarships are often not available for undergraduate and Master's degrees. It is extremely common in the US (and UK) for students to borrow money to pay tuition fees and living costs. My former flatmate didn't pay off his loans until he was 28 years old (his father paid his tuition fees; he only borrowed for living expenses). You can consider this an investment if you believe the degree could bring you a well-paid job in the future.
On the other hand, you don't have to go to the US to do a Master's. Many EU countries, such as Germany, provide free education for foreigners, and many provide scholarships based on household income.
Upvotes: 2
|
2017/04/01
| 503
| 2,275
|
<issue_start>username_0: Despite the fact that most of the world is now connected with high-speed fiber and even phones can record 4k video, it seems to me that there is a large focus on offline events. Universities invite guest speakers, hold seminars, organize conferences, etc, and most of the content is not even posted online afterwards. Often the events themselves don't include anything more than the speakers doing their presentations, answering a few simple questions and leaving shortly afterwards - all of which could be done on Skype or a myriad of other platforms.
So why is there still such a strong focus on physical presence?<issue_comment>username_1: Conferences are not organised for the talks alone. They are more like the academic equivalent of a pool party or a weekend football league. Although on the face of it the purpose might look like spending time in the pool or playing football, people really go there to meet other people.
As a PhD student it took me a while to realise this. Most people attend conferences to get face-to-face time with other people in their field, and often the talks act as adverts for collaboration. You'll notice that there are often conference dinners and mixers designed precisely for meeting others.
Every time I've heard someone talk in earnest about what they liked about a conference, it's always that they managed to catch up with a colleague or similar.
Upvotes: 6 [selected_answer]<issue_comment>username_2: I agree with Nathanael's statement. Part of developing a community of scholars is networking. While online activities open new opportunities for networking (for instance, cheaply bringing together people from different regions on a shared platform) there are significant limitations to networking only online.
As researchers, we conduct a lot of our work in isolation - even when we collaborate with others, the collaboration often consists of internet exchanges rather than in-person contact. The in-person nature of conferences and guest speakers gives greater opportunity for high-quality interactions.
I do agree that it would be helpful if universities and organizations provided more of their organized content online, but only in addition to in-person interactions, not in place of them.
Upvotes: 3
|
2017/04/01
| 1,009
| 3,851
|
<issue_start>username_0: I would like to know what license needs to be chosen in arXiv for a paper that is to be sent to an Elsevier journal. I have read the terms of both arXiv and Elsevier attentively, but I'm still in doubt.
arXiv has the following choices for licenses (from [their website](https://arxiv.org/help/license)):
>
> * grant arXiv.org a non-exclusive and irrevocable license to distribute the article, and certify that he/she has the right to grant
> this license;
> * certify that the work is available under one of the following Creative Commons licenses and that he/she has the right to assign this
> license:
> + Creative Commons Attribution license (CC BY 4.0)
> + Creative Commons Attribution-ShareAlike license (CC BY-SA 4.0)
> + Creative Commons Attribution-Noncommercial-ShareAlike license (CC BY-NC-SA 4.0);
> * or dedicate the work to the public domain by associating the Creative Commons Public Domain Dedication (CC0 1.0) with the
> submission.
>
>
>
On the other hand, Elsevier explicitly allows to publish accepted manuscripts in arXiv, provided it's "updating a preprint", not creating a new one (a really strange requirement in my view, whose motivation I've never understood, but that is [the subject of a different question](https://academia.stackexchange.com/questions/58222/arxiving-accepted-manuscript-after-publication-in-elsevier-ieee-etc)) and provided that this is done with a CC-BY-NC-ND license, which is none of the above. [From their website](https://www.elsevier.com/about/our-business/policies/sharing):
>
> Authors can share their accepted manuscript:
>
>
> Immediately (...) by updating a preprint in arXiv or RePEc with the
> accepted manuscript (...) In all cases accepted manuscripts should
> (...) bear a CC-BY-NC-ND license.
>
>
>
How are those two things reconciled, as arXiv doesn't have the option of using a CC-BY-NC-ND license? There should be a way, as otherwise the mentions of arXiv in the Elsevier policy would be totally pointless - allowing you explicitly to publish on a site with a license that that site won't allow! Is it legal to choose "grant arXiv.org a non-exclusive and irrevocable license" and upload the manuscript with a CC-BY-NC-ND license? Would this mean that the first license is for arXiv and the second license is for the rest of the world? This is the only solution to the puzzle I can think of, but I don't know if it stands (it seems that one needs a degree in law to understand these kind of copyright conditions...)<issue_comment>username_1: The top option, allowing only the arXiv to distribute what has been submitted to them, is more restrictive than any of the Creative Commons licences, as they are designed to allow others to use the work in some form. That one would therefore be within the rules stated by Elsevier.
Upvotes: 3 <issue_comment>username_2: How to add another Creative Commons license to an arXiv paper:
>
> We currently support three of the Creative Commons licenses. If you wish to use a different CC license, then select arXiv's non-exclusive license to distribute in the arXiv submission process and indicate the desired Creative Commons license in the actual article.
>
>
>
From the [arXiv webpage about licenses](https://arxiv.org/help/license).
How to indicate the desired Creative Commons license according to Elsevier:
>
> Elsevier requires authors posting their accepted manuscript to attach a non-commercial Creative Commons user license (CC-BY-NC-ND). This is easy to do. On your accepted manuscript add the following to the title page, copyright information page, or header /footer: © YEAR. Licensed under the Creative Commons [insert license details and URL].
>
>
>
From Elsevier's [sharing webpage](https://www.elsevier.com/about/our-business/policies/sharing).
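For an arXiv submission written in LaTeX, one way to carry that notice is an unnumbered footnote on the title page - a sketch only; the wording follows Elsevier's template, but the placement mechanism is just one option:

```latex
\documentclass{article}
\usepackage{url}
\title{Title of the Accepted Manuscript}
\author{Author Name}
\begin{document}
\maketitle
% Unnumbered footnote carrying the required license notice
\begingroup
\renewcommand\thefootnote{}%
\footnotetext{\copyright\ 2017. This manuscript version is made
  available under the CC-BY-NC-ND 4.0 license:
  \url{http://creativecommons.org/licenses/by-nc-nd/4.0/}.}%
\endgroup
\end{document}
```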
Upvotes: 5 [selected_answer]
|
2017/04/01
| 535
| 2,252
|
<issue_start>username_0: What kind of permissions do I need to obtain from the developer of an app if I am planning to use it in a study?
The study primarily aims to look at how different students learn mobile games differently, and what factors determine their scores.
Since it is relevant to the study, we would also have to analyse the game itself: quantify how the difficulty evolves over time and what amount of randomness is involved in each gameplay (to get something like a *stochastic index*).
1. So, in case this ever goes to publication, would **mentioning the app-name be necessary,** since its characterisation would already be a part of it?
2. Do we need to **obtain permission before even using it**, since it is already a free app in the play store? Doesn't its free status give us permission to use it in a study (at least an unpublished one)?
There are no commercial interests in either of the cases. I am not sure if it is okay to mention the name here too. Sorry if I seem to be over-thinking but I am new to the world of academia! Any guidance is appreciated. (Unsure of the tags too)<issue_comment>username_1: I don't think you need permission, nor do I think there should be any ethical issues in regards to the application. So long as:
* you are not decompiling,
* you are not going to cause it any undue criticism or show the application in a bad light,
* you don't give the appearance of association with the developers if that isn't the case,
* you are not going to compete with the application, and
* you don't breach copyright by using logos etc. unnecessarily, i.e. outside fair use.
The people in the study and their use of the application will however have some potential for ethical issues.
Upvotes: 1 <issue_comment>username_2: 1. Yes, of course you should mention the name of the app. You have to provide details of your methodology. You should also cite the app (I see this commonly done by citing the app store page for it, but maybe your field has a different convention).
2. I have never heard of a researcher obtaining permission to run a study involving particular software. I have led or collaborated on about ten human-subjects studies involving a variety of free and commercial software.
Upvotes: 2
|
2017/04/02
| 1,707
| 7,289
|
<issue_start>username_0: So I am currently a Master's Student in an earth sciences field, looking to get into the PhD program at my university, provided that there is funding. My current adviser likely won't have funding for my PhD. Recently, my adviser told me about an opportunity with a different adviser, working on a topic in life sciences with an influence of earth science, but one which I may be qualified for. If I do this, is it likely that I can find jobs/postdocs in earth science, where my true interest lies, or would I be irreversibly altering my career path for life science?<issue_comment>username_1: It doesn't seem you have a choice, does it? Your situation is
* earth science: no PhD (your current adviser has no funding)
* life science: possible PhD
Unless you have other options, this question is moot: if you want to do a PhD, you have to do life science anyway, even if it means your career path is irreversibly altered.
---
My suggestion is you should apply for more PhD programs in life science, do not limit yourself to your current university. Applying for a PhD takes time, I heard that it takes years in some cases.
---
However, if you end up doing a PhD in earth science, remember that:
* As a Master's student, it maybe too early to decide what you like. People often like what they understand the most, you may like earth science if you dive into it.
* <NAME> had changed subject 5 times before getting the first tenured position.
Upvotes: -1 <issue_comment>username_2: @BarocliniCplusplus , I can totally relate to your problem because I have been faced with a problem very similar to yours myself. When I was in my masters program I had research interests of my own and I managed to prepare an entire research grant proposal even in my masters in a field which was both promising and underexplored. As we use to say in my country "on paper" my proposal seemed like a very unique idea and had great potential to answer fundamental questions about the entire field I graduated in (I'm a graduate in life sciences), if not for all science in general. However, when I had to begin the actual work to build up my PhD thesis and work on the topic I choose I started to face a great deal of problems-both in the "pure" scientific sense as ever more and more research was needed to expand my initially very specific topic and in terms of funding for the experiments I was proposing and finding proper guidance, so in one point in time my initially "top score" expectations turned out to "bog into" an ever more difficult and more difficult challenges-both as scientific problems to solve and the inability to cope with the funding at hand.
It got so hard that I was forced to quit and abandon my research interests because there wasn't anyone in my country who was willing to fund such research even if the prospects were mind-boggling. I wish to give you this as an example as to what can happen if you just "pursue your dreams" and hop into something you know very well and have everything you need to expect you can excel there. Sometimes life just doesn't work like that no matter how good you are and how great are your ideas. If not for the lack of ideas people can stop you for the lack of funds if they dismiss your qualifications. And this happens all too easily. For example, I know for at least two other students in different countries working in my field or similar to it which were "cut down" not for the lack of ideas or prospects but simply because their peers felt it would be better to spend the cash somewhere else.
What I would suggest you is to find some good program you can get in even if it's only "remotely" related to your interest and to get a hold on some field even if it isn't related to your ideas per se. Time will pass and you would come to understand it's the best option right now. If you **risk** it all because you know your field (I assume you have already reach an expert's status in it) I can tell you from personal experience the chances are higher to drop out, than to actually succeed. And if you drop out you might have **very hard time getting back in**. It was almost 7 years for me and I still am not able to get back on the research path. I decided to answer you like that because I actually have some personal experience with trying to pursue your own agenda because you have simply grown so knowledgeable that you may know your field better even than your professors, but, please, **believe me**, knowledge doesn't always "count in" when the actual research in the PhD starts and there are zillions things that can go wrong any time no matter how much you know it. It's just better to be a part of a team of someone who has more experience than you do and to hope his/her advice will get you through the program.
So, my advice derived from personal experience is that you should enroll in the program you know has funding and prospects even is it isn't what you have hoped for rather than getting "stucked" in a field you like but obviously has not enough funds. Even if you turn out to be live scientists, rather than earth scientists it would be better than being no scientist at all! I would advice you that altering your career path is better than not having one at all which could happen if you try to "pressure" (if you even can) the people around you to pursue what you are really good at. Please, take this personal story into consideration.
P.S. In the end I would like to cite the popular saying: "If life offers you lemons, make lemonade." I know it may not seem to be very good choice as you see it now but it's better than nothing which may be what you will get if you try to pursue your own research interests. But, ultimately, it's **your life**-take a deep breath in and think **thoroughly** how you want to end up living it risking for a PhD in earth science or getting the "safe heaven" of life science. Nobody but **YOU** knows the answer. I think this is the best advice anyone can give you.
Upvotes: -1 <issue_comment>username_3: It depends on the exact topic and what guidance the new research group will offer you. There are plenty of opportunities to develop expertise in life sciences during a PhD. Your options afterwards may not be as limited as you perceive. There is a lot of demand for collaborative and interdisciplinary researchers both in academia and industry.
I'm doing an interdisciplinary PhD between stats and genetics. While on paper you do have to enrol in one course of study and will be assessed in it, in practice you can lean on the other field. For instance, my research questions are biological, but the methods I use differ from those of lab-trained biologists.
You ought to still have room to develop your earth science skills in this way, particularly if your new research group collaborates with people in your field who can advise you on those aspects. You should have a wider advisory committee and peers to assist you with work in both disciplines. A PhD is also an independent project, especially near the end; you should be able to build up confidence and negotiate using some of your current skills if they tie in with the research topic cohesively.
In short: consider it; it may not be a one-way track to biology. It's also not too late to apply to other labs.
Upvotes: 1
|
2017/04/02
| 2,824
| 10,682
|
<issue_start>username_0: In the US, tenure-track (TT) positions seem a common and natural step in the progression towards an academic career, and quite "stable" in terms of what one might expect from them, in that if nothing goes too wrong, after the fixed X years in a TT position one can reasonably expect to get tenure (I've heard horror stories of denied tenure from certain R1 universities, but I'm more concerned with the generic case here).
Now, I'm in the early stages of my academic career (I'm soon to start my first postdoc) and, more relevantly for the purposes of this question, I'm based in Europe and, unless I have absolutely no other options, I intend to remain in Europe in the future. My understanding of the "natural" steps to take after the postdoc phase in Europe is, admittedly, pretty muddy and quite scattered.
My question(s) is (are):
>
> Is there an equivalent (step-wise in the progression towards an academic career) in Europe of the tenure-track positions in the US? If so, is this equivalent step "stable" country-wise, or does it change considerably depending on the country? More generally, what is the "natural" progression in Europe after the postdoc phase?
>
>
Sorry for the many questions, but most of the advice I usually find is very US-centric.
EDIT: To make the question a bit more specific, I'm particularly interested in France, UK, Italy, and Germany. Also, I'm trying to pursue an academic career *in mathematics*, so I'm also interested in those aspects specific to the discipline, if there are any such.<issue_comment>username_1: In Germany, the typical career progression after the Ph.D. is either a [*Habilitation*](https://de.wikipedia.org/wiki/Habilitation) or a [*Juniorprofessur*](https://de.wikipedia.org/wiki/Juniorprofessur), after which one can apply to full professorships.
The *Habilitation* is slowly decreasing in prevalence, with different fields being more or less quick about this process. It is still more prevalent in the humanities, where it often still is "the second book", after the Ph.D. dissertation. In the sciences and mathematics, AFAIK it is more common to collect a number of publications and submit these. This would typically be done in a standard postdoc position.
In contrast, the *Juniorprofessur* is modeled after the US tenure track. You are supposed to start it right after the Ph.D., or at least not too long after. A *Juniorprofessor* has pretty much the same rights and responsibilities as a full professor. After two positive evaluations, you are deemed *berufbar* and can apply to full professorships. Some *Juniorprofessuren* are actually tenure track, but whether this option even exists is up to the German *Bundesländer*. Conversely, this means that some - probably most - *Juniorprofessuren* are *not* tenure track, but time limited. You can tell a *Juniorprofessur* by its pay scale: W1.
In addition, there are a few other paths to a full professorship, like the [*Emmy Noether-Programm*](https://de.wikipedia.org/wiki/Emmy_Noether-Programm).
As above, once you have completed your Habilitation or the second positive evaluation of your *Juniorprofessur*, you can apply to full professorships. Typically, if you have a larger group or more responsibilities, you'll be paid more (W3 compared to W2), but there are no fundamental differences between W2 and W3 professorships, it's mainly a question of your pay. Normally, W2 and W3 professors are tenured *Beamte*. Some *Bundesländer* will not give you *Beamter* status if you are too old when you first reach eligibility. Then you are still a member of the public service and essentially unfireable, but there are differences in health insurance and social security.
If you don't aim for "real" universities, but for [*Fachhochschulen*](https://de.wikipedia.org/wiki/Fachhochschule_(Deutschland)), you don't need a *Habilitation* or a *Juniorprofessur*. Instead, you will be expected to bring a Ph.D. and five years experience outside academia. AFAIK, most *Fachhochschulprofessoren* are not tenured, although their contracts are typically not time-limited and essentially for life.
These earlier questions may be helpful: [Do professors in Germany have other payment than their standard salary?](https://academia.stackexchange.com/q/62240/4140) and [Is it possible to write a proposal to get a junior professor position in Germany, when existing openings seem far from my research interests?](https://academia.stackexchange.com/q/62504/4140)
Upvotes: 4 <issue_comment>username_2: In the UK there are four levels of permanent academic employment: lecturer, senior lecturer, reader, professor. These four positions between them are roughly equivalent to the three US positions of assistant professor, associate professor, professor. These days lecturer is often a probationary position for the first few years, but higher positions are permanent. Promotion to senior lecturer does not follow automatically upon being confirmed as a permanent member of staff. Apart from this practice of probationary appointment there is no formal tenure system. In my experience (which is limited to mathematics departments) it is quite unusual for probationary appointments to not become permanent.
In my understanding, not very long ago senior lecturer and reader were seen as roughly equivalent, but senior lecturer was seen as more heavily emphasising teaching and administration whereas reader was seen as a position which more strongly emphasised research. However, there is a developing tendency to see the two positions as having essentially the same nature, with reader simply being more senior.
Some British universities are in the process of adopting the three-tiered U.S. nomenclature, but such universities remain a minority at present.
Upvotes: 5 <issue_comment>username_3: In French universities, both "maître de conférences" (≈ assistant/associate professor) and "professeur" (full professor) permanent position levels are effectively tenured: they are *fonctionnaires*, i.e., they have government employee status and, as such, enjoy a very high degree of job protection. The same holds for "chargé de recherche" and "directeur de recherche", which are the corresponding positions in pure research organizations (like the CNRS), with no teaching duties. Things might be slightly different in other French educational institutions, like the *grandes écoles*.
There are also a number of non-permanent positions like "ATER", "moniteur" or "vacataire", with contracts for a specific time.
Upvotes: 4 <issue_comment>username_4: In Italy the tenure-track-equivalent position is called RTD-B, an acronym for Ricercatore a Tempo Determinato tipo B, that is to say, fixed-term researcher of type B. The term is three years; by the end of it you are supposed to have passed a national habilitation based on a few key metrics, and then to become Associate Professor (Professore Associato).
The "Ricercatore" title is the italian nominal equivalent of a USA Assistant Professor. I specify nominal because, contrarily to USA positions, you are not expected to start a group of your own, your are not given any research money to do so, you are not expected to supervise BS, MS, or PhD students. You do are expected to perform high-class research; the independence of your research will largely depend on the ability to attract funds on your own. In that regard, your moral standing is more like a Research Scientist in USA, unless you are capable of attracting large national grants, or large AND prestigious European grants. With those funds, you will be able to gather all resources you would need for fully independent research.
You will be expected to teach, and the teaching load will vary a bit depending on the institution that hires you. Typically one course per year, that amounts to 50 to 80 hours of formal lectures, to a typically a large number of students, grading exams scattered throughout the year.
One thing to keep in mind is the selection process. In the USA, an institution will receive several applications, make a short list, and invite the shortlisted candidates for one or two full-days interview. In Italy, institutions nowadays use shortlisting as well, but interviews last only 20 minutes or so. Also, recommendation letters are not needed in Italy, although some places are starting to ask them.
Upvotes: 2 <issue_comment>username_5: The Netherlands has three different grants, specifically the Veni - Vidi - Vici levels of grants provided by the NWO (Nederlandse Organisatie voor Wetenschappelijk Onderzoek). The three levels work as follows (info taken from [here](http://www.nwo.nl/en/funding/our-funding-instruments/nwo/innovational-research-incentives-scheme/index.html) on 4-4-2017):
1. Veni - For researchers who have recently obtained their PhD - Maximum of € 250.000.
2. Vidi - For researchers who have gained several years of research experience after their PhD - Maximum of € 800.000.
3. Vici - For senior researchers who have demonstrated an ability to develop their own line of research - Maximum of € 1.500.000.
None of these grants comes with a permanent position per se, as all PIs are expected to fund themselves (by acquiring additional funding from various (e.g. EU) sources). However, most researchers who become full professor have used one (or more) of the above-mentioned grants while "settling down" in a research institution.
Upvotes: 1 <issue_comment>username_6: There is now a trend towards offering assistant-professor-level positions explicitly with tenure track. Recent examples can be found for [Sweden](https://scholarshipdb.net/jobs-in-Sweden/Research-Fellow-Assistant-Professor-In-Ai-And-Machine-Learning-Link-ping-University=gkuJxB7e6RGUWgAlkGUTnw.html), [Denmark](https://akatech.tech/announcement,a267.html), the [Netherlands](https://akatech.tech/announcement,a460.html), [France](https://polytechnicpositions.com/assistant-professor-tenure-track-in-energetics,i2262.html), [Luxembourg](https://polytechnicpositions.com/assistant-professor-tenure-track-in-computer-science-space-informatics-space-robotics-and-automation,i2675.html) and [Germany](https://universitoxy.com/assistenzprofessur-w2-tenure-track-fuer-biologie-der-eukaryotischen-gen-und-genomregulation,i7393.html). This trend seems to be supported by some political initiatives. For example, Germany recently started a [program to establish 1000 tenure-track professorships](https://www.bmbf.de/de/tenure-track-532-neue-professuren-bewilligt-9614.html) (the so-called "Wankaprofessuren", after the former minister of education <NAME>).
Upvotes: 2
|
2017/04/02
| 872
| 3,628
|
<issue_start>username_0: Currently I am finishing my masters degree program in Life Sciences / Biotechnology by doing an industrial internship. After having obtained my degree, I would definitely like to do a PhD if I can find an interesting project in a location where I would also see myself living for a couple of years.
Another, non-career-related ambition of mine is to go travelling for a couple of months. Most would say that the time between obtaining your degree and starting a PhD is perfect for this; however, at the moment I lack the money to make such a trip.
One option that came to mind would be to start a paid PhD program, which would allow me to save up some money for the trip. Now the question is: would it be possible to take months off from my PhD work in the middle of the program? Note that I am not looking to save up vacation days and use them all on this travel period; I intend to take a real period off (and not be paid as a consequence).
I do realise that if this is possible at all it will be very specific and varying between different programs/supervisors/institutes, but I am looking for the general view in academia on this topic.
Edit: the geographical location where the PhD program would be done, is most probably Europe.<issue_comment>username_1: This is almost completely dependent on two things:
* Your departmental policies and program structure
* Your advisor
In the US, most programs (all of the ones I'm personally aware of) work on a 9-month academic year schedule, so there's generally 2-3 months a year left over. This is a time that is variously spent on research, conferences, teaching appointments, or internships, so you generally want to be on the same page as your advisor (and department) on your plans.
Not all doctoral programs run this way, though, so if this is important to you, you should ask current students, potential advisors, and administrators in prospective programs how this generally goes.
As to affordability, that's really up to you in how cheap you want to (and are able to) live, choosing programs that are in affordable areas, and availability/pay of internships in your field.
Upvotes: 3 <issue_comment>username_2: In at least some (and perhaps many) European countries you're an employee for the duration of your Phd. This makes it quite hard to get unpaid leave. Sometimes there are allowances for a sabbatical, but this involves saving holidays over a few years, and it would be paid (in the Netherlands and Germany at least).
I don't know how long you want to leave, but in some countries you'd have up to 30 holidays a year. If you can arrange to save them for the next year, and depending on holidays, you'd be able to travel for 10-12 weeks. Paid.
Whether or not this is accepted by your supervisor is an open question, but supervisors in Europe are very used to Phd students leaving for 3-4 weeks periods every year. I can imagine that extending this period is not such a shock. I would expect you to have more trouble to convince your supervisor you can be productive for almost 2 years without any additional holidays.
Upvotes: 3 <issue_comment>username_3: Every program I have been a part of (albeit only US) calls this a leave of absence. There is a formal process that is typically well established. You can probably find the particulars on their website. I googled a "leave of absence (school name)", and the first link was it for the three schools I tried.
Their may be informal consequence with regards to your adviser and colleagues, but I suppose that is a different question.
Upvotes: 1
|
2017/04/02
| 1,152
| 4,958
|
<issue_start>username_0: I am a TA at a university who marks student work. The instructor requires TAs to meet with students who have marking issues. But I find most students just come to meet me for better grades, not to ask questions, and some are tough to handle; they just insist they should get better grades.
I feel that is annoying. Is it a better idea to handle the cases by email, and let them talk to the instructor if they have further questions?<issue_comment>username_1: Part of the rite of passage of being a graduate student is having to deal with insistent students in person.
If students ask for more marks, you can **firmly** but politely state that marks are not subject to negotiation, but that you'll gladly discuss the contents of the answer.
Aside: learn early to be firm, for otherwise word will spread that you are soft and you will invite more headaches. Being firm can lead to awkward silences, as you may rapidly reach a point where there is nothing more to say. If you reach a dead end you can always close by **firmly** but politely asking if there are other topics to discuss.
Upvotes: 5 <issue_comment>username_2: While the situation is indeed annoying, think of it as a training opportunity: by dealing with these insistent students you are getting really good practice in standing up for yourself in a professional environment (moreover, in a low stakes situation where nothing much other than your ego is at stake, and where you are the person in authority, which makes things a lot easier). The skills you will develop through this practice will be really valuable for you later on in life when the stakes can be much higher -- say, when you are asking for a promotion or a raise, or arguing an important technical point with a colleague or superior. So, the next time you hold office hours think of it as "I am going to my `assertiveness class'" instead of "I am going to be attacked by hostile students". This subtle change in perspective could end up making the experience a lot more tolerable, even fun.
Upvotes: 4 <issue_comment>username_3: I strongly recommend publishing and adopting the following policies:
* Publish detailed grading rubrics describing how you assign partial credit. Follow them slavishly when handling regrade requests.
* Regrade requests must be submitted *in writing* at most X days after the graded work has been returned. Each request must include a brief written explanation of why the grade is incorrect. (Note: *not* "...why they deserve more points.")
* While you are happy to discuss answers and even grades in person, you will not change any grades in the students' presence. Or maybe even within 24 hours of meeting with the student requesting the regrade. *Maybe* add an exemption for arithmetic / recording errors.
* Your responses to regrade requests are final. Further appeals will be automatically forwarded to the instructor. (If the instructor bounces them back to you, that means they're happy with your earlier decision.)
Finally, ask the instructor to announce these policies on the course web page. Better yet, **ask the instructor to forbid you** to change grades in the students' presence, or to regrade the same submission more than once.
As for demands from students for higher grades: Look them straight in the eye and tell them that giving them anything less than the fair and honest evaluation that they (or their parents) have paid for would be grossly unfair, not only to other students or future employers, but *to the students themselves*. Refer all complaints about your grading rubric to the instructor. Unless they want to discuss the substance of their work, gently but firmly kick them out of your office.
Upvotes: 3 <issue_comment>username_4: I always went with "sorry, but I cannot give you marks you did not earn".
If they gave me a hard time (and I had a few... "well if you would just TRY to find somewhere to give me extra marks! I can't believe you won't try to help me pass!")
I just say sorry, but it isn't my responsibility to do the work. It's yours, and I can only give you marks where you've earned them.
Upvotes: 3 <issue_comment>username_5: These headaches are a reason we educators have a job.
Depending on the person and the work, I like to look at their exams again and explain why they lost marks and where they could do better. In the rare case that this doesn't do them justice, I make it clear that on this exam, today, right now, they do not deserve the marks they're asking for. I'll typically throw a few tips at them and tell them to prepare better so they can make up the marks next time.
I like to be open-minded when it comes to a student's marks, because I know a lot of incredibly bright students who make silly and sad mistakes I know they could have avoided. Totally shutting them out and saying things like "marks are not up for negotiation" can really destroy the self-esteem of these students and make them lose hope throughout your course.
Upvotes: 2
|
2017/04/03
| 1,126
| 4,758
|
<issue_start>username_0: I am reading a paper on arXiv.
However, the paper is completely in Russian:
<https://arxiv.org/ftp/arxiv/papers/1701/1701.02595.pdf>
Is there any free online service I can use to translate this paper to English?
|
2017/04/03
| 1,030
| 4,212
|
<issue_start>username_0: I am currently a (USA) Master's student studying biochemistry. I intend to get a PhD after my Master's, but I am doing a Master's first because my husband is going to finish his PhD and we will move to another city before I would be able to finish a PhD at my current university (this was planned from the beginning; I applied to the Master's program, not the PhD program).
Normally, people in my program graduate in 1.5-2 years. I could graduate in that time frame and find a tech job in the area or I could continue my research in my program until my husband finishes (~3.5 years from the time I started). Given where we live, I think the job I could find would be a job that will help me develop additional lab skills, but probably not result in publications.
My advisor is happy for me to stick around and it would be convenient for me to do so, but I'm concerned that it will look on a resume/PhD application like I was either lazy so it took so long or just bailed from a PhD and dropped to a Master's.
I understand that if I had a fantastic publication record it might make up for the long Master's program, but I expect my publication record to be average.
In summary, **does it look bad on PhD applications to extend the length of my Master's program instead of getting a job as a technician for 2 years?**<issue_comment>username_1: It seems to me that the strongest argument in favor of a 3.5 year master's degree you mustered was "it would be convenient for me to do so." You said yourself that you don't think the extra time spent will pay sufficient academic dividends; rather, you are hoping to break even but worried that it will look bad. I think your worries on this are plausible: 3.5 years is a long time to spend in a master's program. The explanation of wanting to stay in the same city as your husband is certainly reasonable from the greater perspective of life, but I have a hard time seeing how your advisor could swing that in a really positive way in a recommendation letter.
Why don't you want to spend a year or two working at a "real job," presumably making real money, and (bonus) developing skills that you could then use in a PhD program?
**Pro Tip**: staying in grad school because it's easy, or because you're vague on the alternatives, is pretty much never a good idea. The time dilation experienced by an insufficiently motivated grad student is amazing: it is not an exaggeration to say that many grad students spend a year doing what a really passionate (and capable, and well-trained) student could do in a month or even a week. Again, that is not an exaggeration: I've seen plenty of instances. The thing you're allowing a 2:1 (or 12:1 or 50:1) time dilation on is in fact (a big part of) your life.
Of course, the flip side of all of that is: tell us again why you can't do good or great work rather than average work? And also talk to your advisor about it:
>
> My advisor is happy for me to stick around...
>
>
>
Not great or even good. She either has an overly self-centered attitude to student mentorship or somehow has gotten the impression that you're not a very serious student. But you could be if you wanted to, right? And if you're really not...then just finish up your degree and move on.
Upvotes: 3 <issue_comment>username_2: It depends on whether you come out of it with strong publications for the extra time, or whether you really needed the time to fulfil the degree. Of course, it won't matter much after the PhD if you do well then and build up a strong CV.
The main immediate concern is really financial. Why continue as a student? If your advisor is happy with your performance you could work for them short term to continue the project (and publish). This is commonplace in my country.
If you're planning on doing a PhD and have a good relationship with your current research group, another option may be worth considering. Speaking from experience (I've been completing my PhD for the past year with my fiancée in another country), long distance is hard, but it is a viable option for many people. That's a personal choice and I won't pressure you into it, but it's worth considering if you feel the career opportunities are worth it.
Upvotes: 0
|
2017/04/03
| 1,169
| 4,461
|
<issue_start>username_0: It's always been my understanding that age precedes nationality in standard adjective order (see, for example, [The Royal Order of Adjectives](http://www.dailywritingtips.com/the-royal-order-of-adjectives/)); thus, it should be "young Italian adults" rather than "Italian young adults".
However, I'm revising a demographic study and decided to double check this by searching Google Scholar. To my surprise, I found 539 uses of the latter vs. 217 of the former. Regular Google delivers 88,700(!) vs. 1830, and I get similar results with various nationalities and also from Google Books (though not from the news).
I don't think such a large discrepancy could be ascribed to overall misuse, and the "incorrect" order occurs mostly in academic texts. So, my questions:
1. Is there some reason why the order of age and origin adjectives
should be reversed in academic (specifically demographic) texts?
2. Is that really the standard?<issue_comment>username_1: I speak English only as a second language, and I don't do research that involves young adults, but my guess is that it's because "young adults" is a fixed expression that is very close to a compound word. So it feels weird to split it with an adjective. This would be a good question for English.SE, but I think that we are in the middle of a linguistic process where this expression is transitioning to become a true compound word.
Anyway, it is a common saying that "the international language of academia is broken English", so you shouldn't take academic texts as an example of style. For most academic writers English is a second (or third, fourth...) language, and many write by translating word-by-word constructs from their native language. It shouldn't be surprising to find the wrong word order.
Upvotes: 7 [selected_answer]<issue_comment>username_2: Consider that the study is quite likely to be comparing *young adults* from a number of countries. The sentence "The Italian young adults have an average location further north than the Australian young adults" would work if you used "young Italian adults", but it would start to break down logically if you tried to avoid repetition: "The Italian young adults have an average location further north than those from Australia" is fine, but "The young Italian adults have an average location further north than those from Australia" seems to refer to people from both Italy and Australia.
The adjective order rule is a *descriptive* rule. When writing it can be treated as a rule of thumb at best. Consider the example in your link:
>
> Have you ever wondered why we instinctively say “the shiny new red car” and not “the red new shiny car”?
>
>
>
Now the following conversations:
* "Which shiny new car is yours?" "The *red* shiny new car."
* "Which new car is yours?" "The *shiny red* new car."
Apart from the excessive repetition, these are more natural orders because the adjectives carrying the important new information come first.
Upvotes: 4 <issue_comment>username_3: In English, adjective order generally conveys emphasis and grouping; the adjectives are not all equivalent, nor do they always conform to a standard order by type.
"Italian young adults" is probably making a distinction about the Italians amongst young adults.
"Young Italian adults" is probably making a distinction about the young amongst Italian adults.
Academic documents are probably more likely talking about young adult demographics, so the nationality is the more specific distinction and so that adjective is used first.
The above would be true even if (as other answers have pointed out) "young adult" were not a common demographic phrase.
Upvotes: 3 <issue_comment>username_4: "Young adult" is probably a set phrase in the literature. E.g. "young adult fiction."
Upvotes: 0 <issue_comment>username_5: A few weeks back, a viral post on "grammar rules you don't know you know" (not an exact quote, but close) showed up on my social media feeds. It was on adjective order for English as a Second Language Learners.
["Adjectives in English absolutely have to be in this order: opinion-size-age-shape-colour-origin-material-purpose Noun"](http://www.bbc.com/culture/story/20160908-the-language-rules-we-know-but-dont-know-we-know). As you can see, you're asking about origin -> age -> noun because it feels wrong to you, and the (semi-)untaught rule agrees.
The link isn't to the exact article I read, but is the same author discussing it.
Upvotes: 1
|
2017/04/03
| 1,016
| 3,887
|
<issue_start>username_0: As the title suggests, the issue is as follows:
>
> I finished my (4 year) Bachelor and my (2 years) MSc at department A. A few years ago, the department changed its name and also the character of its undergraduate studies. Now it's a department of *Engineering* (5 year studies) and it belongs to the Engineering division of the university, as opposed to the division of Natural Sciences where it was before.
>
>
>
**Question:** Does it matter how to list it? In any case, what is the best way to list it on my CV?
The most obvious thing would be to list it as it was formerly known. But this department does not exist anymore and, moreover, it changed division, so there is a chance some people would be confused. Another option is to list the former name/division with its current name in parentheses, but that is way too long and I do not like it, and not only for aesthetic reasons. The option to list only the *current* name/division is out of the question for obvious reasons.
Thank you in advance.
Note: There is a possibly relevant question about changing the name of the *university*. I am not sure if it is the same as my issue because I am talking about the change of the name and character of a *department*. The context is possibly different and could generate different answers.<issue_comment>username_1: List it as the degree you earned, as it was called at the time you earned it, because that is an accurate reflection of what you studied, and it's what your degree says (or what a verification would say). If an employer gets to the stage of verifying your degree, you could let them know that the program changed name, but it's not strictly necessary (and they won't always tell you before doing a verification).
Programs come and go and change names, and whole universities open and close, but even closed programs/universities have alumni and their studies/degrees are still valuable for the knowledge gain (at that time) they represent.
Even in cases like [this one](https://www.nytimes.com/2017/04/05/us/high-school-journalists-principal-quits.html), where credentials were called into question based in part on a school not conferring a particular type of degree claimed, the fact that they historically did (or didn't) could be answered by a simple question, and there are opportunities to explain the name change.
Upvotes: 3 <issue_comment>username_2: I have actually had a similar experience with a couple of my qualifications. When I asked, I was advised of a few options:
Option 1: Very simple; I only use this for qualifications earned a while ago that are not immediately relevant to my current research (I have 7 qualifications in 3 different fields):
>
> [Degree]
>
> [University name]
>
>
>
or:
>
> [Degree]
>
> [overarching Faculty name]
>
> [University name]
>
>
>
Option 2: A bit more descriptive and possibly overly elaborate; in my experience, I have never really been asked about the departments, but rather about what I learned in the degree itself.
Still, these are included here as suggestions.
>
> [Degree]
>
> [Current Department name]
>
> *formerly [Previous Department Name]*
>
> [University]
>
>
>
Option 3:
>
> [Degree]
>
> [Previous Department name]
>
> *currently [Current Department Name]*
>
> [University]
>
>
>
A key piece of advice I received about the CV was to be consistent in how the entries are listed.
Upvotes: 1 <issue_comment>username_3: This happened to me when the School that awarded my PhD changed its name after I received my degree (the larger university name is the same). I continue to list the School/degree title that it was when I completed it. The new name doesn't fully capture my coursework there. No one has ever asked about it for jobs, but if it came up I would explain to them that it changed.
Upvotes: 2
|
2017/04/03
| 514
| 1,947
|
<issue_start>username_0: Currently I am writing my master's thesis. I am confused about which ordering style is more formal in a thesis reference list. For instance, consider the following three references, which I copied verbatim and in order:
>
> [1] <NAME>, <NAME>, Incomplete pairwise comparison and consistency optimization, European Journal of Operational Research, 183 (1) (2007), pp. 303–313.
>
> [2] Z.S. Xu, Goal programming models for obtaining the priority vector of incomplete fuzzy preference relation, International Journal of Approximate Reasoning, 36 (3) (2004), pp. 261–270.
>
> [3] <NAME>, Alternative modes of questioning in the analytic hierarchy process, Mathematical Modelling, 9 (3) (1987), pp. 353–360.
>
>
>
Here the references were not written in alphabetical order. Is there a general rule (convention) for better formatting?<issue_comment>username_1: There are two systems: list the references in alphabetical order, or list the references in the order they are cited in the text. In my experience, which of these two systems is used depends on the academic field you're in.
Before computers made searchable pdf files common, the first system was really useful when looking for a specific author or reference in the bibliography, and the second was really useful when looking for where a reference was cited in the text.
With current technology, I don't think it really matters which system you use, although you should pick one and stick to it. My advice would be to check some other theses submitted to the same department to see how they did it.
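If your thesis happens to be written in LaTeX with BibTeX, the two systems correspond to standard bibliography styles, so switching between them is a one-line change. A minimal sketch; the `.bib` file name and the citation keys are hypothetical placeholders:

```latex
% Minimal sketch: choosing between the two common ordering systems in BibTeX.
% "thesis.bib" and the citation keys below are hypothetical placeholders.
\documentclass{report}
\begin{document}
Incomplete preference relations were studied by Xu \cite{xu2004},
building on earlier work on the analytic hierarchy process \cite{modes1987}.

% "plain" sorts the reference list alphabetically by author;
% "unsrt" keeps it in the order of first citation in the text.
\bibliographystyle{unsrt}
\bibliography{thesis}
\end{document}
```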
Upvotes: 2 <issue_comment>username_2: Each academic discipline has a citation style it uses; for example, the social sciences use APA. Unless your university or department has specified a particular citation style, I would recommend using the style customary in your academic discipline (APA, Chicago, MLA, etc.).
Upvotes: 0
|
2017/04/03
| 2,074
| 8,789
|
<issue_start>username_0: I am supposed to meet with a PI to discuss the possibility of joining his lab as an MSc student.
I had a meeting with another PI last week, but it went badly. Essentially, he was put off when I told him about my GPA and, mostly, when he asked me specifically what exactly I want to get out of the MSc experience in his lab (e.g., a specific instrument that I'm interested in mastering, etc.). He told me that I should know my goals exactly prior to starting a grad program and that an interest in the field isn't sufficient.
I agree; however, no matter how much literature I read in that field, I will still have the broad answer "I want to contribute to your current project", because as an MSc student there is very little I can do in terms of independent research projects. Most graduate students find their passion once they start their program.
He was also put off when I told him that I intend to work in industry, and I was surprised, because the majority of students with higher degrees decide to work in industry (e.g. GSK, Pfizer, etc.). Perhaps I should have lied? But that shouldn't be the basis of our relationship as student and PI. There is no shame in pursuing a research degree when I intend to go into R&D. Why do most PIs make me feel that I should always answer "I want to become an academic researcher"? Because I don't.
If I made mistakes in the previous interview, please highlight them to me so I can avoid making them again when I speak with the next PI (whose main concern is funding). Should I not have said that my ultimate goal is to work in industry? And how exact should I be in terms of my goals as an MSc student?
UPDATES: I met the new PI and it was an instant rapport! I clearly understand now how being genuine about your goals and educational background can serve you well in such situations.<issue_comment>username_1: After recently meeting with dozens of potential advisers in multiple institutions and departments over the last year, my advice is to try to get yourself out of the mindset of merely trying to impress them. Sure, you need to put your best foot forward, but there's something about academia that it's important to understand: there is a massive amount of variability between individuals, departments, institutions, and fields, in a huge variety of ways. What is impressive to one person can be off-putting to others, or a sign of a bad match.
One bad meeting isn't enough data to determine anything, but more importantly, this might have actually been a really great meeting, because you found someone who might be very unpleasant to work with because they are not supportive of your goals - short-term and long-term - at all. That's a terrible match, and finding that out is great.
As a personal example, I was talking with one prospective advisor and told them about my long-term vision (10-20 year career goals), and the response was that it's important not to be too ambitious, and basically that most people never do work that has much of an impact so you shouldn't aim so high.
I gave a similar talk and explanation of my long-term career goals to another potential advisor, and their response was that they were extremely surprised I had developed such a clear vision of what I wanted to do and had the maturity to think about the long-term, and they thought it was really great and they were very positive and enthusiastic about talking about how to make it happen. They even gave constructive criticism on how to connect short-term goals to long-term ones, what to do to get started right away, etc.
One plan, two completely different responses. And you know what? That's OK. This is why people say it's so important to be honest about your interests - don't trick your way into a relationship that will be bad for you!
As for the desire to go into academia vs. industry, this is another issue of fit. When I talked with some people about wanting to be an academic, they talked at length about how competitive the markets are, how much better the pay in industry is, are you sure you don't just want to get a job, etc. Other people volunteered that "it is not our job to decide where you want to go for you - it is our job to help you make your dreams happen". And yet others were extremely supportive and realistic, but talked about what kind of strategy would be necessary to make it happen, how to prepare, and asked to make sure I really understood what I was aiming for, etc.
In my experience, this was partly a factor of department, and partly an individual factor. Some programs and/or advisors place the majority of their people into academia (60-80% is not unheard of), and they are proud of that fact, sometimes even flat out saying on their website that is the goal of their program. Other places place more on the order of <10% into academia - so why on earth would they have an issue with people going into industry, and if you wanted to go into academia shouldn't you prefer a different department?
What To Do Differently
--------------------
I did detect a few things I might suggest you adjust, though. One is how to talk about your goals. "Go into industry" is a common phrase, but it also suggests you don't really have an idea of why you want to go to grad school or what you want to do afterwards. You might be more specific, such as saying that you want to contribute your developing expertise within an R&D department to develop drugs with fewer side effects, or to fight aging, etc. - whatever suits your interests. If you are also open to other ideas, say that too!
If you say you just want to learn an instrument, that might sound too limited, depending on what you were naming. Going to grad school to learn to use a centrifuge might sound unambitious. Learning how to do advanced gene sequencing and interpret genetic-based drug interaction models - that just sounds a bit more developed, but you again should focus on what you care about.
What if you just honestly don't know? I have found the best result is to say what you have explored. Something like, "Well, I've explored anti-aging treatments, and degenerative diseases, and infectious outbreak analysis, and I've actually really enjoyed it all so I'm having trouble deciding. I'm open to trying a few different approaches and subjects in this field, as I've so enjoyed the work I've had the opportunity to do so far." You can note limited opportunities to explore, etc. This is all honest, just a bit more useful and interesting than "I dunno".
Above all, keep at it! I suggest meeting with over a dozen people, if you can possibly manage it, even outside your immediate field of interest if necessary. Focus on learning from the interactions and having a mutually interesting exchange - and don't jump to conclusions on too few data points.
Upvotes: 6 <issue_comment>username_2: BrianDHall's answer has some great advice, in particular the idea that these meetings are for your benefit as well as the potential advisor's, in finding out whether or not the two of you are a good match.
I want to focus on one bit of your question:
>
> ... he asked me specifically what exactly I want to get out of the MSc experience at his lab ...
>
>
> ... no matter how much literature I read in that field, I will still have that broad answer which is "I want to contribute to your current project" because as a MSc student, very little I can do in terms of research projects. Most graduate students find their passion once they start their program.
>
>
>
I suspect that there were a few things that your potential supervisor was seeking to establish in this line of questioning:
* Are you genuinely interested in the subject matter, or are you doing the MSc because you can't think of anything better to do?
* Where exactly do your scientific interests lie? Are they in the techniques used, the subject knowledge, the applications of the work...?
* Are you able to show some independent/creative thought?
You may be correct that in reality, once you start the MSc, you would end up just "contributing to the current project" in a way largely dictated by the supervisor. But that is not a very helpful answer to give in the meeting. Instead, you need to make use of the background research that you have done. Talk about some different aspects of the supervisor's work. Demonstrate that you have some idea of where the "knowledge gaps" lie. It doesn't matter if the ideas you suggest are not possible to fit into a MSc project. It doesn't matter if you list several different aspects and say that you don't yet know which interests you most. It doesn't really matter *what* you talk about at this point, just so long as you talk intelligently and enthusiastically about *some* aspect of the project.
Upvotes: 3
|
2017/04/03
| 1,145
| 4,837
|
<issue_start>username_0: Our grades for the Coding class I am taking are divided between projects and exams. The professor I take the class under has the following policy on her syllabus:
If our cumulative average for either our exams or projects is below 65% we receive an F in the class.
For example, let's say your class grade was a 75% (C), with a 90% average for projects but a 60% average for exams. You would receive a failing grade. To me this seems highly immoral, considering some students simply aren't good test takers.
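Stated compactly (this is only my reading of the syllabus rule; E and P denote the cumulative exam and project averages):

```latex
\text{grade} =
\begin{cases}
\text{F} & \text{if } \min(E, P) < 65\% \\
\text{weighted average of } E \text{ and } P & \text{otherwise}
\end{cases}
```

Under this reading, the example above gives min(90%, 60%) = 60% < 65%, hence the F despite the 75% overall.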
Just curious, but is this policy legal in any way?<issue_comment>username_1: "Legal" is a very specific thing - and there's almost certainly no law governing how university professors in your area grade. So I would abandon the notion of *legal* right now.
Now, *is it a good idea*? There are arguments that could be made both ways - as you've noted, some students might not be good test takers, but on the other hand, failing any particular aspect of a course betrays a lack of mastery of the subject. This also prevents people with "good enough" scores from blowing off work that won't harm their average much but might lower their score in a particular facet.
When it comes down to it though, professors have a *massive* amount of leeway in how they grade courses.
Upvotes: 2 <issue_comment>username_2: It doesn't seem too unusual to me, and I don't think very many people will share your view that it is "immoral".
At least in US universities, the professor has quite wide discretion to determine the grading policy, so long as it is clearly described in the syllabus and applied equally to all students. This seems well within the realm of acceptability. Schemes based on some weighted average of assignments and exams are the most common, but I've certainly seen schemes where you have to have a minimum grade on certain components to pass.
As far as "legal", laws tend to avoid micromanaging academic matters, so I would be very surprised if there is any law that would forbid this. University policy is more likely to address it, but again, I'd be surprised if what you describe wasn't allowed. I don't think you will have any success trying to fight this at a higher level.
As far as it being unfair to students who are poor test takers, bear in mind that in many educational systems, it is common for the course grade to be determined *entirely* by a single exam. Rightly or wrongly, there's a long history of evaluating students by exams, with the attitude that students who aren't good at taking tests need to find a way to get better at it. (Accommodations may be made for students whose test-taking difficulties are due to a diagnosed learning disability, but this usually means adjusting the exam conditions rather than de-emphasizing exams in general.)
You are certainly free to express your disagreement with this policy, either directly to the professor or in anonymous evaluations, but it may not be possible for her to change it after the course has started. This could be seen as unfair to students who have been setting their priorities accordingly.
Upvotes: 4 <issue_comment>username_3: Most likely, the exam and the projects are intended to test *different* skills. So, it is only reasonable that a student has to pass both to pass the course. This is common practice at most courses in my department (in the Netherlands).
When I design a course, I have to specify the 'learning goals' and I have to state explicitly where and how I test them. My course might have five learning goals, three of which are tested in an exam and two in project work. Since passing the course means I certify that you achieved all the learning goals, you have to pass both parts, and you cannot compensate for a fail on one part with a good grade on the other. Therefore, in my view, the professor is doing the right thing.
Whether 65% is the right cutoff value probably depends a lot on the context, which is why we cannot judge that (see also the comments to your question).
Upvotes: 3 <issue_comment>username_4: It's an obvious way to stop students gaming the marking system by doing only half the work for the course yet still getting a pass grade.
It is a common practice outside academia for professional qualifications (e.g. accountancy) - why would you want to call somebody a "professional" if they aren't at least competent (even if not experts) in *all* the aspects of their profession?
Of course some people might take the view that anything which prevents *every* student from getting a degree is "immoral" - but I'm not one of them. If you take that to its logical conclusion, just issue everybody with a degree certificate when their birth is registered, and save all the time and money that is currently wasted on building schools, paying teachers, etc!
Upvotes: 0
|
2017/04/04
| 527
| 2,298
|
<issue_start>username_0: I'm an undergrad student at a fairly well-known math program (top 20). I have taken several upper-level undergraduate courses (real/complex analysis, honors abstract algebra) and graduate-level courses (commutative algebra, topology), and I also did some independent study with one of the professors here in homological algebra. I did very well in my math courses (all A and A+). However, some of my gened grades are not very good (I'm not interested in those courses because I think they are useless, and I would rather spend my time learning math on my own than studying for their exams). My question is: How bad will it be? Will those general education courses hurt my chances of admission to grad school? Thanks<issue_comment>username_1: I'm not in your discipline, so I'm not sure how much this is similar... When we evaluate doctoral and master's-level students for admission, we first check if their GPA meets the minimum criteria. If it does, then we look to see how well they did in courses that would help them succeed in our graduate programs. If your GPA hasn't suffered too badly because of the non-math courses, they might not hurt you. However, I would caution against telling admissions personnel/reviewers that you believe the non-math courses were "useless" so you blew them off. That doesn't sound very professional. If asked, I would say that you focused more on the math courses to prepare for your graduate studies, which took time away from your other coursework.
Upvotes: 3 <issue_comment>username_2: My answer is based on limited experience serving on a graduate admissions committee at an R1 university in the US. My sense is that it won't make much of a difference, though the effect can be lumpy. Most people will probably barely notice, but there are probably some who will think that this reflects a worrying lack of discipline, which can be a big problem in graduate programs. Most professors have had bad experiences with students who think that requirements shouldn't apply to them and that they can blow off the ones they don't feel like doing; giving that impression will not help you. So, it could be a problem if you are right on the edge between admit and not, but it's a little late to worry about it now.
Upvotes: 3
|
2017/04/04
| 1,138
| 4,773
|
<issue_start>username_0: Due to my wife’s job, we’re moving to a country in Latin America (Brazil). I’m finishing my undergraduate in mathematics in the US and I’m starting to think about applying for PhD positions.
How does having a PhD from Latin America impact an academic career in mathematics in the US?<issue_comment>username_1: There's nothing specific to Latin America in this question, as in any place there are good and bad PhD-granting institutions. You should follow all the standard advice for applying to PhD positions in any country (check the reputation of the university, of the department, of the professors, of the journals they publish in, etc.).
You don't mention the city you'll be living in, but most likely, if you are in a large center, there will be a reputable PhD program in mathematics. By this I mean that you'll be able to work with people who are actually doing research in mathematics and publishing in top journals. Depending on the city, you may be constrained to working in some specific areas of mathematics, though.
To answer your question: If you have an active researcher as an advisor, and if you are able to publish good results during your PhD, you will be able to compete for postdoc positions in top centers in the US.
Upvotes: 1 <issue_comment>username_2: The institution from which you have a PhD is of minor significance compared to:
* What you've published
* Where you've published
* Who you've published/worked with (particularly, your advisor)
* Who can recommend you to others
With all of those being equal, a PhD from a better-rated *university* (not country) will improve your evaluation as a candidate to some extent, I suppose.
However, I believe you're thinking about this the wrong way. I would suggest trying to find specific potential post-doc hosts - researchers whose work is related to what you are interested in - and contacting them directly rather than thinking of "applications" in the general and the abstract.
Upvotes: 2 <issue_comment>username_3: While the answers telling you that what matters is your advisor and the research you do in graduate school, not the institution, are in a certain sense right, I feel like the underlying message of them is pretty misleading (or at least incomplete).
It seems worth mentioning, first off, that professional mathematicians working in the US who got Ph.D.'s in Latin America are extremely uncommon. I'm sure there are some, but I literally cannot think of any off the top of my head. In fact, professional mathematicians in the US who got Ph.D.'s anywhere outside North America or Europe are incredibly rare. Look through the faculty of any math department you like and see if you can find any. There will be a fair number of faculty who were born or grew up or even got a bachelor's in another country, but very few who got a Ph.D. outside the US. Actually, every time I've done this, the number of schools that appear outside, say, the top 10 ranked graduate programs in the US is terrifyingly low.
Now, it's actually quite hard to conclude anything from this exercise, since it is obviously an uncontrolled experiment. I suspect that part of what's going on here is that if you're in Latin America, and you're interested in moving to the US to advance your career in math, you won't wait until after graduate school. In general, for both intellectual and funding reasons, US graduate programs are the most attractive in the world, so that's usually the point where someone who's open to moving will do so. Similarly, probably the most important reason why the best-ranked schools show up so often is that they take the best students into their program.
So, that doesn't prove that you can't go to any institution and be successful. However, I wouldn't underrate the importance of the institution you go to in terms of shaping your ability to do good research and develop the kind of network you need. Being with the very best students (at least in my opinion) has a very big effect in and of itself. Being in the US gives you access to conferences and networks of people in the US, which is what you want to build if you want to be there long term.
Also worth thinking about: what kind of teaching experience will you have at the end of your degree? If you want to be able to get a job at a teaching focused institution, you need to have teaching experience, and you need to have people who can credibly vouch for it. Are you going to be able to get that in Brazil?
So, I would remember: you don't have to start a Ph.D. right after your B.A. It might make more sense to get a job, or to work on mathematics unofficially, if your wife is only going to be in Brazil for a couple of years. If it's going to be for longer, then you have a trickier choice.
Upvotes: 2
|
2017/04/04
| 1,170
| 5,051
|
<issue_start>username_0: I am a member of ResearchGate.net and I sometimes receive 'full-text requests' for some of my publications. Thus far, I have generally provided access to my publication. However, I am not sure if this is wise, since the profile of the person who made the request is not always visible. Perhaps some people are taking advantage of this somehow. Also, could this lead to any issues with the publisher? Any ideas and recommendations?<issue_comment>username_1: I am also a member of ResearchGate and also sometimes get full-text requests, as do my co-authors. If we are not able to email the scholar directly, we privately send a pre-print to the person asking (where we can) - usually with a message thanking them for their interest and an invitation to ask any questions should they arise.
You do have to be careful regarding the publisher's rules on this: look on their website and, if in doubt, ask.
Upvotes: 2 <issue_comment>username_2: **Work out if you can post pre-prints.** Many journals allow you to share some form of pre-print online. In some cases there is ambiguity as to whether posting pre-prints to ResearchGate is allowed by an agreement. For example, some publisher agreements limit sharing to personal websites, institutional repositories, etc. Nonetheless, most take-down requests that I've heard about on ResearchGate pertain to posting the publisher's formatted, full-text version. [Sherpa ROMEO](http://www.sherpa.ac.uk/romeo/index.php) provides general guidance about publisher policies.
**Post your pre-prints in a range of forums.** Where possible, you should provide the pre-print in several venues in order to make your work available to those who do not have institutional access. There are many discipline-specific repositories where you can post pre-prints.
**Post the pre-print to ResearchGate.** If you are on ResearchGate, then you have indicated a desire to use the system. So you should consider adding full-text pre-prints where you feel comfortable. That way, you won't get these requests, because the full text is already available.
Upvotes: 3 <issue_comment>username_3: You could request that the person make a formal request to your email account, if you are concerned that there is something strange about the individual not having their profile visible. Also, many journals (you may have to check with yours) will allow you to distribute an early version of the paper you published. For instance, a pdf of the last submission before acceptance. If you do not want to give the full, published paper out, then this is an option.
Most libraries have interlibrary loan, so I feel that it is somewhat lazy for a person to contact the author directly for a copy, unless the request is from someone outside of academia or in a developing country. Those are the only cases in which I have given out a full-text version of my paper, provided the request appeared sincere.
Upvotes: 2 <issue_comment>username_4: First off: **Receiving these requests means that people want to read your work**. In nearly any academic scenario, **this is a good thing.** Embrace it, and help them! It's difficult to imagine what nefarious purpose could be served by somebody asking to read your paper for free, when they could read it anyway by doing an inter-library loan or paying a publisher $30. Some people consider ResearchGate *itself* to be nefarious; if this is you, you will have to choose between making your work more accessible and avoiding contribution to RG.
So, assuming that you want to help this person read your work, you need to check what you are allowed to do. Then,
1. If you are allowed to upload the final version of your paper, do that. This is probably only the case if you have paid the journal for so-called "Gold OA" and it has a permissive license.
2. If you are allowed to upload a post-review preprint ("postprint") to ResearchGate, do that. [This question](https://academia.stackexchange.com/questions/81642/does-a-researchgate-profile-count-as-a-personal-website-for-green-oa-purposes), and its answers, may be relevant here.
3. If you are not allowed to upload the paper to ResearchGate, it would be worth adding a note to that effect to the description of the publication, and (if you are allowed to upload it elsewhere, such as a personal website) giving a link to where it may be found. If contact details for the person who made the request are visible, you may wish to write to them privately and tell them where they can find it, or email them a copy.
Note that in addition to "this person wants to read your work" messages, ResearchGate occasionally sends "reminders" for any publications that don't have full text uploaded, asking you to provide it. You might want to follow step 1 or 2 above if applicable, but otherwise these are best ignored. Or, to stop the emails, you could upload a PDF that simply contains a statement of where the paper can be obtained.
Upvotes: 4 [selected_answer]
|