| date (string, len 10) | nb_tokens (int64, 60 to 629k) | text_size (int64, 234 to 1.02M) | content (string, len 234 to 1.02M) |
|---|---|---|---|
| 2021/06/15 | 299 | 1,242 |
<issue_start>username_0: Here is the link to the journal: <https://www.journals.elsevier.com/computer-methods-and-programs-in-biomedicine-update/>
I'm not sure why the fee is waived or if this is a red flag in any way?<issue_comment>username_1: It shouldn't be a problem for an established publisher. Perhaps they just wound up with some extra funds and decided this would be good community relations. A smaller press might have gotten a grant for such things, but this just sounds like a feel-good gesture.
It is also possible that they need to boost submissions for some of their journals. But I don't see any negative aspects to it.
Upvotes: -1 <issue_comment>username_2: Article processing charges are quite often waived for new open access journals (as this one appears to be - Volume 1 is listed as "in progress"). As I understand it the hope is to attract researchers to publish in these journals, even though they might not yet be listed in particular databases (Scopus, PubMed etc.) and don't have impact factors (or other ways to measure journal quality, good or bad). It's a way to kickstart a journal with articles to prove it should be taken seriously (and presumably attract paying submissions later on).
Upvotes: 2
| 2021/06/15 | 468 | 1,712 |
<issue_start>username_0: I've been looking for where I can read: <NAME>. (1921). "Correlation and causation". *Journal of Agricultural Research*. **20** (7): 557-585.
I've tried the following without success.
* Going directly to the Journal
* Google search engine
* Google Scholar
* Research Gate
* Search tools at my university's library
* Asking a research librarian for assistance
Articles by the same author in a similar time period are available, such as [Systems of Mating. I. The Biometric Relations Between Parents and Offspring](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1200501/pdf/111.pdf). It isn't clear to me why the particular article I wish to read is difficult to obtain.
Is this article available somewhere?<issue_comment>username_1: People often neglect to ask for help from a research librarian at a local university. In fact, even a village library, such as the one where I live, has a librarian with "contacts" at nearby libraries, including university libraries.
Research librarians are trained to find such things and also have a network they can use to chase things down. They are a valued resource that every researcher should become familiar with.
Not only can they uncover difficult-to-find books and papers, they are also excellent at taking descriptions of research "needs" and translating them into the papers that cover them.
Upvotes: 0 <issue_comment>username_2: Here is your paper, via Google Books:
<https://books.google.co.ao/books/about/Journal_of_Agricultural_Research.html?hl=pt-PT&id=lNNdIV_qpwIC&utm_source=gb-gplus-shareJournal>
Check page 557 for the first page of the paper (if you download the pdf file, go directly to page 738).
Upvotes: 3 [selected_answer]
| 2021/06/16 | 313 | 1,192 |
<issue_start>username_0: I feel a bit tired, even exhausted, from the work I am doing. Would it be appropriate to ask my Ph.D. advisor for a few days of rest to recharge and come back more productive?
Thanks
| 2021/06/16 | 627 | 2,699 |
<issue_start>username_0: When the Ph.D. was first introduced at universities, what was its actual purpose?
1. increasing employability
2. becoming eligible for working as a researcher
3. becoming eligible for obtaining a tenured professorship
4. vanity and prestige
...
??
Is the purpose the same or different nowadays?<issue_comment>username_1: In general, a PhD degree is the means by which you prove that you are capable of conducting research in an independent manner, without being told what to do by a professor. In other words, by obtaining a PhD degree, you prove that you are capable of contributing a completely new piece of knowledge to the currently existing body of scientific knowledge. You also prove that you are capable of defending your scientific hypothesis against external attacks by the reviewers.
Therefore, for a long time a PhD degree was primarily intended to help you become a researcher, or a faculty member in some higher educational institution. Students who entered a PhD course were expected to join the ranks of academia, or prestigious research labs.
As you can imagine, things have changed a lot. Universities smelled the sweet scent of money, and dramatically increased the number of PhD candidates beyond anything that was remotely reasonable; as a result, now, you have PhD degree holders working at Starbucks or at restaurants, trying to pay off their massive debt.
Therefore, there is now a push to redirect most PhD candidates into non-academic careers, instead of joining the dwindling, bitterly competitive ranks of academia.
Upvotes: -1 <issue_comment>username_2: Historically and culturally, the main role of a university is the advancement and transmission of knowledge. Universities award degrees to students as a recognition that they have achieved a particular level of knowledge in a particular domain. So from a very big picture point of view, employability inside or outside academia is just a consequence: people are qualified for jobs *because* they have the required knowledge, as confirmed by their degrees. Of course universities and students exist in the real world so the mundane necessities of life make employability a significant part of the question, but in principle at least it's not the main purpose of academia.
Vanity and prestige might also be a part of it, but that's true of any kind of achievement. In theory a PhD is just a diploma which confirms that the incumbent has the required knowledge and skills to do research in their domain. Historically this qualification led mostly to academic research, but it's clear that nowadays there's also a demand for PhD-level skills outside academia.
Upvotes: 3 [selected_answer]
| 2021/06/16 | 583 | 2,553 |
<issue_start>username_0: I am wondering: can a postdoctoral researcher review a PhD thesis? I found many discussions about reviewing research articles; reviewing articles for journals is an integral part of research and very important for the career of a postdoc. But I did not find a discussion of whether postdocs are eligible to review PhD theses.
So if a postdoc receives a PhD thesis review request from a university, should they accept the request if interested? Or should they reply and ask the university to check their eligibility?<issue_comment>username_1: Rules concerning who can review theses or sit on a thesis committee depend on the country and probably on the university.
In my experience the university chooses the reviewers and should check whether they satisfy its requirements. I do not think that it is the reviewer's responsibility to check this.
Upvotes: 2 <issue_comment>username_2: As username_1 mentioned above, rules vary per university, and there are even sometimes different rules within graduate schools in the same university.
What happens in the majority of cases, is that the supervisor of a PhD candidate (or the department) will invite people to become members of the PhD review committee.
In theory (remember, the rules vary from place to place!), anybody can be invited to be a member of a review committee, however in 90% of the cases, only full professors are invited. Generally, 2-3 people are full professors recruited from within the university, and another 2-3 people would be people from external institutions (also normally full professors).
The reason why the rules allow anybody to be invited is because there are people who, despite not having a PhD or Master degree, have had a long career in industry or in a certain field, and their career achievements are recognized as being significant, and very valuable.
Therefore, it would be highly unusual for a postdoc to be invited to become a member of such a committee. You would need to have some highly specialized knowledge (relevant to the PhD candidate's research topic) that could not be found in any other person. In fact, if a professor invited you to be a member, that person would probably have to write some text or report justifying the reason why you were invited (instead of someone else with a longer, established career).
If you are invited, it should be fine to accept the request, but personally I would be a bit suspicious (e.g., why could they not find a person with more credentials?)...
Upvotes: 1
| 2021/06/16 | 485 | 2,129 |
<issue_start>username_0: Speaking pragmatically, does it look odd or amateurish to write "this work was not supported by any funding" or something like that on a paper? Would it seem to suggest the author is, as I say, an amateur/non-academic, or is not doing enough research to secure grants consistently?
The context of the question is, I'm currently between grants and wondering if I should wait to publish this paper until after I get a new one. It is a silly concern, but still important to me. However, I'm wondering more generally what the perception is of papers or authors who are not supported while publishing.<issue_comment>username_1: It looks amateurish to state that your work was not supported by anything. It is totally OK to just not say anything at all. Lots of papers are not supported by grants. That's likely also consistent with the journal requirements: they require you to list your sources of funding; the list just happens to be empty in your case.
Upvotes: 3 <issue_comment>username_2: It doesn't look amateurish. In most journals I know, the phrasing is not "this work was not supported by any funding" but rather "this research received **no external funding**". And it is a pretty common thing to disclose.
At least in Germany, a lot of university PhD and other positions are funded through external grants. Any publication resulting from these grants would have to disclose the funding. Disclosing research done outside one of these grants as grant-funded anyway (as you suggest) would be wrong, and borderline illegal, especially if the scope of the research and the scope of the grant don't really overlap.
On the other hand, other positions are paid through money that the university receives from the government to provide free education. This is basically state-funded research (thus **no external funding**). This does not invalidate the research in any way. By disclosing your affiliation with the university (which you do as an author), you basically disclose that you are funded; how else would you hold a position at a university?
Upvotes: 5 [selected_answer]
| 2021/06/16 | 2,743 | 10,924 |
<issue_start>username_0: I have recently discovered that anti-vaxxers have been citing one of my papers on Twitter and elsewhere as evidence of a specific 5G-related conspiracy theory surrounding the COVID-19 vaccine. Of course, my work has no relationship with any vaccine, COVID or otherwise, and their arguments are laughable misinterpretations of my results. In fact, an accurate understanding of the work would be a potent counterargument to this conspiracy theory.
What, if anything, should I do? I have not had any interactions with these people so far, but I'm concerned about my work being associated with them.<issue_comment>username_1: That's the thing with (i) countries that allow for free speech for everyone, (ii) exercising this right yourself by making your own opinion public in the form of a paper: Free speech also includes the right to misrepresent someone else's opinion.
There is little you can do about it unless whatever others claim about your work is slander or is a threat. The only thing that is within your power is to ignore these folks and move on.
Upvotes: -1 <issue_comment>username_2: I'm going to disagree with the folks saying to just ignore the anti-vaxxers.
These people are not cranks, in the sense of [the proposed duplicate question](https://academia.stackexchange.com/q/111413/22733). A crank in that sense is an intellectually isolated person who is merely wrapped up in their own personal eccentricity.
The groups opposing COVID-19 vaccines include [well-funded and purposeful organizations](https://www.ncbi.nlm.nih.gov/search/research-news/7542/) that have in many cases become linked with politics. You may not be able to stop them from citing your work, but you can certainly make a public statement explaining that they *should not* cite your work and why they are wrong to do so. There's no point in having a public fight with a harmless [trisector](https://www.researchgate.net/publication/225381383_What_to_do_when_the_trisector_comes), but there's a damned good reason to have a fight with a group that is actually effectively working to undermine public health.
The question is: are you up for the potential of a public confrontation? They'll probably just ignore you... but they might not. They might add you to their cast of villains, and that might or might not be a risk that you feel able to afford.
Bottom line: I believe that ethics indicates that you *should* oppose the use of your work in this case. The only question is how much you feel is an appropriate investment of energy and taking on of risk given your current personal and professional circumstances.
Upvotes: 9 [selected_answer]<issue_comment>username_3: [<NAME>](https://en.wikipedia.org/wiki/Daniel_Pfeiffer)\* has a [great article](https://messagebox.substack.com/p/how-libs-can-stop-owning-themselves) about combatting misinformation without bringing it more attention. I believe it's largely relevant here. The most relevant section† is here:
>
> The gist of this argument is that the only way to [combat
> misinformation] is to
> shine a light on it. Sunlight is the best disinfectant. The point is
> not wrong. We cannot ignore these dangerous trends [...]
> But how we shine that light matters:
>
>
> **Quote tweet your friends, screenshot [misinformation]**: This is an online
> engagement rule from Dash: If you need/want to push back on
> disinformation or highlight a dishonest or dangerous statement, using
> a photo of the statement allows you to make your point without giving
> the troll the information they need.
>
>
> **Don’t spread disinformation**: If you respond to disinformation for the
> purposes of debunking it, you are inadvertently instructing the
> algorithm to show the offending disinformation to more people. You can
> either use the screenshot trick above or separately share a fact check
> or article that debunks the conspiracy theory.
>
>
>
In the context of stopping the spread of your paper being used for COVID misinformation, I would interpret the above guidelines to mean: Don't retweet or share the tweets containing misinformation, or link to other sites mis-citing your work, even to point out how they're wrong. Instead, create a tweet or a response on another medium from scratch, which can be shared and that fact checks in a way that highlights the facts − not the myths. The key is not giving engagement to falsehoods, and instead try to drive engagement to facts.
\*former Senior Advisor to U.S. President <NAME> for Strategy and Communications
†American political emphasis removed to specifically highlight how this technique is applicable to the question at hand
Upvotes: 6 <issue_comment>username_4: I was in a similar situation: during a few radio programs with a homeopath and a magnetizer (or whatever the man who has cosmic power in his hands is called), I made informed fun of their "science". They were there to answer and it went sideways.
One thing I learned: their groupies are terrifying.
I was in my PhD phase at that time, and in a very hormone-powered confrontational mood so I went in headfirst.
It was great at that time, but I would not do it now (30 years later), for several reasons:
* I do not have the time I had
* I have a family and these people are nuts. They would literally show up in front of your house. I was living on a campus at that time, so it ended up well (without going into gory details: not that well for them)
* homeopathy companies in particular have lawyers: when you tell them that their science is idiotic (because dilution, atoms and everything), they will drag you to court (I need to dig up the reference for this; I read it some time ago)
* After 30 years, instead of gaining the wisdom I was supposed to, I believe that these people should be made to shut the fuck up, because they are a danger for society. So the discussion quickly turns to words usually considered unfit for a scientific discussion.
You really need to consider if you want to fight, **and** if you do, whether this is going to be interesting, amusing, fun, and energizing for you.
---
As a special note for my favourite subject of *"the science of homeopathy"* (closely followed by *"religion and science"*), I was trying with the homeopaths to settle the fact that homeopathy may very well work due to the placebo effect. When I take an aspirin, it does not even have the time to drop into my stomach and I feel that my headache gets better, see?
They insisted that there is a **physical** reason for homeopathy (memory of water, usually) and then we were done.
I also feel that people should understand that they pay 30€ for a placebo effect - which may or may not be fine for them.
Finally, there is the despicable class of preachers of alternative solutions such as these who will use the fear and despair of people and drag them from actual treatments to their crap, endangering their life.
As they say in Kingsman, needed to let off a little steam :)
Upvotes: 5 <issue_comment>username_5: Another reason to make a public statement: **to avoid becoming their "hero".**
Pseudo-scientific conspiracy theorists try to present their views as something fast-growing, something which will very soon become the mainstream, and something which is "supported by more and more scientists". Much COVID-related misinformation in particular claims that "most doctors" are already on their side (even if it is actually just an extremely small but loud minority).
The few doctors or researchers who are genuinely on their side are heralded by them as heroes. The few I researched are usually ex-medics who now make their living from selling dietary supplements and homeopathy, or have founded political parties, and the publicity gained from the controversy is good for their business.
You probably can't entirely avoid becoming their unwilling "hero" in social media posts and newsletters (*"more and more scientists, including Icyfire, are rising up to finally admit the truth!"*), but you'd surely want to *not* become that "rebel leader" in the eyes of mainstream science.
Therefore a public post, which doesn't engage in the debate but briefly and firmly states something akin to "that's not what I said" can help you avoid becoming the "hero" of that conspiracy theory at least in the eyes of the mainstream scientific community. It might also help others indirectly: nothing will dissuade the fanatics (they might think you've been bribed even in the rare chance they do look it up) but there are many undecided people out there having a more open mind, and if they see a claim in a pulp magazine or social media post about a scientist having stated something, at least some of them might look up that scientist to see what the original statement was.
Upvotes: 4 <issue_comment>username_6: You do the same thing you'd do in any disagreement: respond with reasonable arguments. You never respond to a critic (no matter how eccentric, wrong-headed, or confused) by telling them that *they're* wrong and *you're* right and that they should just TRUST you.
No. If the whole world is confused about something, you simply let them bask in their confusion and do your due diligence to state your case. You don't have to compromise, but that's all you can do. If it boils down to a pissing match, where neither party wants to listen to the other, then *you* have some work to do as well. I know of no moment in history where one side is completely right and the other completely wrong.
Upvotes: -1 <issue_comment>username_7: This issue can often be broken down into two parts:
1. Someone has misunderstood/misrepresented my research. They are claiming that it shows ABC, but this is incorrect.
2. Someone is claiming that since (as they believe) my paper is convincing evidence for ABC, this implies PQR and XYZ and so therefore [anti-vax/aliens/global cooling/flat-earth/perpetual motion machines/zombie bunnies/...].
Applying [Hanlon's Razor](https://en.wikipedia.org/wiki/Hanlon%27s_razor), (1) could be a legitimate mistake. If you are aware that a misinterpretation of your research is circulating, you might consider what you can do to clarify the situation, and ensure that anyone who wishes to 'fact-check' the claims can easily do so. For example, you could write a plain-language summary of your work, and place it on your website, or publish it via one of the many popular science websites.
Addressing (2) is more challenging, and is the focus of several of the answers here. Such debates are driven by an assortment of individuals, each with their own motivations and perspectives, and you may find yourself drawn into playing conspiracy-theory whack-a-mole. Some people enjoy this game; others don't.
My point is: the claim ABC can be addressed without necessarily getting drawn into the debates about PQR, XYZ, and the bunnies, and it may be worthwhile to do so.
Upvotes: 1
| 2021/06/17 | 584 | 2,380 |
<issue_start>username_0: If I work on a topic and in a field that is also available in my home country, then why stay in a foreign country?
For example, I work in computational materials science. Many researchers in my home country work in this field. If the quality of work is the same in my home country and at a top-ranked university in the US, what's the incentive to pursue research for 5-7 years in a foreign country, living away from family and suffering from loneliness/homesickness?
Assumption: The advisors in the US and in the home country are of equal reputation.<issue_comment>username_1: Sometimes in academia, mobility is rated "very good" simply for its own sake, without much thought behind it.
In fact there is no point, as you say, unless you feel the reputational pressure of considering things worthwhile only if they come from the medieval but highly technological country of the USA.
On the other hand, being immersed in a different environment for a prolonged time really helps lateral thinking, keeps you motivated, and puts you at ease with yourself. We as persons are constantly changing.
Considering your situation, you do not need to move abroad to do that, but you had better leave the door open for an extended period abroad during your PhD, something like 3 or 6 months in the US. Keep your eyes open for scholarships that allow this, or discuss it with your potential PhD advisor at the interview stage (later is too late: you need to put the discussion about reserving funds on the table up front; budget at least 1000 EUR for flights plus 1000-1500 EUR per month above your salary for a US stay, and time for it).
Upvotes: 2 <issue_comment>username_2: On top of agreeing with the answer by username_1 about mobility valued for the sake of itself, I'd like to add that there are sometimes specific grants available only if you're coming to country A and only if you haven't been working in country A for the last N years. In the case A is your home country, doing a phd in a foreign country and intending to come back makes you eligible for these grants, unlike the case with doing a phd in your country.
Of course, two other obvious points are (1) the availability of PhD positions and the level of competition for them (if any), and (2) the salary (or stipend), which can be very different even across Europe.
Upvotes: 2
| 2021/06/17 | 1,986 | 8,474 |
<issue_start>username_0: After finishing my master’s, I applied for a couple of positions, and the first interview I was invited to was at a renowned research institution in my field. I was living in our capital at that time and the research institute is in a small city quite far from other major cities.
The head of the institute was present at the interview, and at the end of it we had a little "personal" talk (while the two other colleagues who conducted the interview along with him were still there). He asked me how living in the capital was, and added: "Luckily, in our little city, there are not as many migrants living here as in the bigger cities." From context, it was very clear to me that this was directed not only against migrants in general, but Muslim migrants in particular. I was quite shocked to hear such blatant racism from the head of a research institute and was very tempted to say something, but in the end, I pretended I had not heard it and only answered his initial question.
I was mad at myself later for not speaking up. But I was afraid, at the time, that doing so might hurt my career (I am quite specialized, and in our field many people know each other), because one does not simply accuse the head of a research institution of racism (especially as a lowly student fresh out of university).
So my question is: **how can racist (or any other offensive) comments from a supervisor / professor or otherwise superior be addressed?**
Because I really think ignoring such things is very wrong (even though I did it myself).
Just as added information: although they offered me the position, I declined.<issue_comment>username_1: This will be difficult to address formally, in a way that’s gonna have any real consequences for the supervisor if that is what you want.
The main reason is that you have no proof that this actually happened (I assume). If you try to approach anyone with this, the supervisor will probably deny it/say their comments were misheard/taken out of context. In addition, the severity of the offense is relatively low on the scale of infractions towards students so it’s unlikely that anyone will formally pursue this if you complain, unless there’s a pattern of past issues you’re not aware of.
Your best option in my opinion is to send a short and polite email to the supervisor, thanking them for the opportunity and explaining why you ultimately declined the offer. Explain why the comments were hurtful, and express your hope that they were indeed a result of them misspeaking. If the institution has some dean of student affairs/diversity, you may consider CCing them, but do be careful about how you phrase it.
Good luck!
Upvotes: 1 <issue_comment>username_2: My solution wouldn't be to attack the person in any way, neither in real time nor later. I would probably be inclined to say (US perspective) "Actually, I rather enjoy the multicultural environment that the city provides".
But if you need to accept any offer from such a place then you would be safest to focus on the work and the opportunities. Long term, you can work toward a world in which such comments are less acceptable and accepted.
And, you will find that same attitude nearly everywhere, even if unspoken. You have just gotten information that the person is bigoted and can avoid them as much as possible. Fighting from a position in which you have no power and no known allies is almost sure to result in a loss. Hold fire until it can be effective.
Upvotes: 7 [selected_answer]<issue_comment>username_3: I guess you were a prospective PhD student. Students, or prospective students, are not responsible for the behavior of professors. You *could* do something about it, but it is not obligatory.
Upvotes: 3 <issue_comment>username_4: You seem to be taking a fairly dichotomous view of this situation where you have to decide whether to “call out racism” or just sit there and do nothing. In my view, this false dichotomy comes from your underlying false premise, rooted in an overly broad conceptual view of what constitutes racism. You are also denying yourself and the speaker the opportunity to dig down into the underlying reasoning on the issue.
Let’s start by taking out the power dynamics here, and just pretend that you are talking about a remark by a speaker who does not have any particular standing in your profession. If you are really uncomfortable with a remark expressing misgivings about migrants, one useful approach would be to (politely) state your discomfort/suspicion with the remark and inquire into the reasons the speaker has for holding those views. If you frame this in a polite and measured way, it should not provoke conflict. For example, you could say, “When I hear remarks like that, I get a bit suspicious of the underlying reasoning, and I must say that it makes me quite uncomfortable. Would you like to give more detail on why you have misgivings about high concentrations of migrants?” Something like this expresses your discomfort with the remark without assuming it is motivated by racial animus, and it gives the speaker an opportunity to give details of their reasoning or rethink their position. If you can approach this with sincerity, confidence, and a measured and polite demeanour, it is likely to provoke some genuine consideration from the speaker (and if they *were* motivated by racial animus, they might be embarrassed about this and start back-tracking).
Now, let’s add the complication of the fact that you are talking to a high-level researcher in your field who is the head of an institute where you are applying for a job. In that case, provoking conflict probably has adverse professional consequences, so you are going to have to tread even more carefully. I may be naïve here, but I think that with most people you could still register your discomfort in a polite and measured way (e.g., like the statement above) and they would not hold this against you, even if they maintain disagreement with your position. Indeed, it is possible a person might be impressed by your willingness to disagree with an authority figure in a polite and measured way; they know they will not be getting a “Yes-Man”. (Like I said, maybe I am naïve and underestimating the probability of an adverse reaction; I guess I see the good in people!)
I hope that is helpful in expanding your ability to deal with this situation.
Upvotes: 6 <issue_comment>username_5: An accusation of racism is serious and should not be made without evidence. As the question stands, it is not clear to me the supervisor was being racist, and this needs to be clarified by directly asking the person in question before you make accusations.
"Migrants" are not a race and though Muslim may refer to broadly Middle Eastern people, there exist significant Muslim populations in Africa and South Asia. Your supervisor may be implicitly referring to one race of people who make up the Muslim migrants of the area, but he is not singling them out by their race, making concrete evidence of racism harder to provide, as he can claim you were misinterpreting him or taking his words out of context.
Your supervisor may be opposed to Muslim migrants on a theological basis, having nothing to do with race. In this case, by making an accusation of racism, you put your supervisor in an uncomfortable and defensive position that completely misinterprets his viewpoints, and engaging in a theological debate with a supervisor is likely not a good way to build a relationship (unless both parties desire to engage in debate). Similarly, your supervisor may oppose migrants from an economic standpoint, being opposed to migrants for example because they compete for local jobs or make use of more public resources (increasing the tax burden). This is due to their nature of being migrants and is independent of race, so again an accusation of racism is misplaced. Your supervisor may have multiple factors influencing his view, one of which may be racism, but it is not clear from the statement alone.
Upvotes: -1 <issue_comment>username_6: In that situation, I think the most diplomatic thing to do would be to say "Oh? Interesting" and change the subject. You ended up not taking the position anyway, but you could have done some "detective work" to see what other people at that institute think. For example, you could ask them general questions like, "Do people get along there? Is the atmosphere supportive?" Things like that.
Upvotes: 1
|
2021/06/17
| 833
| 3,508
|
<issue_start>username_0: I am about to enter the 3rd year of college. I am pursuing a physics major, and I want to do some kind of project work under a professor. The problem is, we have been taught only the basics of quantum mechanics and statistical mechanics, so I really don't know which professor to approach. I also want to read research papers to understand some field, but the problem is I don't know where to start.<issue_comment>username_1: I am doing my doctoral degree, and I work a lot with undergraduate students. I think the best way is to get in touch with the PhD students and ask them if they need help.
I teach my undergraduate students the material my dissertation is about, and they take measurements and do little projects for me. This is, of course, done in agreement with the professor. But I have learned that a PhD candidate is a good contact for getting in touch with the research area you are interested in.
Upvotes: 1 <issue_comment>username_2: I hope there is some professor of Physics that recognizes and respects you. Even better if you have spoken with them in the past. But it doesn't really matter what their specialty is at this stage. Talk to them about what you'd like to do and ask if they, or a colleague they can recommend, would guide you in some project.
If they don't feel able to do it themself, then it would be good if they would interface for you with another faculty member, giving some informal recommendation.
But, since you are just getting started, your research/reading need not be at an exceptionally high level. You need the basics, as you recognize, to get going. Deep study can come later.
Upvotes: 1 <issue_comment>username_3: I am a professor of physics in the United States, and often get asked this question. If you are looking for research experience in your department, you should write up a short document that includes
* Your contact information!
* A list of your skills - programming, soldering, 3D-printing, whatever you bring to the table.
* A statement of what sort of research you are looking for: theory or experiment, how many hours a week, and whether or not you need to be paid for the work.
* A list of your classes (you don't have to list grades or GPA).
As an instructor, if a student gave me such a document I would then shop them around to faculty who I knew worked well with undergraduates. If you don't have a faculty contact you can send individual emails, or put paper (!) copies in the mailboxes of faculty members. Talk to seniors and graduate students to find leads on who is a good mentor, and make an appointment to talk to that professor about research.
Second, in the US, look for Research Experience for Undergraduates (REU) programs across the country. These summer programs take in students who have finished their Sophomore or Junior year and pay them to do research on-site. The [National Science Foundation](https://www.nsf.gov/crssprgm/reu/) runs a bunch of them, as do other agencies.
Finally, in my experience professors understand that most undergraduates are not ready to step into state-of-the-art research, but they can still contribute substantively to the research effort. We often view working with undergraduates as part of our broader effort of teaching. Some granting agencies view including undergraduates in research as an important broader impact of the projects they choose to fund. In addition, many undergraduates *can* do publishable research, and I've published with some.
Good luck!
Upvotes: 3 [selected_answer]
|
2021/06/17
| 3,072
| 12,992
|
<issue_start>username_0: I am a second-year PhD student, but I think this question may interest a broader readership.
I'd like to ask experienced researchers for some insights on how to navigate a prolonged stressful period.
For "prolonged stressful period", I intend a timeframe lasting for at least a year in which balancing work demands and personal life becomes challenging.
To give you my case: I'm approaching the last year of my PhD. I have had to relocate away from my family and friends, work on my dissertation, contribute to several research projects, collect data, attend meetings, and so on. Furthermore, I have a demanding advisor who, despite their sometimes overcritical manner and being almost a workaholic, cares about me becoming a better researcher.
As you can imagine, I cannot "say no" to many things. Thus, my personal life tends to be frequently jeopardized as workdays blur into weekends. I often have to work at least ten hours a day, and there are times when at the end of my workday (around 09:00 AM to 08:00 PM), I receive new assignments, sometimes due in a very short time.
I'd like to receive grounded, honest advice on how to deal with these upcoming months.
I acknowledge that these stressful times can be an excellent opportunity for me to become a better academic, the job I'd like to pursue in the future.
However, I am afraid because I haven't figured out the mental and emotional attitude I should adopt to navigate all these challenges.
So, what is your honest, straightforward advice? How do you deal with a prolonged stressful period?<issue_comment>username_1: You know your problem. You stated it yourself.
>
> I cannot "say no" to many things.
>
>
>
And as a result, your employer is asking you to work more, to the point where you're not getting time to "recharge". Along this path lies burnout. When that happens, expect not to work for a period of somewhere between four months and two years.
Obviously you probably don't want to hit that milestone, so take action now.
You must be planning out your time. If you are not, that's the first thing to do. Many people plan their time more effectively using a system. I personally think the Franklin Planner was an excellent system, but it is a pen-and-paper system, and a lot of people recoil in horror at not using a computer.
Basically, you have a running list of things to do, and a calendar day broken into 15 minute intervals. Every day you add items to the "to do list" and every day you schedule about 15 to 30 minutes to "plan your day".
This plan involves going down the list and marking each item: "I" for important, "U" for urgent, "IU" for both. Then you take the "IU" items, estimate how much time they take, and schedule them into your remaining time. Once all "IU" items are scheduled, move to the "I" items, and then fill the remainder of the day with "U" items. The problem is that we often fail to do the important stuff because the urgent stuff takes priority, which leads us to work in ways that never make real progress.
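As a rough illustration only (a hypothetical sketch, not tied to the Franklin Planner or any other product), the triage described above can be expressed as a simple ordering rule: "IU" items come first, then "I" items, then "U" items, with untagged items last so they can be dropped if the day fills up.

```python
# A minimal sketch of the I/U triage described above.
# Each task is a (name, important, urgent) tuple; names here are invented examples.
def prioritize(tasks):
    """Return task names ordered IU -> I -> U -> neither (stable within each group)."""
    def rank(task):
        _, important, urgent = task
        if important and urgent:
            return 0  # "IU": schedule these first
        if important:
            return 1  # "I": before merely urgent items
        if urgent:
            return 2  # "U": fill the remaining time
        return 3      # neither: candidates to drop from the planner
    return [name for name, *_ in sorted(tasks, key=rank)]
```

For example, `prioritize([("grant deadline", True, True), ("email", False, True), ("thesis chapter", True, False), ("tidy desk", False, False)])` puts the deadline first and the desk-tidying last, matching the ordering the answer recommends.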
Now, I've explained an entire system and have sort of skipped the answer you need. That's because this system is itself an "I" (important) item. I'll explain why.
You need time off to recharge. You haven't considered what this time off looks like, so you need to improve in planning it. It has been neglected for a long time, so it is clearly not urgent; but, it is starting to affect your desire to work so it is clearly important. You need to decide how much time off you require, and you need to schedule it just before you start allocating time to urgent items.
If you plan this, you can maximize its impact by focusing on the kinds of activities you require. Everyone relaxes in different ways. If you feel you are near a breaking point, you might even consider your time off important and urgent ("IU"), which means it will be the first thing you plan.
Aligning your behavior with your needs is the key to success. Urgent items often distract us from the important items, and it leads to a long term feeling of always being behind. You see, we can't forget the important items.
You'll notice that some of the items are not marked I or U. They are the non-important, non-urgent items. I highly recommend that if you can't fit them into the day, don't carry them forward to the next day. Drop them from your planner. Life is too short to fill your day up with non-important, non-urgent items. And please, stop considering "fun" to be non-important.
Good luck, it takes practice and discipline; but, it's a single approach to planning a more balanced life that works for many. Perhaps it won't work for you, but the practice of attempting to plan a balanced life will have benefits in any future approaches you attempt, so give it a try.
Upvotes: 2 <issue_comment>username_2: Going back to student days, I spent 45 years in Academia, including 25 as a professor. I've lived the challenges you describe. Some years back I took early retirement and these days among other things work as a coach helping professionals, including academics and students, deal with the sorts of issues you are describing.
There is no simple fix to the problem. The stress and anxiety that you are experiencing are probably mostly habits on your part. That is not meant as criticism; they are habits that you have been taught both explicitly and by example. (This is not psychobabble. There is a lot of recent psychological, behavioral, and neuroscientific research pointing to the importance of habit loops in all of our behaviors, including things like chronic stress and anxiety.) As above, there are no quick fixes. Dealing with stress means working to understand your own habits of behavior and mind and what triggers them. With that understanding in hand, you can then work on changing those habits. Good short term news is that simply seeing stress for what it is can provide some relief.
There are a few clear physiological effects that can be addressed straight away. The two big ones are exercise and sleep. Get in some amount of intense exercise a day, even if it's just 10 or 15 minutes of intervals on an exercise bike. Also take frequent breaks during the day when you get up and move around for a couple of minutes. Then set a sleep schedule and stick to it. BTW, the sleep is not wasted time. There is good research showing that when tackling conceptually difficult problems, and even memorization, those who "sleep on it" accomplish their goals faster and more effectively than those who just push on through. They also just feel better, which also adds to productivity.
If you are posting here, you are likely the kind of person who would appreciate a deeper insight into this stuff. Our experiences and emotions do not happen "to" us. They are constructed "by" us, and reflect the concepts that we hold about the world and ourselves. This rabbit hole goes extremely deep, into everything from evolutionary psychology to the very counterintuitive functioning of memory, to the role of the cerebellum in high order thought, to the neuroscience of Bayesian predictions by the brain. But you don't have to dig all the way into that to get the benefits.
A few people I would track down who discuss just how fundamental this is to we humans include <NAME> (author of "How Emotions are Made"), <NAME> (his TED talk "Your Brain Hallucinates Your Conscious Reality" is a delight), <NAME> (author of "The Power of Habit"), and <NAME> (whose company among other things offers good even if cheesy app-based support for changing habits including anxiety).
Social context matters. If your group spends a great deal of time bemoaning and reinforcing counterproductive perceptions and patterns of thought it will make them worse. Bitch sessions, among students, faculty or anyone else are likely to do more harm than good. On the other hand it can be extremely difficult (and typically impossible) to really tackle the problem without social connections that support you, explore the issues, and provide accountability. We are social primates and are not wired to deal with this stuff by ourselves.
A key attitudinal change is to place your own welfare at the top of your list of priorities. The tyranny of the urgent will pass if you let it. You are in this for the long haul.
On a very pragmatic front, time management skills are critical and often lacking. I recommend <NAME>'s "Getting Things Done." I'm not big on most self-help books, and Allen can certainly be a bit "rah rah" for my taste. But the organizational strategy/workflow he develops (Capture, Process, Organize, Plan, Do) is widely recognized for its effectiveness.
Finally I will note that having open and candid conversations with a boss or advisor can be extremely useful. Chances are what you are saying will resonate; in all likelihood the person you report to has had or continues to have a lot of the same kinds of problems. When someone hangs up a sign that reads, "The beatings will continue until morale improves" is probably talking to themselves as much as anyone else. That is also the person who can directly affect the conditions under which you work and help you better understand how you might be misinterpreting expectations.
Do be aware, though, that advice on stress management from someone with "Professor" in front of their name should be viewed with some skepticism. Given that there is an epidemic of burnout among faculty, be careful of the blind leading the blind.
Time spent as a student is an excellent (and usually relatively low stakes) opportunity to develop habits and attitudes that will serve you well throughout your life and career.
Upvotes: 2 <issue_comment>username_3: >
> As you can imagine, I cannot "say no" to many things. Thus, my personal life tends to be frequently jeopardized as workdays blur into weekends. I often have to work at least ten hours a day, and there are times when at the end of my workday (around 09:00 AM to 08:00 PM), I receive new assignments, sometimes due in a very short time.
>
>
>
That is far beyond the normal work expectations for a PhD candidature as set out in most university policies. Check your university PhD policy, but usually they will specify an expectation that full-time students commit roughly 36-40 hours per week to their program (i.e., roughly commensurate with a full-time job). Programs also have leave entitlements that are roughly commensurate with a full-time job.
In your circumstance, I would recommend you have a talk with your supervisor and let him/her know that you appreciate that they are trying to make you a better researcher, but that you are finding the work hours for your program to be excessive. Pass on the information you have given us about the hours you are working, and seek to negotiate work expectations that fit within standard full-time hours. Ask your supervisor about your progress, and seek advice on whether you are progressing well enough to complete if you drop back to standard full-time hours. Unless you are behind in your program, it ought to be possible to negotiate reasonable work hours and still make adequate progress on your program. Your supervisor may need to be reminded of the expected hours in university policy (assuming this exists at your university) and encouraged to set work expectations accordingly.
Also, **don't be afraid to apply for and take leave**. I see some research students who grind through stresses for months or years on end without even taking their leave entitlements in their program. Most PhD programs give you four weeks of recreation leave per year (or whatever is equivalent to a full-time job in your country) and when you are on leave you should not be expected to be doing any work on your program. If you have leave entitlements accrued, I recommend you apply to take some of this leave. Make sure you hold the line and do not accept work while you are on leave; also, do not accept a higher workload when you return because "things piled up while you were on leave".
Most supervisors are reasonable people, but it takes practice for us to allocate the right amount of work to students (especially since research students vary widely in their abilities). Sometimes we overestimate what you can get done in a period of time, and you need to tell us if you are struggling. Most supervisors will want to build up a program where they are allocating the right amount of work to progress the candidature properly, but still within the expected work hours for the program. Talk to your supervisor and negotiate a good balance.
It sounds to me like work is being put on your shoulders precisely because you have not learned to say no to things. Learn to work within reasonable hours commensurate with the proper expectations for your program, take periods of leave, and say no to unreasonable allocations of work that go far beyond the program hours.
Upvotes: 0
|
2021/06/18
| 1,946
| 8,993
|
<issue_start>username_0: I do not know if this is one of the unpleasant consequences of online classes. Here is a typical scenario that plays out:
1. Students take an exam.
2. They complain, for multiple-choice or fill-in-the-blank questions, that they are not given partial credit for thinking through the problem and are rewarded only for the final answer, which could be wrong even though the steps leading to it were partially correct. Also, cheating in the exams is much more common than usual due to the lack of easy proctoring mechanisms, despite the use of the latest online proctoring tools, which seem easy to game.
3. If they are asked to scan and upload their answers as a PDF file along with their working, there is a never-ending stream of requests after grading to have their answers re-evaluated because they made some different assumption or because their answers are "partially right". I teach a math-oriented course where there is mostly only one right answer and questions are usually not susceptible to being misinterpreted. Yet, after sharing the grading key, students put the burden on the teaching assistant to consider their answers again and, if possible, award partial credit. Many of these turn out to be frivolous requests, yet it seems impossible to stop them. Everyone involved in the grading process (the TA and I, the instructor) has limited time, especially for a large class. The fact that I am not physically meeting my students seems to have somehow encouraged them to keep pushing in pursuit of a better grade.
What are some techniques to stop this unhealthy habit? The TA and I spend sufficient time and effort to ensure that we are consistent in our grading across the entire class, but beyond a stage it is impossible to fine tune our grading to differentiate between different shades of wrong answers.<issue_comment>username_1: My suspicion is that most of these are problems with your exam design rather than having anything inherently to do with "online teaching".
* You can have multiple-choice in an online exam or in a normal hall exam. In both cases you will get the (justified) complaint that partial solutions aren't worth anything. This complaint is inherent to having MC questions, and the only way to address it is to not use MC questions. Conversely, an online exam can have regular open questions just like a normal hall exam. If you choose to have MC questions instead because of the obvious grading-related advantages you'll need to live with the consequences.
* You say that open questions lead to "a constant stream" of requests for re-evaluation (because the students claim they misunderstood the question). Again, I don't see how this is specific to online exams - if the same questions would be clear to the students in a hall exam, why are they not sufficiently clear in an online exam? And if they are not clear, why did the students not ask for clarifications during the exam (I am assuming there is a low-barrier way to ask for clarifications from the teachers in real-time, right?)?
* I do agree from my experience that the online setting somehow increases the amount of frivolous requests one receives, but there really is no mandate for a teacher to react much to them. If students keep sending you updates of their exam after the examination period is over, stop accepting such updates (you presumably also wouldn't accept late updates to a hall exam). If you would normally not let "but I misunderstood the question" stand as a valid argument, then use the same reasoning also in an online exam. And if you get substantially more such requests than normally, I would take a good hard look at the exam and wonder if they *are* just less clear than the exams you normally do (and then the obvious fix is to improve the clarity of your exams).
Ultimately, my impression is that you (maybe subconsciously) may have used the transition to online exams to make detrimental changes to your exam design (e.g., adopting MC questions rather than open questions, potentially providing less support to students during the exam, etc.), and your problems stem from this rather than from the medium. Doing exams online should not be an excuse to reduce the time that you and the grading team invests, otherwise you will run into issues (that aren't inherently the fault of the online setting).
Upvotes: 3 <issue_comment>username_2: I agree with username_1 that exam design is the first thing to consider. You need to be justifiably confident that your exam questions are appropriate, sufficiently clear, etc. If your institution doesn't have a systematic process for others to provide feedback on your exam design, you may want to informally approach colleagues to get their opinions on your exams.
Making an exam completely multiple choice can easily lead a student with a good overall understanding of the topic to fail. A mix of multiple choice and open questions can be a decent balance between marking load and fair assessment. In any case, if you are certain that multiple choice is the appropriate format for some questions, then students may grumble a bit, but this should not cause any additional workload for you.
When it comes to marking open questions, it is again a matter of being justifiably confident in your decisions. You should consider appeals based on the grounds that you have made a mistake, but not on the grounds that you made a judgement call that could have gone another way. If your questions are clear to anyone knowledgeable on the topic, then "I interpreted it differently" isn't a cause for reconsidering marks either.
Be very clear about your new appeals policy to the students. Let them know that you will only revisit an exam script if they explain how exactly they think the original mark was based on a mistake by the marker, and why. Stick to the new rules. Students will quickly see that they need to put in some effort now to get their exams looked at again, and that it only pays off in the rare cases where there actually is a mistake by the marker.
Upvotes: 1 <issue_comment>username_3: It is perfectly legitimate to give multiple-choice questions in an exam, and the standard marking approach for these has always been a strict right/wrong outcome where there is no partial credit for working. Usually the strictness of this approach is ameliorated by the fact that there are several multiple-choice questions and they are usually low-mark questions, allowing student marks to average out well in the long-run (i.e., students who have good partial knowledge tend to get more right answers over the long run than students who have no idea what they are doing).
I agree with the other commentators here that there is nothing inherently important about this being an online exam as opposed to an in-person exam. In both cases you should simply hold the line and tell students that there is no partial credit for multiple-choice questions. This is something that has applied to generations of students, and we have all learned to live with the occasional frustration of getting no marks on a question where we had a partial understanding of the material.
There are a couple of practices that can be useful in reducing marking challenges. One useful practice is to give a general feedback session on the exam outcome prior to accepting any specific queries/challenges from students. You can then set out a general explanation of the marking and expectations about what kinds of queries are reasonable. (I do this in a lot of my courses, and I have not had problems with excessive challenges to marking after these sessions.) Another possibility here is that your grading key might be too specific, which might lead students to believe that they can evaluate the appropriate mark for an answer better than the TA. Sometimes less specific information on grading can be a benefit here, since there is an element of professional judgment in awarding partial marks for questions.
When I have a TA marking assessments in my courses, the main thing I am concerned about is that they are *consistent* in their grading standards for all students. If the TA turns out to be overly lenient or overly demanding across the board (relative to how I would have marked it), that is not a big problem for me. Students should be made to understand that there is variation in professional discretion in the awarding of partial marks, and that you have confidence in the TA to do this in a reasonable way. In the rare case where the TA's marking is beyond the bounds of what is reasonable (e.g., much too harsh for the year level), or if there is some other problem where all students were dealt with unreasonably harshly, you can usually compensate for this by scaling the marks for the assessment item for the whole class; a better approach than trying to re-grade for individual complainants.
Upvotes: 2
|
2021/06/18
| 803
| 3,327
|
<issue_start>username_0: So I am currently a rising senior at a college in the US (which I will call "UNI"), and preparing to apply to grad schools. Furthermore, the college I currently go to (UNI) also happens to be my dream school for graduate school.
I have been working at a lab here at "UNI" as an undergraduate researcher in the field I want to go to graduate school for. I find the research topic very compelling, and as far as I know, the lab which I am working at is one of the only labs in the US focusing on this specific topic.
I would love to continue to contribute to research in this field for graduate school, but I'm not sure if I should be mentioning this in my application to "UNI".
I'm mainly concerned that it will be frowned upon by the admission committee because I'm leveraging connections (to the lab here at "UNI") which other applicants NOT going to this school won't have access to.
Furthermore, as far as I am aware the professors don't have any say in which applicants are accepted, so the professor in charge of the lab won't be able to put in a word for me either.
What would you all recommend I do? Should I emphasise this point in my application (i.e., that I found a lab here which researches a pretty niche topic that I am very interested in, and that I have already done some undergraduate research there), and if so, to what extent? Should I mention it in passing or really emphasise it as the main focus of my application?
Thank you in advance!<issue_comment>username_1: My advice is that you should certainly mention it. I suspect that if you apply to some other university you would certainly want to mention your undergraduate research. You should do just the same for admission to your own graduate program. It would be expected, I think.
If this is for doctoral study in the US, note that professors do, in fact, get involved with admissions. Students are admitted or not on the advice of a committee that is largely (or wholly) faculty. Not everyone is involved in any given year, of course.
Other students at other universities who are applying to yours will certainly mention any research experience. You aren't taking advantage.
Your letters of recommendation might have extra weight since the committee members know the writer, but this also happens generally when faculty have wide circles of contact/collaboration.
Just. Do. It.
---
Admission to master's programs may be more pro forma, with faculty less involved, but the same advice applies here. Someone needs to finalize the acceptance based on what is in the written materials, as interviews are much less likely.
Upvotes: 2 <issue_comment>username_2: I am not in the US, but I don't think that mentioning this will be "frowned upon" as if you had an unfair advantage over the other candidates. You are not doing anything wrong or illegal by doing your undergrad research there: I actually think it is an advantage, as it shows you're motivated by the subject. If I were part of the committee, I would see that as a very good thing.
Regarding the emphasis, I don't know about the other things you have on your CV, so I can't tell what is more or less important. I would advise you to mention it as an important experience that qualifies you for the postgrad program - be proud of it!
Upvotes: 2
|
2021/06/18
| 654
| 2,745
|
<issue_start>username_0: I am going to apply for a graduate program that requires background in subjects A, B, and C. I have not studied these at my university.
I taught myself these subjects. The application process starts next January, so I have no time to audit these courses or get a recommendation letter from a professor, etc. How can I include this in my CV? Is it meaningful?
I have a YouTube channel where I solve online final exams and homework problems for these courses from other universities. Should I include the links on my CV? Or does it seem desperate?<issue_comment>username_1: Yes, you can include them, but it is unlikely that they will count for much unless you get a chance to answer questions about it in an interview. The problem is that there is no real independent verification of what you have learned.
You need to ask yourself whether you have really learned as much as if you had taken a course under the guidance of a professor, with evaluation of your work and feedback on what you have done. This is very hard to manage for self study.
If you have done research, with some output, based on your self study it would be good to focus on that as evidence that you know what you say you know.
But a section of the CV on self study, listing books you have studied from or online courses you have "taken" might be worth the effort.
Unlike a commenter here, I don't think that the SoP is a good place for such things. The SoP should be about your goals, not about your past. A very brief statement about how some online study has pushed you toward a goal would be fine, but spend the "words" of the SoP focused on the future.
But, if your undergraduate study is non standard then you want to write things in such a way as to get an interview in which you can be queried about what you really know. And, without letters of recommendation from professors it will be especially difficult.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Yes, it would be better to have some very-tangible evidence of your self-study, for example to have some faculty vouch for you... but, as we know, this is not always possible.
From a U.S. math perspective: simply list the textbooks (with authors!) you've read, the on-line notes you've read, etc., without much further comment.
From my viewpoint, as a long-time admissions committee person for a grad math program in the U.S. at an R1, I do realize that not everyone has the opportunity to take formal classes ... Whether they do or not, the interest and initiative-taking that self-study demonstrates is a big plus in my assessment of people. Yes, being outside of the conformist milieu means that people will need a bit of time to get in sync with the styles and conventions... but I don't care much about that, in fact.
Upvotes: 1
|
2021/06/19
| 3,739
| 15,739
|
<issue_start>username_0: So, this is an online test and some of my students have identical answers (including mistakes and typos!). My plan is to send an email to the class giving the cheaters a chance to come forward for a reduced penalty (scale down the grade based on the severity of the cheating). Otherwise, they will be reported to the university.
Is this an appropriate action? Any other/better suggestions?
**Update:** I reported the cheaters (~40% of the class). I believe it is the right thing to do, given the reasons in the answers below. It is a bit disappointing, though, that so many students cheated (including some whom I believed were good students).<issue_comment>username_1: Punishments for academic misconduct should be standardized across your university. Check the university policy and follow it. If you are still unsure, ask your academic dean.
Instructor discretion can lead to inadvertent or unconscious discrimination.
Upvotes: 8 [selected_answer]<issue_comment>username_2: The "cheaters please come forward now" scenario would require a lot of trust in the current (and possibly the *future*!) misconduct policy... maybe too much. And some cheaters *might* come forward, while others might decide to keep quiet. How would one handle that?
And report *what*? You cannot prove beyond doubt who was copying and who was authoring answers. And the author might have been cooperating, but might also still be completely unaware of the problem. Therefore I'd suggest this:
* a unique and correct solution is worth full marks.
* any identical submission appearing *N* times is worth 1/*N* each.
This addresses both sides: the author (lesson: "don't provide knowingly or accidentally a solution to others, it will lower your marks"), and the cheaters (lesson: "a copied solution isn't worth much").
If someone is unhappy with your suggestion, offer to forward the problem to the place which usually handles academic misconduct at your university.
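To make the 1/*N* rule above concrete, here is a minimal sketch (the student names and answers are hypothetical, and real submissions would need whitespace/format normalization before being compared for equality):

```python
from collections import Counter

def assign_marks(submissions):
    """Assign marks under the 1/N rule: a solution submitted
    identically by N students is worth 1/N for each of them.

    `submissions` maps student -> answer text (hypothetical input;
    real submissions would need normalization before comparison).
    """
    # Count how many times each distinct answer text was submitted.
    counts = Counter(submissions.values())
    # Each student's mark is the reciprocal of that count.
    return {student: 1.0 / counts[answer]
            for student, answer in submissions.items()}

marks = assign_marks({
    "alice": "x = 42",   # unique answer -> full marks
    "bob":   "x = 7",    # identical to carol's -> 1/2 each
    "carol": "x = 7",
})
```

A unique answer keeps its full value, while every copy dilutes the shared answer's worth, which is exactly the incentive described above.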
Upvotes: -1 <issue_comment>username_3: I'm not going to pretend to know the "right" ethical and practical solution to your situation. I've never experienced this as an instructor before somehow. But the way I see it is that you have two options:
* follow the ethics code exactly as written by the university; or
* attempt to resolve the issue "internally" (i.e., keep it separate from the official disciplinary processes). This involves punishing the transgressors equally and befitting the severity of the misconduct (such as giving everyone involved a 0 on the exam), and then making it abundantly clear to the class, in a formal statement in front of the entire student body, that cheating occurred in this specific way: if any student cheats again, they will face the possibility of an automatic failing grade for the class and possibly expulsion pursuant to the college code of conduct.
Obviously, the second one is trickier to do correctly, but more forgiving. So there's that trade-off. And in addition (this is going to sound disconcerting) it's probably safer for you to go with that possibility if you're tenured, in the event that something goes wrong with your internal solution (such as students claiming undue punishment, unfair or unequal treatment, or some kind of bias, even along racial lines or gender lines (I hate to bring these up, but accusations do happen, some of them true and some of them false)).
The key is to have incontrovertible proof that each person getting punished committed the "crime". And then punish them all identically. And finally explain exactly what happened to the entire class; the only thing that you need to omit is the names of the students who cheated.
---
As an aside, I want to tell a story of something that happened when I was a sophomore in a Proofs, Induction, Set Theory, and Arithmetic class. A student posted a question from a homework assignment onto math.stackexchange.com – or maybe it was simply Stack Overflow or Stack Exchange at the time – and received answers. Viable answers. I was not aware of it. Several students copied the solution. It was a course / problem in which dozens of unique solutions could be evaluated as sufficiently rigorous. As such, it was obvious to the TAs and professor who had copied the solution, at least to a certain approximation and confidence level.
The professor opened with a direct announcement the next lecture. He claimed he knew the full list of students who cheated (this may have been a little game theory and behavioral manipulation at work). His gambit was this, involving three possible outcomes:
* the cheaters could reveal themselves immediately and apologize to the class for compromising the integrity of the course, and receive the least punishment
* they could speak to the professor after class and receive a moderate punishment
* they could do nothing and receive the worst punishment
I think they did the 2nd of those choices. But I can't be sure. All I know is that they didn't openly admit to the class. I don't even think he expected them to; it was partly a psychological game to get people to approach him afterwards.
I'm not saying this is the best way to handle your situation, but it was a memorable circumstance.
Upvotes: 1 <issue_comment>username_4: Being a professor is hard work and requires one to develop expertise and make thoughtful decisions about many different issues. Fortunately, modern universities have taken one area — the handling of student misconduct — out of the hands of individual professors and created a way to treat it in a uniform way across the entire university (typically through a dedicated unit with a name such as Office of Student Misconduct). This creates obvious efficiencies and frees up professors’ time to handle the work that they are actually expert in and that only they can do.
By coming up with your own policy to punish cheating students, you will be:
1. Wasting your own time and mental energy on making decisions that others have spent more time thinking about, have more contextual information about, and are more competent to handle.
2. Running the risk that your policy will differ from the university’s policy, creating a source of unfairness and inconsistency.
3. Depriving the university of a record about the cheating students that might inform decision-making in the (near certain) event that some of them might be caught cheating again in the future by other professors.
These are the disadvantages of your approach. As for advantages, the only one I can think of is that the university-wide office for handling student misconduct is in some campuses seen by some professors as either inept, incompetent, needlessly strict, or needlessly lenient, and this creates a temptation for those professors to handle misconduct matters themselves — a kind of “vigilante justice”. I don’t know if this is your situation. But, if you don’t have information to suggest that your university will not handle the referral in a satisfactory manner, this argument doesn’t apply.
Upvotes: 5 <issue_comment>username_5: The correct course of action is to follow the university procedure.
1. You do NOT want to stray from the university procedure, as it would allow students to challenge any decision made in the case. Moreover, it could also expose you to administrative action on technical grounds should the university become aware that you did not properly apply the institutional policy.
2. It is not clear that it is for you to decide whether there is academic misconduct. Certainly here it's the job of the Dean or assistant Dean to verify and assess such allegations. You can accuse students of plagiarism, but you should not be the jury in this matter.
3. It is not clear that it is for you to decide the penalty. It may be that the students have prior offenses that you cannot know of because of confidentiality. If the allegations are upheld, someone else should hand down the sentence (albeit here there is consultation between the Dean and the instructor).
This is NEVER a pleasant situation, so tread carefully, keep all correspondence, and let the process follow its course: this will ensure greater fairness for all involved, and will give confidence to all that such situations are dealt uniformly and not in an instructor-dependent manner.
Upvotes: 4 <issue_comment>username_6: As the lead faculty member at my college for anti-online-cheating efforts, I think the OP's initial plan is too lenient, and secondarily too subjective.
OP's initial plan:
>
> My plan is to send an email to the class giving the cheaters a chance
> to come forward for a reduced penalty (scale down the grade based on
> the severity of the cheating). Otherwise, they will be reported to the
> university.
>
>
>
OP's motivation for this as per a comment:
>
> ... because this is the first offense (it is the first exam though),
> so I don't think jumping to reporting them is a good action. I may be
> wrong.
>
>
>
In line with other answers, I do think the OP should follow the standard institutional policy for academic integrity cases as soon as possible. In addition to the prior reasons, I would add this:
If these are university students, then it seems to me overwhelmingly likely that what's happened is a reflection of prior habits they've been following for... maybe 12+ years now? I'd say at this point it's naive to think this is truly "the first offense". What if these students are cheating on work in every single one of their college courses, and then pleading "first offense" or "didn't know" (very common, and should be disregarded as utterly unbelievable), and so are given this allowance continually throughout their program sequence?
My broad guess is that they've probably been given many "first offense" allowances over time, which have either been unpersuasive (or worse: taken as evidence that there is never any real penalty), and as a university instructor who cares about academic integrity (you've already spent the time to investigate this!), it's time to apply the putative penalty, so as to get the actual message across.
Moreover, as others have stated, the central Academic Integrity Officer is likely to maintain university-wide records, and decide or recommend increasing penalties for students who have had prior reports filed. (At my school, the available reports actually span 25 different campuses in our university system, in which transfers are common.) Having every instructor silo their own "first offense" process short-circuits that mechanism.
For these reasons, under the assumption that university students are expected to be previously aware of the rules (perhaps by reading syllabus information or other student materials, etc.), and also fairly long prior academic experience, I recommend assessing roughly the harshest penalty for cheating on tests possible. In my case, the default is a zero in any such case. I've found that assigning this as a "pending" null grade to the assignment makes students much more prompt about responding to cheating investigation inquiries (which otherwise go unreplied or "ghosted" in many cases). Perhaps more importantly, the OP's initial "scale down the grade based on the severity of the cheating" idea sounds vague and likely to result in bias or irregularities if they don't have a specific expected level of sanction decided in advance.
Also, I would generally avoid sending any course-wide messages out about the situation. That's because: (a) it's probably irrelevant, and possibly confusing, to the majority of students, (b) it fills the course mental space with negative chaff, and (c) it's probably a sign of instructor laziness, in that they couldn't bother to send a message to the specific students under investigation, which is the appropriate thing to do.
Upvotes: 4 <issue_comment>username_7: The key is that you're *assuming* this is a first offense. You don't know. Maybe they got caught cheating in every other on-line exam and they all get a verbal warning for a 1st offense, then again next semester. There should be a central academic dishonesty person who handles these things, and is good at it. They will know if it's a 1st offense.
Double-check you have a solid case (identical spelling errors is good), make notes, ask the students to come see you in a way that isn't too terrifying, figure out if everyone copied from Alice w/o her knowing, or whatever. Write it up with names and dates and the class name and what they said, and send it to the Academic Dishonesty person. Tell the students that for a 1st offense they will get yelled at, but nothing more. Mostly it gets written down (not on their diploma or anything like that, only for their time in college) so if it happens again they can't say "but I didn't know!" If they ask if it will show up for security clearance checks, it will, but it's no big deal. So as not to keep them in suspense, let them know it's your decision as far as grades go, decide ahead of time, and let them know if they confess (otherwise you have to wait. Most students eventually confess). Check their grades -- they may have been failing anyway, so it hardly matters. Let them know that if they retake the class with you, you won't take it personally (you're a busy person and don't have time to remember who did what last semester).
But don't believe me. See if you have an Academic Dishonesty person. They probably have a hand-out or something, or it's on the web page.
Upvotes: 0 <issue_comment>username_8: It may be a good idea to confront them with your findings and ask for their explanation. There just might be a reasonable one (I can't think of one, but...).
You do not give them a way out, but follow school procedures.
If they willingly cheated, they know also fully well, they have to face consequences.
They may be dumb, but not THAT dumb, being allowed to study at a uni.
You cannot cheat "a little", btw.
Upvotes: 0 <issue_comment>username_9: At many (most?) schools, faculty and students are expected to report *all* cases of suspected cheating they observe. You're expected to report everything because it's not your job to decide the case. That's the job of the academic conduct officer or perhaps an honor council. Your job is only to report what you see. If you later become aware that your suspicion was completely unfounded, many (most?) schools will allow you to retract your report with an explanation.
If you're faculty and reporting a student for suspected cheating, I don't think it's necessary to notify them *before* you submit your report. (Again, it's not your job to investigate or decide the case.) But I think it's good practice to notify the student that you have reported them and to provide a copy of the evidence you submitted (but redacted of any other students' names or identifying information, e.g., if the evidence is two identical exams) along with a link to a university academic policy page (if there is one) explaining the process by which the student's case will be decided.
Upvotes: 0 <issue_comment>username_10: As suggested by the answers here, you should report it! But wait: report what? First-degree cheating? What about second- and third-degree cheating as well? The recurrent mistake professors make is that they only catch the easy prey while overlooking the most wicked cheaters.
Cheating takes many forms: you could cheat by sitting and preparing the exam with a mate, or even in a group, while many others prepare the exam alone (which penalizes them). Yes, I consider that cheating in comparison with solitary students who work their hardest to prepare all on their own. In any case, you only have a suspicion, and that does not by itself constitute a "crime" by the "cheaters".
Upvotes: -1
|
2021/06/19
| 1,058
| 4,469
|
<issue_start>username_0: So here is the issue. In my masters project, I have worked on a problem and am hopelessly stuck on a mathematical equation that I cannot solve.
I am a non-math student who uses math as a tool. I have tried several ways that I could to solve the equation, but I have failed. I would like to approach a mathematician with the problem, but my guide is not highly accepting of the idea; he is generally not enthusiastic about asking for such help. How do I politely convince him to do so?
And how does one go about with such a situation? Do I simply find a math professor online and email them asking them to help? Instead of requesting my guide, can I directly email other professors asking them to help? Do people actually help or just ignore such emails?<issue_comment>username_1: I'll guess that the best help might come just by visiting the math department at your university and asking a faculty member for help. Or asking (say at the the department office) if there is a grad student that might be willing to help you.
It would be appropriate to give an acknowledgment in the thesis, or any resulting paper, to someone who helps. For some kinds of help it might actually be appropriate to pay for the help. But, as long as the driving ideas in the thesis/paper are yours, then co-authorship by another isn't really appropriate.
And make sure your advisor is comfortable with all attempts to seek outside help as long as you are a student.
Upvotes: 3 <issue_comment>username_2: First, there's nothing wrong with you asking for help. I find it strange that your master thesis mentor objected. Indeed, master thesis is intended to teach you how to do research, and collaboration (or simply talking to experts in other fields) is a crucial aspect of it nowadays. You are not supposed to re-invent the wheel!
Generally, many mathematicians are fond of explaining maths to others, and seeing a problem someone cannot solve, they often cannot resist a challenge! So, if you ask for help, you do have a chance. Much more so if you (a) ask someone you know in person and (b) ask someone working in the right domain. Alas, we are not generalists anymore, so don't expect an expert in category theory to know PDEs or to be willing to learn them for you! If you do shoot an e-mail to someone you don't know, it's hard to beat the [advice of <NAME>aronson.](https://www.scottaaronson.com/blog/?p=304#comment-8966)
Actually, if a question went unanswered on Math.StackExchange, it is considered acceptable to ask it on MathOverflow. By the look of it, though, MO welcomes questions that are more concrete and precise than yours currently is on Math.StackExchange. You should state the problem and what you are trying to achieve in full, refer to the "most similar" problems you know of in the literature, and explain what exactly goes wrong if you try to mimic their method. Since MO is supposed to be "like asking your colleague next office door", the same applies if you ask/email someone.
On the issue of collaboration, I would say that if you asked for help and received valuable input, then you should offer a co-authorship. After that, it depends. If the answer is in the literature and they just point it out to you, or if it is standard for the experts even if hard to pinpoint in the literature, then they are supposed to refuse, and you just mention them in the Acknowledgements section. If, on the other hand, they start thinking or doing computations for you, then it's a different matter. Of course, there are many intermediate possibilities, e. g., they just write the math part in a separate paper, and you cite it, or they write a separately authored appendix to your paper. But it's up to the person whom you are asking for help to decide.
Upvotes: 4 [selected_answer]<issue_comment>username_3: Ultimately, if you are unable to solve the problem yourself then you are going to have to ask for help, or even collaboration. If it is a large enough problem that you need a collaborator to do a chunk of the work for you (e.g., solving this mathematics problem), that should not necessarily be fatal to it still being counted as fulfilling your project requirements.
Start by seeking help from some mathematics people at your university, and if it is a big problem then you can raise the opportunity for collaboration and coauthorship. Your "guide" (supervisor?) should be able to give you some options here that make it possible for you to get a solution to the problem.
Upvotes: 2
|
2021/06/19
| 1,305
| 6,028
|
<issue_start>username_0: Hoping some scientist academics out there can provide insight into a question I've been ruminating on for a while.
I spent multiple years working on a large computational data set, and it was published in a journal this year. There were millions of genes and many interesting patterns in this project. I am now working on another large data set, collected from different organisms, and going back to my original code to optimize it and try out new methods for data exploration. In doing this, I discovered better ways of carrying out my original analysis. I wanted to understand how these modifications changed the biological outcomes in my already published study. In doing so, I discovered a pretty interesting gene that I missed the first time around, that was overall not very abundant in the scheme of things, but definitely responsive. It's part of a module that I discussed in a section of this paper. It doesn't change major conclusions, but if I could, I would rewrite a part of the paper to take this missing gene into account. The concern is that I will lead astray people who have a very specific interest in this gene's process.
My question is, how should one handle this? Do I correct the paper? If so, do I merely mention this gene I missed or re-do the entire analysis using an updated (and better) pipeline? I have not seen this done in practice, and the corrections/errata I see are due to technical and specific errors, not to add in extra information. It seems like a Pandora's box because I now know so much more than I did the first time I did the analysis, and could probably keep updating every time I find another new important gene that was previously missed or a better way of carrying out the analysis. Another option is to write a new paper expanding on this subsection.
I can't tell at what point I'm crossing the line from being a diligent scientist to obsessing over every detail. I know things like this must happen to other people, but I don't hear of them being discussed. Is anyone else concerned about how scientific methods (especially computational ones) and our own data analysis skills improve over time, and how this influences previous results? How do others deal with this in academia?<issue_comment>username_1: If I understood you correctly, what you are describing is really a new research result. You designed a new method, and using that new method on old data you discovered a new result. In retrospect you realize that you could have seen a hint of the new result with your old methods, but it was not obvious without your new way of approaching the problem.
A correction should generally be for an error that invalidates some result or conclusion. Here is a definition from the Physical Review style guide (this is a common journal from my field -- I realize this is isn't directly relevant for genetics, but I think the principle should be universal and it gives you an idea of what to look for in the journal you are interested in publishing in):
>
> Errata: The Errata section contains notices regarding errors or omissions in papers previously published. Besides standard Errata, other categories of documents may appear in this section. Each has bidirectional links between the original article and the document in the Errata section. The category of the corrective document is indicated in its title and in the link from the original article. The standard Erratum is a statement by the authors of the original paper that briefly describes the correction(s) and, where appropriate, any effects on the conclusions of the paper.
>
>
>
In this context I would interpret "omission" to mean "failing to mention an important piece of context" (eg: a reference, a crucial detail in the methods, a counterexample to a trend reported in the paper), and not new research on old data. There are several major disadvantages to publishing a correction, rather than a new paper: (a) most people probably won't notice the correction (far fewer people compared to a new paper), (b) the people who do notice will probably assume it was a *mistake* (which is neutral or negative), as opposed to what really happened, which is a new discovery enabled by improved methods (which is a clear positive).
If anything I think you should actually be excited about your result and want to tell people! I can think of two common approaches to this kind of situation:
1. If the result by itself is interesting enough, you could write a follow-up paper which describes the new result.
2. Otherwise, as part of a followup paper describing the new method applied to the new data set, you can have a section entitled something like "re-analysis of old data set" where you show how your new method enables a new insight.
In either case, I would suggest you describe both how you discovered the new insight with the new method, and how you could have seen hints of the result with the old methods (but it would have been harder). People like stories, and it sounds like this will tell the story of an interesting result and the power of your new method.
Upvotes: 2 <issue_comment>username_2: What you are describing does not sound like a correction to me --- it sounds more like something where a follow-up paper would be warranted. There are many cases where an initial paper is published with an analysis of data, and then later follow-up papers use different models, account for additional variables, etc. If you think that the new analysis you are proposing is better, and adds to knowledge on the topic, I would say that this is a good reason to write a short follow-up paper with your new analysis.
Upvotes: 4 [selected_answer]<issue_comment>username_3: When you find new information which adds a small amount to work that is already published, that new information should be published in a "comment" or "matters arising." Unfortunately, some journals do not publish such comments or matters arising.
If you use arXiv you can just update your preprint.
Upvotes: 2
|
2021/06/19
| 1,047
| 4,377
|
<issue_start>username_0: I am applying to a Christian University, and they have requested that in my CV I include my religious background. I have no issue with doing this, so I plan to simply make the appropriate additions to my existing CV. However, I'm not really sure what I should include. Obviously, only I know my religious beliefs and background, but I'm not sure if they are asking for things like church attendance, or a very brief statement of faith so any suggestions would be appreciated.<issue_comment>username_1: I suggest that you read the web site of the college to see what they say about themselves and their expectations. I know that some demand that faculty follow the same principles that the college associates itself with. In this case, if you don't adhere to their beliefs it isn't really worth applying.
Others are much more tolerant of the views of others. At the extreme it might even be that diversity in the views of faculty is welcomed. The college may have a clear "mission" but not demand that everyone view that in the same way.
But, you will learn a lot, especially in the extreme cases from their web site. Some are very clear about expectations.
If they seem to be less stringent in their demands, then it may not matter much what you say or whether you say anything at all. But, since they ask, I guess they have expectations.
Upvotes: 3 <issue_comment>username_2: I think they are asking about membership. If you are a member of a religious organization, add that as a line to your CV.
Example:
>
> Member, First Church of College Town, State, Country.
>
>
>
If you hold some sort of leadership role in the religious organization, such as Elder or Board Member, that might also be listed in the CV.
Edit: This assumes your field of research is not religion.
Upvotes: 3 <issue_comment>username_3: In addition to the answers given by others, let me suggest one additional tactic: **read some CVs of current faculty.**
People who have already been at the university for a while have clearly been successful in getting jobs there and figuring out the expectations of the institution. Some of them will have a web presence, and some of those folks will likely have a CV of their own posted. If those CVs have information about religious background, then looking at half a dozen or so will give you a good idea about the sort of things that people at the institution think "religious background" should look like on a CV.
In particular, I would suggest looking for the CVs of people who have been there for a range of about 3-10 years. If they've been there for at least a few years, they're unlikely to be a misfit who would be a bad model for you to use. If it's within the last decade, then it's more likely to reflect the current practices, as opposed to somebody hired long ago who may or may not have been hired under different institutional cultural expectations.
**Addendum:** after writing this, I noticed a comment by the OP that none of the current faculty include this material in their online CV. This makes the requirement all the stranger, and based on that I would recommend just straight up asking the level of depth that is desired, proposing a couple of potential levels that you'd readily know how to give, e.g., "list of churches I've attended" vs. "paragraph explanation of the role of faith in my life."
Upvotes: 5 [selected_answer]<issue_comment>username_4: They are asking about your social contact with Christianity: whether you were baptized, whether you have done some service in or for a church or related organization, and so on.
It also hints at what you believe.
The least useful case is having nothing to list. Being connected to another, or even a historically antithetical, denomination is probably still a positive.
Note that Christianity is valid even to those who do not believe in it; i.e., your human values are much more important than your denomination, and discriminating against people for being non-Christian is a no-go. They are probably trying to detect these values, and not (only) your denomination.
In your case I would say this: "baptized as (denomination), not very diligent but practicing observer and believer" or some similar. This would be a short part at the end of my CV (roughly in the place of the personal things/interests).
It is very unlikely that your application would be refused solely because of what you write here.
Upvotes: -1
|
2021/06/20
| 1,190
| 5,536
|
<issue_start>username_0: Whenever we start new research, we need to do a literature review first in order to both get acquainted with the context and the concepts related to our research as well as to situate our research within the existing knowledge.
The problem is that in order to find relevant literature, you need to express what you are looking for in meaningful search terms. The more efficient and robust you make your "search string" (aka the keywords you use) the better you will express your information need and find relevant references. If someone is looking for "text classification for categorising books", it is better to word it as "book genre identification".
Are there any techniques for finding the perfect wording for your initial literature search? Myself, for example, I try to find an initial paper that is relevant and then browse through its cited papers to see if there is any paper that describes what I am looking for in a better way.<issue_comment>username_1: You don't need "perfect wording for your initial literature search". Getting started on a new project is an iterative process. You think about the problem. You read some papers. You think some more. You find new places to look. You have ideas of your own and follow them up. Eventually you will discover things that are not yet known.
When you are pleased with your progress you do the literature review that's appropriate for the paper you are writing.
Upvotes: 4 <issue_comment>username_2: There is no one approach. Presumably, you know enough about the topic to find one seminal paper that's been heavily cited. You then look at the heavily cited papers that cited your first paper. You then go read the other important papers that those authors cite.
Long story short, pick an interesting paper, and search forwards and backward in time from that point.
Upvotes: 3 <issue_comment>username_3: As @EthanBolker said, a good literature search is an iterative process that evolves along with your research progress. The very notion of a "perfect search string" from the very beginning constrains you to think only in terms of what you knew when you started the project without adjusting for your continuous learning as you advance in the project.
I would further add that it is a widely mistaken concept that the primary purpose of the literature review of a scholarly article (which I call a "background literature review") is to summarize the related literature. Well, that is often accurate in practice, but in my opinion, that is not the most useful thing that it might do. **Rather, the primary purpose of a background literature review should be to clearly carve out the novel scholarly contribution of your article in relation to the literature.** That is, by the time you have completed the article, the background literature review should focus on clearly showing how the existing literature is related or similar to what you are doing, but how your article goes beyond the existing literature and makes novel scholarly contributions beyond what exists.
You cannot write such an effective background literature review at the beginning of your project when you yourself are not quite clear what is the extent of your contribution. Although it is certainly good to do an initial search before you start so that you have an idea of the research landscape, it is only by the time you are finished and you can properly appraise what you have done that you would be able to complete a background literature review that properly places your contribution in context of all relevant literature. So, from that perspective, there cannot be a meaningful notion of a "perfect search string" at the beginning of a research project.
I think what you have described is a good way to start: "I try to find an initial paper that is relevant and the browse through its cited papers to see if there is any paper that describes what I am looking for in a better way." In addition, you should probably use Google Scholar to do a forward citation search, that is, find articles that have cited your initial papers and see if something more recent is related. Then, after you have completed your project and have a good idea of what your contributions are, you can try to search on the keywords based on your contributions to find if anyone has already done anything similar. Then read such articles to clearly understand how your contributions are different so that you can then write a background literature review that clearly highlights the uniqueness of your contributions.
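When you do get to that keyword-search step, it can help to enumerate candidate search strings mechanically from synonym lists rather than trying them ad hoc. A toy Python sketch (the synonym groups below are made-up examples, not a prescribed method):

```python
from itertools import product

def candidate_queries(synonym_groups):
    """Build every search string that picks one synonym per concept.

    synonym_groups: a list of lists, one inner list per concept, e.g.
    [["book", "text"], ["genre identification", "classification"]].
    Returns plain strings you can paste into Google Scholar or a
    library database, one combination at a time.
    """
    return [" ".join(combo) for combo in product(*synonym_groups)]

# Two phrasings for each of two concepts -> 4 candidate queries
groups = [["book", "text"], ["genre identification", "classification"]]
for query in candidate_queries(groups):
    print(query)
```

Trying each generated query and noting which ones surface the seminal papers is just a systematic version of the iterative rewording described above.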
Upvotes: 2 <issue_comment>username_4: Presumably you will want to start a new type of research after:
1. You met someone that does some research that interests you
2. You read a paper that you liked and it sparked a new interest
3. You saw a talk about a subject that you liked
If 1., just ask that person. If 2., read the introduction of said paper carefully and list all the references cited there. Then go to those papers, read their introductions, and repeat this iteratively. After a couple of iterations you will see some papers appear all the time; those are the seminal papers. Read those, and then you have an idea what to search for.
If you are lucky, somebody has already written a review, so it's almost guaranteed that after reading a couple of recent introductions it will pop up there.
If 3. you can look for papers that they wrote and start from there. Perhaps you can even ask them to give a talk in your university (or ask a professor to invite them).
Upvotes: 1
|
2021/06/21
| 622
| 2,759
|
<issue_start>username_0: In science in the US, a large fraction of the available jobs are temporary, and typically scientists take on temporary jobs (usually called postdocs) earlier in their careers. Why is this the case? Anecdotally, most occupations have a smaller fraction of temporary positions.
I'm interested in both answers about the historical events leading up to the status quo and answers justifying the status quo as a desirable situation.<issue_comment>username_1: Many of these positions are "soft money" positions - that is, they depend on grants. Since grants are temporary, the jobs they fund are also somewhat temporary. That said, this doesn't tell the whole story since temporary positions typically have a shorter term (for example, a year) than the grants that fund them.
For post-docs, it's a bit different. Post-docs are officially "training" positions - they are meant to be a step in the development of a scientist, a stepping stone towards more independence. Both employers (that is, universities) and funding agencies often have limits on how many years someone can be considered a "post-doc" because the idea is that this training should be temporary.
You could argue that the grad student to post-doc to professor track follows the apprentice/journeyman/master structure in the trades (I'm not certain whether it was explicitly inherited/motivated from that system, though).
Comparing academic to industry jobs in the US, while there are certainly differences in the hiring schemes I'm not sure they're actually all that different. "Permanent" jobs come to an end all the time as employers go through cycles of growth and layoffs. In most (probably all?) US states it is far easier to end someone's employment than it is in other countries like the UK or Germany.
Upvotes: 3 <issue_comment>username_2: It's a myth that "science jobs are temporary." The reality is that all jobs are temporary. Employers go bankrupt and employees die, but more frequently employees find a better job and switch. In the US, many employers are able to terminate employment at any time, which is less secure than having a one year contract with a fixed end date.
Many science jobs have a fixed duration of one to two years because many employers have one to two years of funding available. If the funding available increases, the job duration may be increased too.
The current situation is desirable from the point of view of awarding grants: If the funding is temporary, you can decide not to renew it when the awardee performs poorly. From the point of view of scientific staff, longer employment contracts would be better.
Short contracts will continue to be offered to scientists as long as scientists continue to accept them.
Upvotes: 0
|
2021/06/21
| 796
| 3,396
|
<issue_start>username_0: I am a Masters student in math and thus new to academia and publishing in general. I am a coauthor on a number of recently submitted papers and had the (somewhat intrusive) thought, what happens to papers that get rejected? Of course, the simple answer is that they get corrected/improved based on referee reports and resubmitted somewhere else, but are there cases where papers simply never get published or stay in limbo forever? In these cases, what happens to the results/theorems/proofs they contain? (My question was also partially motivated by the fact that in one of our papers we cited a preprint from the 90s, which has been cited dozens of times but does not seem to have ever been published in a peer-reviewed journal.)
On the one hand, publishing in peer-reviewed journals is pretty difficult, and I'm sure papers are more often rejected than accepted. So there are bound to be papers that never end up making the cut. But on the other hand, looking at various researchers' academic websites, all of the papers in the "submitted" category seem to be pretty recent, suggesting that all of their submitted papers from before, say, three to four years ago ended up published. Or perhaps it is common to "silently" remove an in-limbo preprint from one's CV after a certain period of time?
I guess all of this is a long-winded way of asking: "What proportion of papers are *eventually* accepted?" Like I said, I'm in math, but answers regarding other fields would be interesting as well!<issue_comment>username_1: As long one tries hard enough long enough, there's always a journal out there that will publish a paper (assuming the paper isn't absolute nonsense). That's because there are *so many* journals out there it becomes statistically improbable that they will all reject. You could draw an analogy to university admissions. It's true that universities reject more than accept, yet it is very improbable that someone who writes 100 applications will be rejected from all of them unless they are woefully unprepared. In academic publishing, there's the added advantage of being able to amend the paper in response to the previous referee reports before submitting to a new journal.
That's not to say that every paper that's prepared eventually ends up published. It's possible the authors stop caring about the paper. These papers will indeed end up in limbo. The odds are their results won't be very important, because the authors after all stopped caring. One reasonably common scenario would be that the results in the paper have been decisively superseded by another paper that was published while the paper was in preparation. For example, the discovery of the Higgs Boson in 2012 rendered all papers that assumed the Higgs didn't exist obsolete. I am not aware of any examples myself, but I imagine these papers simply died and were forgotten.
Upvotes: 5 [selected_answer]<issue_comment>username_2: There are well-known papers in math which have never been published. One of the most famous such papers is [this](https://arxiv.org/abs/math/0406514). As far as I know that paper has never been rejected but the referee(s) requested revisions which have never been made.
Upvotes: 5 <issue_comment>username_3: Of course submit to another journal.
If all your papers are accepted by the first journal you try, then you are aiming too low.
Upvotes: 2
|
2021/06/21
| 1,394
| 5,485
|
<issue_start>username_0: I have quite a large table (45 columns x 15 rows). Columns include comparison groups and each row represents a measure. The only option I can think of is putting the table on multiple pages, but that makes it hard to compare group estimates that end up on different pages. What are good practices for such cases?
**Option 1: multi-page table?**
Table 1, Table 1 (continued 1), Table 1 (continued 2) ...
**What are other good options for formatting large tables?**<issue_comment>username_1: There is no need to reproduce your entire raw data in your thesis (or any other publication). Your publication should describe some abstract properties of your data, discuss your analysis, and present your results. As a rule, if a table spans more than two adjacent pages, it is too large. Ask yourself: What is the point I want to get across by presenting the data in the chosen format? What do I need to focus on to get this point across more efficiently and without distraction?
If you want to make your data accessible, which at some point could be a good idea, consider uploading it to a [research repository](https://academia.stackexchange.com/questions/987/data-publication-basics-where-why-how-and-when-should-i-publish-my-unpublis) or publishing it in an (online) appendix.
Don't forget to ask your supervisor. Whatever they say obviously overrides any advice you may get from strangers on the internet.
Upvotes: 5 <issue_comment>username_2: [@username_1's answer](https://academia.stackexchange.com/a/170211/18238) is absolutely on point; let me just add one more point.
Please highlight what the objective of the data is and what kind of difference it shows, and present that as a few key values rather than publishing the raw table.
In any case, this video about data visualization might help you get your point across. In a nutshell, each table has a story to tell, and that story lies in the relations between values rather than in the values themselves. Thus highlight the relation, not the raw data.
[Storytellingwithdata-googleTalks](https://www.youtube.com/watch?v=8EMW7io4rSI&ab_channel=TalksatGoogle)
Upvotes: 3 <issue_comment>username_3: [@username_1's answer](https://academia.stackexchange.com/questions/170207/large-table-formatting-ideas-in-dissertation/170211#170211) already mentioned getting your advisor's input, but before that, I would suggest checking your university's thesis format requirements. My university, for example, has guidelines specifically for multi-page tables (I had to know this for what ended up being a 15 page table in my masters thesis). Then, within whatever guidelines your university may or may not have on the matter, you can go to your advisor and get their input. You don't want to assume your advisor is aware of every little formatting requirement your university requires.
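If the thesis happens to be written in LaTeX, the "Table 1 (continued)" pattern from Option 1 in the question is what the `longtable` package automates, repeating the header on every page. A minimal sketch (the column layout, numbers, and caption are placeholders, not from the question):

```latex
\documentclass{report}
\usepackage{longtable}

\begin{document}
\begin{longtable}{lrrr}
  \caption{Estimates by comparison group.}\label{tab:estimates}\\
  \hline
  Measure & Group A & Group B & Group C \\
  \hline
  \endfirsthead
  % Header repeated on every continuation page
  \multicolumn{4}{l}{\tablename~\thetable{} (continued)}\\
  \hline
  Measure & Group A & Group B & Group C \\
  \hline
  \endhead
  \hline
  \multicolumn{4}{r}{\emph{continued on next page}}\\
  \endfoot
  \hline
  \endlastfoot
  Measure 1 & 0.12 & 0.15 & 0.11 \\
  Measure 2 & 0.34 & 0.31 & 0.36 \\
  % ... further rows ...
\end{longtable}
\end{document}
```

Whether your university's template allows `longtable` (or prescribes its own continued-table mechanism) is exactly the kind of thing to check in the format requirements first.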
Upvotes: 2 <issue_comment>username_4: [@username_1's answer](https://academia.stackexchange.com/questions/170207/large-table-formatting-ideas-in-dissertation/170211#170211) has a great point about whether or not you need to show all your data, but you might want to think about whether you need to show all the data in a single table. If you have multiple points to make it might be more useful to break the one giant table into multiple smaller tables that each have a single focus.
Upvotes: 0 <issue_comment>username_5: I assume that you seek to include a large table in a *print* thesis, that you live in a country with access to paper in the ISO A series (if unfamiliar with this definition, see [here](https://en.wikipedia.org/wiki/Paper_size)), and that you already know which data really need to stay (ask a colleague or your supervisor).
*Do not* shrink the font size to squeeze more content onto an already jammed page. Instead, consider printing the large table on a sheet of ISO A3 (landscape orientation), folding it, and binding it into the rest of your thesis, which is printed on ISO A4 (portrait orientation). This is much easier (than with standard US paper formats)\* because within the ISO A series, the long side of a smaller paper format is as long as the short side of the next larger paper format.
You do not need to fill the whole A3 page in the horizontal direction. It is actually better to use the same font size on the larger page as in the rest of your thesis. For the potential trimming of «the table page» to a smaller size (then neither ISO A3 nor ISO A4) and for the folding, reach out for help to the staff of a good printer's shop, who will bind it with all the other pages into your thesis for you.
The same technique is found in business reports that compare *selected key figures* of debit and credit for the reporting year side-by-side with the numbers of the preceding year. On occasion, architects, engineers, etc. use it to include their plans and drawings. For illustration, see for example [this](https://www.youtube.com/watch?v=6ibwtrbNtRA) or even [this](https://www.youtube.com/watch?v=2Nl-mvAHpy0) video.
\*) It is not an insurmountable obstacle for a good printer's shop.
Upvotes: 2 [selected_answer]<issue_comment>username_6: I do not understand what is in the table, but if there are 45 important columns and 15 rows, one could consider making each row a full page on which you list the column items, perhaps adding an executive summary to each such page, and then making a new overview table that displays only the 15 rows and the 2-5 most important columns (the gist).
This applies if all the information is really important and cannot be transformed into graphs.
Upvotes: 1
|
2021/06/21
| 2,202
| 8,601
|
<issue_start>username_0: Conference season is coming up and my supervisor signed us both up to a series of conferences that are running over the next few weeks. Having already participated in a couple of them, I kind of know what I'm in for and wondered how you guys tackle online conferences.
Is it bad that I sometimes don't/can't pay full attention to certain talks?<issue_comment>username_1: My approach is to treat the presentations like an offline presentation as much as possible: force myself to sit back, hands off my computer, I'll even try to take notes by hand. Anything that keeps me away from the keyboard, cause if I'm there, I check my emails, and ... you know. Sometimes I view stuff on my tablet and sit in an unusual spot (couch, even outside) so I stay away from my desk.
Apart from that I treat attention as a (in my case very) limited quantity. I admire the people (often older PIs) who seem to be able to pay attention to talks for 12 h straight. I cannot. I'll scan the program in advance and make sure to focus on the talks which will be of the highest relevance to me. (Of course having too narrow of a focus can mean missing out on unexpected gems and inspiration. But if I try take everything in, it will be closer to nothing. There's a trade-off to be made here.)
Update: PLoS CB's [Ten simple rules for attending your first conference](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1009133) also has some complementary suggestions, see in particular rules 5 and 6.
Upvotes: 6 <issue_comment>username_2: [@username_1 already mentions](https://academia.stackexchange.com/a/170214/18238) many useful strategies that primarily revolve around being away from the computer or at least having your hands off its keyboard.
Other strategies in this regard that I've seen people use or use myself:
* If the talk does not actually require looking at the screen -- say, if it's mostly a conceptual talk without pictures or formulas -- then go for a walk and listen to the talk on your phone while you're out there.
* Do something else with your hands. My wife knits, I sometimes chop vegetables or fruit for the next meal. That's a job I can do while paying attention and at least part of the time watching, and I'm not tempted to do something else.
* Be on a treadmill or a stationary bike and do light exercise.
Upvotes: 4 <issue_comment>username_3: ### Practise your PowerPoint
I had to give a presentation for my company at a Euspen session a couple of weeks ago, at very short notice (less than 2 days). I did a practice run with Zoom and PowerPoint, and quickly found that the two don't like each other. Normally you'd share the application and that'd be it, but when you run a presentation with PowerPoint it'll try to put your presenter notes on Zoom instead. It's easily fixable (either share that screen instead, or use the slide view), but you need to be ready for it.
The presenter before me was not ready for it, and imploded rather badly. Since he was from our biggest competitor, I felt sorry for him, but not that sorry. :)
### More generally, marshal your attention for talks which warrant it
You've got the list of talks. See what you think is relevant and what isn't. You can stay focused for 10 minutes more easily than for an hour.
Upvotes: -1 <issue_comment>username_4: 1. Keep your video turned on
2. Force yourself to ask at least one question during/after the talk
3. Plan your schedule so that listening to the talk is one of the main things you want to accomplish that day
4. Use a website blocker to block sites like gmail, stackexchange, reddit, etc.
Upvotes: 3 <issue_comment>username_5: First of all, record it! That way, you are free to review parts you missed due to inattentive moments or interruptions, to rewind, to fast-forward, or to replay the entire conference at higher speed.
That's all you need!
Upvotes: -1 <issue_comment>username_6: One thing I learned over the last year is to empty your schedule. If you go to a conference during normal times, you have a week without meetings or most other commitments, to focus on the conference. Do not try to have an online conference on top of your regular work day! (Especially if the presentations are in a different time zone, and happen during your afternoon / evening. Yes, theoretically you could fit both in your day, but you will not find focus on either. Trust someone who tried that already...)
Cancel your meetings for the week, and focus on the presentations. Otherwise your focus, your work life and your mental well being will suffer from the conferences.
Otherwise I can only agree with @username_1 and @username_4: focus on the most interesting talks, and do your best to ask a question at the end. It both shows you paid attention, and makes you think about the material in order to come up with a question.
Upvotes: 2 <issue_comment>username_7: When I need to focus on a video stream, what I usually need is:
1. Paper, lots of;
2. Pencil.
And then I focus on taking notes. Every minute or so. This will:
1. Keep your hands busy;
2. Keep the keyboard away;
3. Keep your attention on the video.
I'm not transcribing the video, but I focus on annotating good phrases, keywords, individual names, sketches... as if I will need them later, as a way to easily find something that is buried somewhere in some video.
And that is exactly what happens afterwards, by the way...
Upvotes: 1 <issue_comment>username_8: >
> my supervisor signed us both up to this series of conferences that are running over the next few weeks.
>
>
>
Whether it's real-life or online conferences - unless you both have extremely high stamina, or you absolutely must attend (e.g. since you're presenting) - attending multiple consecutive conferences back-to-back with a full(ish) schedule is not a good idea, and few people would be able to maintain proper focus after, oh, a week of that I would think.
>
> Having already participated in a couple I kind of know what I'm in for and wondered how you guys tackle online conferences?
>
>
>
Not all that well frankly. Not only is it tiring, but after enough days staring at a monitor I can get a migraine.
>
> Is it bad that I sometimes don't/can't pay full attention to certain talks?
>
>
>
Your having physical and psychological limits is not "bad". Also, even if we had told you: "Piece of cake, no problem" - you can't take others as your measuring-stick. It's not as though you've made some wrong moral choice or something. Every one of us has their limits.
Anyway, I would take the time to go over the schedule carefully - even if the first session of the day has already started - and to actively avoid diving head-first into session after session after session. Instead, make sure you identify some can't-miss ones. Beyond that - don't be too "adventurous" with adding sessions that are close to your must-attends. Allow for some time to take a stroll, a jog, or even a nap (Yeah, I said it. A proper actual nap.) between sessions if you can no longer concentrate, are getting annoyed, are having a headache, etc. Also remember you don't fully recharge after a night's sleep, nor after a couple of days of rest between conferences.
Finally, find the time to absorb and reflect on sessions you've attended, during and after the conference. Possibly with colleagues or even with your supervisor if your relationship is pleasant enough.
Upvotes: 3 <issue_comment>username_9: 1. Put your smart phone away.
2. Put your earphone on.
3. Close the door so that you are not disturbed by the noise outside.
4. Get a notebook and a pen to be ready to write something important down.
5. List some questions you want to get answered from the presentation and keep listening to find out.
6. Remove anything that might attract your attention on your desk.
Hope this helps.
Upvotes: 2 <issue_comment>username_10: Don't eat too much; the hunger will keep you awake -- the opposite of "a large meal will make you sleepy". Also, don't overdo the caffeine. A cup or two of coffee can do you some good, but as you know, too much can really mess with your sleep pattern.
Upvotes: -1 <issue_comment>username_11: If possible, meet with other people (your advisor, other PhD students) and watch online together. Either at one of your houses or in a room in your university. Being distracted when other people are in the same room is less likely (peer pressure).
Obviously, this depends on the pandemic situation at your place, on other people being willing to join in, and on you all wanting to attend the same talks.
Upvotes: 2
|
2021/06/21
| 1,328
| 5,786
|
<issue_start>username_0: If I understand correctly, in the field of computer science there is no *"standard notation"* in scientific publications for explaining various aspects of a computer system. Even though in each discipline there exist some conventions for denoting various elements of a system (e.g., in networking and communication, usually `N` refers to a node and `T` is time, etc.), for them to be used in a scientific paper, one usually needs to define them in the paper anyway.
My question here is: how can I properly continue using an already available and well-defined set of notations, terminologies, and system model from an already published article in my new paper, such that I do not need to re-define or explain them?
To provide a specific example, [in this paper](https://arxiv.org/abs/2103.08983) I defined a set of notations as well as a system model. If I want to continue using the same notations and system model in my new paper, how can I properly cite it in my current paper and skip re-explaining them without creating potential confusion for reviewers?
A real example (with links) would be much appreciated so that I can discuss them with my co-authors.
---
**Update 1:** As stated by @DaveLRenfro and @Anyon in the comments, it looks like a common practice in the fields of mathematics and physics to refer to a previously published paper for terminologies and notations. This question, though, concerns a similar practice in the field of computer science.
**Update 2**: Providing evidence regarding reusing terminology, notation, and system models in any academic paper in the field of computer science would be highly appreciated.
**Update 3**: There are mixed opinions about re-using terminologies from another paper, but in this case, I believe it's preferable to do so as it's in line with the paper that I want to borrow terminologies and more importantly notations from.
**Update 4**: As discussed in the comments (now moved to chat), there are over 3 dozen hits in Google Scholar for the phrase "[for terms not defined in this paper](https://scholar.google.com/scholar?q=%22for%20terms%20not%20defined%20in%20this%20paper%22)", but almost all of them are related to either mathematics or physics.<issue_comment>username_1: I'm not sure why you think that, for things that require precise definitions, the standard practice should be any different between mathematics and CS (I've studied, published, and taught both).
It may not do your readers a great service, but it is enough to say that your notations, definitions of terms, etc. are taken from a specific cited source. This saves you from any charge of either plagiarism or copyright infringement.
It is a disservice to your readers only if it isn't likely that they have already seen the earlier paper and need to read it before they can begin yours. It is also, in that case, a disservice to yourself as some won't bother.
However, many people will be able to discern much of what you write from the context provided that it is well written.
But, if you cite the original source of your notations, then a brief paraphrase of the meaning of a term at first use would be both acceptable and helpful.
If you want/need to repeat full definitions from the earlier paper then citing, again, solves the plagiarism issue, but you need to consider copyright. Some things can't be copyrighted at all if an idea can be expressed in essentially only one way. The derivative in math is like that. The idea is too bound with the expression to allow copyright of the expression. (Ideas can't be copyrighted - only expression). But in other cases you might need to both cite and paraphrase if too much of the earlier expression needs to be copied otherwise.
And, when you do copy, make sure that you clearly denote what is copied with such things as quotes or indentation.
But, again, the practice of mathematicians and computer scientists isn't really different for such things.
Upvotes: 2 <issue_comment>username_2: In principle, there is no reason that definitions/notations cannot be cited, just like substantive arguments or observations or other parts of a paper. However, it raises the practical problem that the reader has to have recourse to another paper in order to understand the notation and meaning of terms in your present paper. Irrespective of the particular field, the Golden Rule in all such cases is to *be clear and make things easy for your reader*, so that should be your guiding principle in this decision.
The dilemma you are experiencing only comes about when the definitions/notation are sufficiently voluminous that you want to avoid a long repetition of them. So, first have a think about whether there is a way to present the required definitions/notation in a more parsimonious form. Ask yourself: have I reduced this to its simplest form? In the event that you can significantly reduce this material, you might find that it is not onerous to put it directly in the new paper.
If you decide to rely on definitions in a previous cited paper, I strongly recommend that you at least give some paraphrased heuristic summary of what the definitions mean. This is something that is useful to "jog the memory" of a reader who has read your other paper, and if your heuristic explanation is good enough, it might even serve as an adequate substitute for the formal definitions in some cases (allowing a reader to roughly understand your paper even without going back to the previous one).
There is no silver bullet here, so ultimately you will have to decide on the appropriate trade off between parsimony in the present paper, and reader convenience in having a set of notations/definitions in one place. Good luck.
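On a purely practical note: if both papers are written in LaTeX, keeping the shared notation in a single macro file makes it mechanical to reuse (and keep consistent) across papers, whatever you decide about citing the definitions. A sketch (the macro names and symbols are made-up illustrations, not from the linked paper):

```latex
% notation.sty -- shared between the two papers
\ProvidesPackage{notation}
\newcommand{\nodes}{\mathcal{N}}      % set of nodes
\newcommand{\horizon}{T}              % time horizon
\newcommand{\state}[2]{s_{#1}^{(#2)}} % state of node #1 at time #2

% In each paper's preamble:
%   \usepackage{notation}
% Then write $\state{i}{t}$, $\nodes$, $\horizon$ identically in both papers.
```

This does not replace the prose definitions the reader needs, but it guarantees the two papers cannot silently drift apart in their symbols.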
Upvotes: 2
|
2021/06/22
| 1,116
| 4,886
|
<issue_start>username_0: I currently work as a software developer and I am curious about machine learning research.
Does it work if I spend some extra time reading papers and collaborating on ML research, or is a Master's degree the only option? If the former, how can I network to find like-minded ML enthusiasts with whom I can collaborate?
I do not have any professional experience as a machine learning engineer/researcher. I have done an undergrad course in ML and a Coursera specialisation in deep learning.<issue_comment>username_1: There are two paths to doing research.
The bottom-up approach starts by learning the basics, through such things as reading and coursework, maybe online. A masters might lie on this path, but note that not all masters degrees have a research focus. Courses do give you the basics and reading can get you closer to research. But some papers are difficult without having the basics. Authors can make a lot of assumptions about the knowledge of their readers.
The top-down approach starts with a problem that you find interesting and is likely to be interesting to others and then do "just enough" reading to understand the problem and design and carry out a solution. Along with reading you can discuss your problem with others who might have the background you lack, provided you can connect with them and get them to talk to you. Looking for possible extensions to existing papers might be a way to find a problem.
The first is sort of breadth-first; the second is more like a depth-first search for help.
But everyone on the planet, along with all their siblings, seems to want to be studying and researching ML at the moment. It is among the hottest of the hot topics. It will be hard, using either approach, to be successful, since the ideas are "out there", accessible to all those others. So there is almost certainly a ton of parallel research going on across a very large number of approaches. Getting to the front of this pack will be challenging unless you have already gone some way down one of those two approaches.
You find collaborators at conferences pretty regularly, but you have to read the papers and meet the authors. You also need something to add if you want to get a collaboration going. The bottom up approach (via coursework) gives you access to professors who might give you a boost.
My advice is to give one or the other approach a try and see what you can do. But keep flexible. Don't commit too much time or intellectual effort to a too-narrow approach. Keep your options open. Either approach gives you skills, not only in ML but also for whatever might be the next big thing.
Upvotes: 1 <issue_comment>username_2: This might be rather narrow, but it could still achieve what you're after: work on a [chess engine](https://chess.stackexchange.com/questions/26489/creating-chess-engine-machine-learning-vs-traditional-engine). The current strongest chess engines, [Stockfish](https://stockfishchess.org/get-involved/) & [Leela Chess Zero](https://lczero.org/contribute/), are both open-source community-driven engines that use machine learning. If you join them, you'd have ready access to lots of experienced developers as well as the hardware to test your ideas. Plus you can work as much or as little as you want.
I don't know how often this kind of work leads to publications, but it's certainly research since you'd be pushing the envelope into the unknown, and if it comes to it you can cite it as research experience for whatever application you might write in the future.
Upvotes: 2 <issue_comment>username_3: If your goal is to eventually publish in one of the top conferences (nips, icml, iclr), realistically your only viable route is to get a phd. Actually, one of the most famous AI researchers in the world, <NAME>, was working full time in software development before *starting* his PhD at the age of 28. If you are serious about doing research and want to make it your career, getting a phd is more or less your only viable route.
Obviously this is a very big commitment and life change because you would have to take a major pay cut, probably relocate, and devote the next several years of your life to your phd research. If you're not ready to make such a big decision, I would suggest that you find a paper(s) you're interested in, that doesn't currently have a public github code, and implement that paper's method. This can get your some experience and visibility. You could also try to find some existing repository and try to improve/add to it in some way. Either of these options won't result in publications but can get you some very tangible experience that can look good on your cv. And unlike trying for an actual publication, this is something you could realistically make progress on while also working full time.
Upvotes: 3 [selected_answer]
| 2021/06/22 | 4,799 | 20,640 |
<issue_start>username_0: Thanks to Covid-19, I have been watching many video conferences and online presentations (both formal and not) **in the humanities**, specifically on printing, paleography, manuscripts, analysis of paper, libraries, etc. Many, but not all, of the presenters read their papers verbatim. Unless you are really experienced and talented, reading a paper sounds monotonous and tedious. It is a much better experience to listen to someone just discussing their topic, using the paper or slides as notes or a prompt if needed. My question is: are academics expected, advised or required to read their conference papers verbatim?<issue_comment>username_1: I'm sure that this isn't a general practice, though I don't know about the specific field. I agree that listening to someone read a technical paper is boring and a waste of time.
I once had a course in which the prof read verbatim from his notes while projecting something (art, actually) on a screen. At the end of the class a bell would ring and he would stop, even if in the middle of a sentence, and put a mark in his notes. The next day he would take it up from the mark. One of the worst (not *the* worst) courses I ever had.
Unless it is specifically expected in your field, do something more spontaneous - more interesting.
Those who are truly interested in your paper/results will read it. Those who only want an overview should get enough to satisfy them from a less formal presentation.
---
Edited to add, as a service to <NAME>:
The worst course was a "special studies" course in projective geometry where we two students met in an office. The prof would bring in his notes, place them in front of us and leave. We were expected to copy them verbatim for an hour. No questions, no interactions. Sort of like Medieval scribes, I guess.
Upvotes: 5 <issue_comment>username_2: >
> are academics expected, advised or required to read their conference papers verbatim?
>
>
>
Academics are advised and expected (but not required) to give engaging, effective talks that add some value beyond what is already available to their audience in print form. This expectation is not compatible with reading your paper verbatim, which is about the most boring thing I can imagine having to listen to.\*
Disclaimer: like everyone here I am only familiar with certain disciplines, so I cannot rule out the possibility that there are conferences that actually do require researchers to give bad talks (or that have more specific requirements that amount to the same thing).
\* The only exception I can think of is in a creative writing reading event, where an author reads from their work. In that case one expects the added value to come from the author’s use of tone of voice to convey additional nuances of meaning - they are effectively playing the role of a voice actor, and since they are presumably very familiar with their own characters and the meaning of their text, they can probably give quite an engaging presentation in this way.
Upvotes: 3 <issue_comment>username_3: This is a common method in the humanities, often in a lecture hall; presenters also frequently sit down, which adds to the "undynamics".
I have no idea where this comes from other than from a desire to always use perfectly polished and honed language - as is the case in a written article.
But I completely sympathize that this is a very annoying style and is beaten by a modestly interesting slide-based stand-up presentation in terms of excitement. Unfortunately, there is not much comfort I can offer, as this seems to be the norm in this topic group (although I have witnessed some excellent humanities talks, in different style, and yet perfectly polished).
Upvotes: 6 <issue_comment>username_4: My main experience is with the social sciences, which are arguably as close to the humanities as to the hard/natural sciences (depending on the subfield). It's totally uncommon for a presentation to be read verbatim from a manuscript, and for the reasons you mention, it's widely regarded as bad practice in my field as well.
Upvotes: 3 <issue_comment>username_5: As someone linked to the humanities as well as applied sciences, yes, it is very common for papers to be read, although the number of conferences that actually **force** you to read verbatim is fairly limited. In the large majority of conferences, speakers have as much freedom as they want to deviate from the written text, so long as they are able to successfully convey the key points included in it. I think most presenters in the Humanities would agree that just reading a text verbatim and doing nothing else to engage the audience is not a great look, but somewhat acceptable. It is possible to make good presentations, regardless of whether one reads a text, or speaks spontaneously. In the end, I would guess that Humanities scholars are more interested in debating the actual evidence, rather than focusing on the presentational skills of the speaker. So this answer is not intended to defend the practice of reading texts, just to explain why it happens.
Having cleared this up, there are a few reasons for reading a paper verbatim at (some) conferences.
1. **Tradition.** There is a long-established tradition of reading one's research before an audience that goes back all the way to the 17th century (and in some places, all the way back to the Late Middle Ages!), to the time when the first official Societies and Academies of Science were established in France and other parts of Europe. The normal thing to do was for a speaker to read the text before an audience, and soon after the text would be published in the Bulletin of the Academy and sent to all of its members who could not attend the presentation. These printed speeches are now considered to be extremely precious historical documents, because they are the only remaining testimony of the research done by those researchers. After their death, their notes and personal documents were often lost. Many printed presentations contain crucial research about long-lost tribes and languages which would be impossible to replicate today. Even today, many presentations are prepared in advance with the intent of getting them printed somewhere (more on this below).
2. **It helps to maintain overall time schedules (to some extent) in the current paradigm of large-scale international conferences**: humanities scholars are famous for rambling endlessly, losing track of the key point, and going way beyond their allotted time. Therefore it has become more and more common to see reputed conferences demanding that their presenters submit beforehand to the session chairs not just the paper draft, but the presentation slides as well (if they exist). These are then reviewed by the session chair(s), who will recommend edits and changes. Unsurprisingly, most of the proposed changes involve cutting down parts of the text and tightening things up, in order to make sure that the spoken text can fit within the allotted time. Of course, this alone cannot guarantee that everything will go smoothly, and I do regularly see some delays with read texts. At the end of the day, we (the audience) are all at the mercy of the speaker's presentational skills, regardless of whether they read a text verbatim or speak freely.
EDIT: This section here is intended to refer to an interesting paper (from the 90's) posted by *username_6* (included in this thread) about the frustrations of holding conferences, with time delays, bored, unenthusiastic people, and not all that interesting discussions. This paper is an opinion piece, and I'm sure that you can find just as many people who would agree as well as disagree. The key point I want to emphasize here is that the actual scale of conferences these days is much bigger now.
In the past, it was sufficient for a conference to simply use the lecture halls of a university, and even though time was wasted, it was not such a big problem. However, the largest humanities conferences today have reached such a scale that they often need to rent actual large-scale conference spaces, because not only are there more graduate students presenting now, but there are also many more people coming from abroad to attend and present as well. It should also be kept in mind that Humanities conferences don't usually have the same large budgets as some hard science fields. There are severe financial penalties for not closing the venue at the contracted time, so those conferences are much more strict about requesting texts in advance, reviewing/editing them in advance, and also preparing the session chair's response in advance.
All I can speak about is my personal experience: I have presented at three types of large-scale conferences (those that ask to read verbatim, those that give you complete freedom, and those that give you some freedom, but ask you to not deviate too much from the text). The scale of these conferences is approximately 100-140 speakers total, over 3-4 days. In the case of the conferences that ask speakers to stick closely to the written text, I do see some delays, but overall things tend to go more smoothly, without too many bumps. As for the conferences that had no restrictions (always held on university lecture halls), they were pretty much disasters from start to finish (although I did see a handful of excellent speakers).
1. **The "bread and butter" of the Humanities is the deep analysis of textual materials, leading therefore to a focus on describing minute details, rather than just summarizing one's results.** Difficult fields such as Philosophy and Religious Studies require a
very high level of precision in regards to the terminology employed,
the definition of terms, and the method for exposing one's
arguments. Otherwise, it will be very difficult for the audience to
adequately apprehend the topic and its many nuances, or provide any
meaningful discussion or debate around it. For example, on an
average Humanities presentation the speaker is often required to
quote numerous textual passages without any mistakes, and do what is
called a "close reading" of those passages, which requires the use
of very precise language. These two things alone tend to push people
into reading texts, rather than talking more spontaneously. It's
already challenging enough to talk about things such as Ontology,
Proto-Indo European or Esoteric Buddhism, now try doing it on a
strict 20-minute limit, without any text to guide you along the way.
By reading a text verbatim, the speaker can have enough time to hone
the text beforehand, and make it as clear and well-structured as
possible. This helps everyone in the audience to stay on the same
page. For instance, if you show a mathematical equation such as
e=mc2, everyone in the audience will probably grasp the idea
perfectly without any ambiguity. But if you mention a concept such
as "Dasein", the whole audience will immediately produce 10-20
different meanings, because this term (and many others like this)
has been interpreted in numerous ways by different authors. This
kind of situation generally forces the speaker to be much more
careful about what they say, and how they say it. Regarding disciplines that are more oriented towards the social sciences, and who do not engage all that much with textual materials (Archaeology, anthropology, sociology, etc.), there is less tendency to read written texts.
2. **To allow for adequate simultaneous translation and improve inclusivity of non-English native speakers.** It has become more and more common to conduct conferences (both
online and offline) that include speakers talking in different
languages, thus requiring the temporary hiring of translators.
Whenever translators are involved in a conference, having a prepared
text can go a long way to reduce costs and make sure that the
quality of the translation is at its best. This also makes
conferences more inclusive for speakers and attendees coming from
non-Anglophone regions.
3. **There is a desire for speakers to have their papers reviewed by experts and get them published in printed form as soon as
possible.** It is often the case that a presentation at a humanities
conference is the first step towards writing a full paper and
submitting it to a good journal. Since draft papers for popular
conferences must be submitted and reviewed in detail by the session
chairs, this provides a precious opportunity for getting valuable
feedback from experts. I have often seen actual papers published in
reputed journals, where the "Acknowledgements" section says
something like this: "The contents that comprise this paper are
significantly revised versions of two presentations made at
Conference A and Conference B; I would like to thank the session
chairs and the following attendees who provided valuable comments:
(Person A); (Person B), etc." In addition, it is very common for the
session chair to invite some of the speakers to publish their papers
as book chapters (or as a special edition of a journal) within a
book that they are currently editing. I myself have been invited by
a session chair to publish my papers for a few times, after my
presentation was over.
4. **Having detailed papers in advance allows for mutual discussion
between session speakers.** Before the session takes place, it is not
rare for the session chair to distribute all papers among the
presenters. In some conferences, the presenters are even expected to
meet for breakfast with the session chair to talk about each other's
papers, and then during the session itself, there might be a part
where the session chair and presenters give comments or questions
about each other's work in front of the audience. although not all
conferences do this, it does help to build up friendship between
presenters and sometimes leads to research collaborations.
Now, having said this, a skilled speaker will normally do various practice sessions in order to become familiar with the text, and allow for making direct eye contact with the audience while still reading from the text. Personally, I normally practice at least 5 times in order to test everything thoroughly: the flow of the spoken text, the flow of the slides and overall structure of the slides, etc. I also designate key moments where I stop looking at the text and point to key aspects of the slides with a mouse or a laser pointer. All of this preparation time pays off handsomely, as it helps to make the reading text feel much more natural, and to better engage with the audience. I often tend to get better response from attendees after the presentation is over, which leads to being more successful at networking.
By the way, here is a funny point: if you see someone (in a Humanities academic conference) who is apparently talking about a topic freely (NOTE: I regret the use of the word "freely", see comments below) in a skillful manner, it's more likely than not that they are reading from a pre-written text, although they are able to disguise it by adding several short remarks that give the impression of being more "natural". Likewise, all of those TED presenters are actually reading from teleprompters, but they had to rehearse the whole thing 20 times or more, so it's easier to "disguise" the fact that they are reading a carefully-prepared text.
EDIT: I would like to clarify that my answer should be interpreted within the context of Humanities academic conferences, especially those that deal in detail with the analysis and interpretation of textual sources, such as Paleography, Literary Criticism, Ancient History/Global History, etc.
Upvotes: 5 <issue_comment>username_6: I was thoroughly shocked to hear that such a thing as reading papers verbatim even exists, so I googled the topic. The first hit was this very thread here on Academia.SE, but the second one was a paper from 1998 that seems interesting enough to warrant highlighting in an answer:
* [First Aid for Listeners: Why Humanities Conferences Need to Change their Format](https://doi.org/10.2307/1348291)
>
> My question is are academics expected, advised or required to read their conference papers verbatim?
>
>
>
The paper mentions that some conferences require presenters to read their paper verbatim, so you might not have the choice.
You seem to be sceptical that this is a good presentation format. The very existence of this paper shows that you are not alone. As you can see, there are scholars in the humanities passionately arguing in favour of more dynamic presentation formats.
---
Since the article is paywalled, here's a short summary of what it covers (keep in mind that this was written in 1998, and I do not know if conference formats have changed since then):
* Summarizes the various common presentation formats, but focuses mostly on the following one: "the papers are read verbatim from pieces of paper; a respondent may then read a prewritten response." This is said to be required by many humanities conferences. Not all conferences have this requirement: some give the speaker a choice, and some even focus on discussing papers which were read in advance.
*Something interesting I learned from this is that in some cases not only the presentation is pre-written, but also the response.*
*Another important point is that in many cases, the papers are distributed in advance. The audience can read them before the talks.*
* While the abstract promises to explain "how and why this method has come to be accepted", the paper does not actually do this, other than noting that it is done "primarily in the interest of verifiability and validity".
* The rest of the paper argues against this rigid format and lists the benefits of "extemporaneous delivery". I believe that most people who frequent this QA site would find these arguments to be simply common sense. For example:
+ Written text is just harder to understand to listeners (as opposed to readers) than freely spoken text. Humanities professors speak freely in the classroom. They do not read from the textbook.
+ I am paraphrasing, but the author seems to be saying that such presentations are boring and cause the audience to lose attention and get fatigued early.
+ This format leads to less discussion. Discussion is the very purpose of a conference.
+ Speakers who read a pre-written text are more likely to run over time (or to be allowed to run over time), which again cuts into discussion time.
*It is interesting to me that @username_5 argues the opposite, i.e. that reading verbatim should help speakers keep within the allotted time.*
+ The general theme is that the main purpose of a conference is discussion and dynamic exchange. A dynamic format is much more conducive to this.
Upvotes: 4 <issue_comment>username_7: You are not required to stick to any of the guidelines for the talk. The only rule that can be enforced is the time limit. So, if you think that a different way of giving your talk is better than the way you are supposed to do it, then there is no reason why you should not do that.
I have deviated from the official script twice when I gave substantially different talks than the text submitted as the abstract in one case and as the article for the conference proceedings in another case.
Upvotes: -1 <issue_comment>username_8: ### Absolutely not!
Your observations here are correct --- a presentation that *looks* like extemporaneous discussion is far better than reading a paper verbatim in the kind of monotonous tone you hear in some talks. Most good speakers are able to give a good presentation using their paper (or some bullet points) as a prompt, but they sound like they are speaking about the topic "off the cuff" and in a conversational style. When speakers read directly from a paper and sound monotone and dull, that is because they are bad speakers, not because of any required practice in academia. (Though one could be forgiven for forming the impression that there must be rules requiring bad speaking techniques in academia.)
As to whether people are advised to read from papers in this way, I have never heard of anyone giving such bad advice, and I certainly have never given this advice to any of my own graduate students. Some students are nervous speakers when they first start out, so perhaps people advise them to read from the paper as an initial method of keeping track of their presentation without losing where they are up to (i.e., walk before you run), but then the longer-term goal should be to develop the ability to speak extemporaneously about the topic and engage with the audience.
Upvotes: -1
| 2021/06/22 | 545 | 2,454 |
<issue_start>username_0: Several posts on this website discussed the negotiation process and tips for new faculty in the US. However, I wonder whether there are any possibilities or tips for negotiating salary after being tenured and promoted to associate or full professor.<issue_comment>username_1: Generally speaking, promotion already comes with a salary increment. Traditionally, at least, though recent developments, including Covid, may be changing that.
But the problem is that you don't have a lot of leverage at that point unless you are willing to move to another institution. Your contributions have just already been considered and rewarded. What additional things do you have to offer at that point?
The possibility/threat of leaving might be powerful for some especially valued individuals, but those are the folks most likely to get whatever financial incentives are available under a given system.
Some places with annual evaluations give faculty an opportunity to argue that their recent contributions have been especially valuable, likely to continue, and worthy of a salary increase. Take advantage of all such opportunities.
You probably have a better chance to argue for a change in job duties at a promotion point than a salary increment beyond the standard. Whatever helps you do your job better can possibly be supported.
Upvotes: 2 <issue_comment>username_2: In the typical nonunionized American university, salary negotiation is not very closely related to promotion. Typically there is a small automatic salary increase when you are promoted. You can ask for a higher salary at any time; you do not have to ask at the time you are promoted. (Some might argue this means the question cannot be answered.)
Two principles to consider when seeking a higher salary are:
* The university will not want to pay you more than your colleagues who have a higher rank. This is because they do not want to give your colleagues who have a higher rank leverage to ask for their own raise.
* Your salary request needs to be backed by evidence. This evidence could be:
1. An outside job offer.
2. Data regarding typical pay. You can get it from <http://data.chronicle.com/>, your disciplinary society, or your colleagues.
3. Things you have done to benefit the university.
If your university is unionized, your collective bargaining agreement will say what you can ask for. You can also get advice from your union representative.
Upvotes: 3
| 2021/06/23 | 1,231 | 5,210 |
<issue_start>username_0: I am currently writing my master's thesis, and I came across paraphrasing tools quite recently. I noticed that my texts are sometimes unclear and repetitive (although I can explain an idea in an understandable way). For documentation purposes, would it be okay if I use a paraphrasing tool on my own texts? An example is shown below:
My text :
This can reduce the burden on the hand worker to walk long paths before reaching the machine stations where they can view the actual data or updated info
Paraphrased :
This can alleviate the need for hand workers to go large distances before reaching machine stations where they can view real-time data or information.
It sounds very good from the tool T.T<issue_comment>username_1: This should be acceptable since your field isn't something like creative writing. Of course, your advisor needs to agree. But this is just another tool. Spell checkers are ubiquitous and, while this is a bit more sophisticated it doesn't introduce any ethical issues.
Be certain, of course, that paraphrasing doesn't change the meaning of what you want to say. But it can be helpful in making the writing seem less pedantic.
OTOH, it is worth the effort to learn to improve your writing, say, by taking a writing course at some point. If you think the tool is providing an improvement, think about why that is for each use.
Upvotes: 2 <issue_comment>username_2: In looking at the example you gave, this could be disastrous for statements that have to be very precisely worded.
Examples:
1. Replacing "reduce" with "alleviate" is bad if you want to clearly suggest (although perhaps not explicitly state, because maybe you don't know) that what follows might not be eliminated.
2. Clearly, a task that is a "burden" for one to perform is not necessarily a task that one "needs" to perform.
3. If the first version is intended to suggest this for a specific (previously mentioned) hand worker, then the second version is obviously not the same.
4. For the second version, we don't know whether some of the hand workers can go to the same machine station or the machine stations must be different for different hand workers.
5. The first version could be interpreted to mean that the burden is having to walk *more than one long path* (because "paths" is plural), which could suggest that the *multiplicity* of paths is the relevant issue, and not the fact that the path/paths is/are *long*.
6. In the first version, "paths" might mean something specific (e.g. sidewalks), whereas the second version allows for non-path travel.
7. Clearly, each of "actual data" and "updated info" is NOT necessarily the same as, respectively, "real-time data" and "information".
Of course, for the actual context of what the original sentence might be intended to convey, none of these distinctions may be important. However, I'm treating the sentences you gave as an artificial example, not what you actually want to rewrite.
Upvotes: 3 <issue_comment>username_3: The writing in your question suggests that you are not a native English speaker. (That's an observation, not a criticism.) I think by "my texts" you mean your own words in your thesis, not words from textbooks or articles you have read and want to paraphrase.
As @DaveLRenfro notes, paraphrasing can change meaning a lot. You cannot use a paraphrasing tool to improve your writing style in an automated way.
I can imagine that such a tool might be useful. Try it on a sentence of yours. Then try reading the result as if you did not know what it meant and had to figure that out as a reader. You might think "this isn't what I want to say" - which would suggest that your original text was unclear, since it confused the paraphraser.
You might try something similar with a translator: write what you want to say in your native language and see what an automatic translation tool suggests.
As @username_1 says - work on your writing using anything that helps.
Upvotes: 1 <issue_comment>username_4: If you mean would it be okay *ethically*. the answer is yes, unless your institution has a specific policy forbidding it (which seems unlikely as I doubt very much this is something anyone has thought of forbidding, but it’s theoretically possible).
On the other hand, if you mean to ask whether it's a *good idea*, in the sense that it will actually produce better writing than what you put into it while preserving the meaning of what you want to say, the answer is almost certainly not.
A good way of getting some benefit out of the paraphrasing tool perhaps is to run it on your paragraphs and use the output just to get some ideas about alternative ways that might exist for saying what you want to say. But to produce a coherent, readable text you need to be 100% in charge of the writing and hand-pick every word and every sentence in your text. Letting the tool be in the driver’s seat and just blindly using its output without understanding the nuances of language involving differences between words and other language structures that you used and those that the tool substituted for them (like “reduce” versus “alleviate”) is, in my opinion, a recipe for disaster.
Upvotes: 4 [selected_answer]
| 2021/06/23 | 1,177 | 4,528 |
<issue_start>username_0: Suppose I read a research paper and write an email to the *authors*, requesting some resource or asking a question. I know the email address and the author's name from the research paper.
What is the proper salutation? Is it appropriate to use "Dear sir"? What other salutations could I use?<issue_comment>username_1: Generally, if you're unsure, you can never go wrong with "Dear Prof. Dr. Surname". If that person is not a Prof., no problem; they'll correct you if needed.
I would only use Dear Sir, if I was unsure to whom (person-wise) my e-mail is addressed, for example, if I'm contacting an organization.
Upvotes: 2 <issue_comment>username_2: Look up who the corresponding author is, if it's a co-authored paper. This is usually indicated in a footnote. If it's a single-authored paper, use the name and title of the single author. Let's say the name and title are Dr. <NAME>. Then the proper salutation is simply:
>
> Dear Dr. Doe,
>
>
>
Upvotes: 3 <issue_comment>username_3: *Dear sir* reads as an Indian English address to me. I don't know whether it would be the preferred address in an Indian academic context, but it sounds a little odd in an international context.
If you want to be formal and are writing to US or UK people, then the form of address is "highest title - lastname". This could take the form of "Dear <NAME> and Dr Doe". In the US, everyone from assistant professor upwards is addressed as "Prof", whereas lecturers, senior lecturers and readers at UK universities are addressed as "Dr".
If you are addressing people in Germany or Austria, the formal way is to list all titles + last name. So here the typical form is "Dear Prof Dr Schmitt". If you were to write in German, you'd add in a "Herr" or "Frau" as well, so "Sehr geehrte Frau Prof Dr Schmitt".
The many countries not mentioned here will also have their own customs. However, given that academia is a very international endeavour, and given the US cultural dominance in international stuff, no reasonable person will be offended by a "Dear Prof X"/"Dear Dr X" address.
Upvotes: 5 <issue_comment>username_4: Nowadays I almost never use salutations in an email and just start straight in with my message. I see email as different from formal written correspondence on paper, and I believe that many recipients see it that way also.
Just as in Stack Exchange, I treat salutations, greetings and signature blocks as pure fluff that get in the way of the communication. However, others may have a different view.
I am, however, quite careful to ensure the leading sentence is polite, and explains the purpose of my query. Something like would be the very first line of my message:
>
> I am interested in your paper "...." and am wondering if you would be able to answer a brief question ....
>
>
>
I always try to make the subject line clear as well, knowing that some people sort email by sender and subject before reading the contents.
So my answer is remove the salutation, and it avoids the whole question of getting it wrong!
Upvotes: -1 <issue_comment>username_5: Some recipients might be offended, but there is no rule or set of rules for salutations in e-mails. In theory, "Hi…" is as good as anything.
To be most polite, you would follow the same etiquette laid down for "real" letters on paper, which is best expressed in full editions of Chambers Dictionary and Debrett's Correct Form, backed up by Burke's Peerage.
Be careful using "Dear sir": it is insulting without a capital "S" or if the recipient isn't a man, and lazy if the excuse for not using a particular name is that you were writing to a company.
Upvotes: 1 <issue_comment>username_6: **Dear Prof. X**
Even if I suspect the person is a grad student I use the above. (And others did to me when I was.) Hard for anyone to be offended, and you don’t need to overthink it. Some may find it overly deferential but few will find it insufficiently so. “Dear Sir” is worse as it presumes a gender. “Dear Dr” is worse as in some (European) countries you risk demoting someone. “Hi Firstname” or “Hi” or “Good morning” risk being seen as insufficiently polite by people who get upset about this kind of thing.
Sometimes I write to people I know in their role as editors, etc. Then I will sometimes use “Dear Prof. Lastname, Dear Firstname”. Someone did the same to me today. I wasn’t offended by the additional formality.
As someone whose name is frequently mis-spelled I urge you to double check it before hitting send.
Upvotes: 0
|
2021/06/23
| 903
| 3,725
|
<issue_start>username_0: I just graduated last semester and I just received a bill from my college claiming $125 in "room damages". I contacted the office of res life about it and they claimed that I "turned in the wrong key" and so they needed to replace the lock. Their claim is blatantly false as I did turn in the key that I was given. I emailed back and forth with the office pointing out it doesn't even make sense where I could have gotten the "wrong key" from. They basically dug their heels in and said they are not going to drop the charge as they have already replaced the lock.
My question is: are there serious consequences to ignoring the charge, given that I have already graduated?
Should I just relent and pay the fee? I called the business office directly and they said they would only drop the charge with the approval of res life.<issue_comment>username_1: Some colleges won't send out official transcripts (to employers and such) unless all fees have been paid.
You might get "sued" in small claims court in some jurisdictions, which makes it a legal matter.
Neither of the above is necessarily going to happen.
Upvotes: 4 <issue_comment>username_2: Due to the restricted/controlled number of "blank" degrees that can be printed each year, the last degree I obtained could only be issued around June (the printing is controlled by the government, I guess; each degree carries a code so that anyone who wants to verify it can go to a site to check the number, and no notary of any kind is needed for the photocopies). So if you graduate in September, for example, you can have ceremonies, parties, etc., but you'll have to wait until the next June to receive the "real" diploma (in the meantime the school will give you a certificate).
In my college before that (in another country), the degree could be given on the spot, just a day or two after my defense, but I needed to turn in a form signed by all relevant offices (library, admin, accountant, etc.) showing that there were no unresolved debts or issues with them.
For the amount you mention in your question, I think they won't press charges or escalate it to the legal department, but think about situations in which you still need official documents from the school.
Upvotes: -1 <issue_comment>username_3: >
> Should I just relent and pay the fee?
>
>
>
At least from a US perspective, you may need a transcript or other official correspondence from your university at some point. Therefore, resolving the problem is probably in your best interest.
If you fight long enough and hard enough, the university will likely eventually relent and remove the charge. You might need to escalate the issue to multiple deans, pressure the university on social media, contact members of the press, or even file your own lawsuit against the university.
So then the real question becomes, is all that effort and time worth it? Imagine such a process taking you 20 hours. Is your time worth more than $6.25 an hour? It might even take *much* longer. Sometimes the principle of the issue has intrinsic value. Enter this into your calculation.
If it were me, I'd just pay to make the problem go away.
Upvotes: 5 <issue_comment>username_4: In the U.S. there are laws controlling debt collection. If an amount is in dispute, the collector can't start "collection activities". I don't know exactly what activities this covers and it probably varies by state, but withholding transcripts might fall under "unfair collection practices." I found this article:
<https://www.consumerfinancemonitor.com/2019/10/10/ca-enacts-law-prohibiting-postsecondary-schools-from-withholding-transcripts-as-debt-collection-tool/>
which says that California has prohibited the practice.
Upvotes: 2
|
2021/06/23
| 717
| 3,045
|
<issue_start>username_0: I started writing a book on my own in lockdown about some critical social issues. For this I read some books online to gain more knowledge about the issue I am writing about (all the books are online), and I read some news articles as well, which are as important as the books. The problem I face is this: how do I store those articles in a well-structured manner? For example, if I am reading some articles on a specific topic, how do I store them so that I can find them again later when required?
Edit: Thanks to all for your kind responses. I feel I should elaborate a bit more on my problem. I did download Mendeley after being referred to it from this site, but the problem is that it stores only the citation, not the complete page of the article (e.g. [here](https://www.thehindu.com/sci-tech/energy-and-environment/what-is-the-reason-behind-the-fall-in-water-reserves/article23785470.ece)). Now I want to store this complete article for future reference, and I want to store articles of this kind in the right folder.<issue_comment>username_1: What you are looking for is one (or more) of the following:
* [Reference management software](https://en.wikipedia.org/wiki/Comparison_of_reference_management_software) allows you to store, organize and retrieve bibliographic information about literature. Almost all of them interface with word processors to output citations and reference lists, and online sources, such as journal websites and the DOI system, to easily obtain bibliographic information. Most of them also allow you to include your own notes with any entry.
* [Qualitative data analysis (QDA) software](https://en.wikipedia.org/wiki/Computer-assisted_qualitative_data_analysis_software) allows you to highlight, store, and organize relevant sections and quotes from all kinds of digital texts (including ebooks and articles). These text snippets can then be analyzed to discover relations between them; very useful to organize literature around various topics, e.g. for a literature review/survey article.
* [Note-taking software](https://en.wikipedia.org/wiki/Comparison_of_note-taking_software) allows you to ... take notes. Some are very minimalist, and some are infinitely extensible. With [org-mode](https://en.wikipedia.org/wiki/Org-mode), for example, you could probably compile your own hybrid form of note-taking, QDA, and reference management from scratch.
Upvotes: 3 <issue_comment>username_2: This is actually a really important question in terms of using sources that are perishable. I would recommend first making sure that sources are archived using the wayback machine, that way the current versions are accessible to you and everyone in the future.
I solve this problem using Zotero! Zotero works a lot like Mendeley, but its browser plug-in is really nice and can save PDFs and the HTML website linked to the citation. I also use betterBibTeX and Zotfile as extensions to improve the workflow, which let me not only save the pages themselves but also my notes on them.
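As an illustrative sketch of the archiving step suggested above: the Wayback Machine archives a page when its "save" endpoint (`https://web.archive.org/save/` followed by the page URL) is requested. The snippet below only *constructs* those save URLs for a list of articles; the helper name and the example list are mine, and actually triggering the archive would require issuing an HTTP request (e.g. with `urllib.request.urlopen`) and a network connection.

```python
# Sketch: build Wayback Machine "save" URLs for a list of article links.
# Requesting one of these URLs asks the Wayback Machine to archive the
# page; here we only construct the URLs, no network access is performed.

WAYBACK_SAVE_PREFIX = "https://web.archive.org/save/"

def wayback_save_url(article_url: str) -> str:
    """Return the Wayback Machine save endpoint for the given article URL."""
    return WAYBACK_SAVE_PREFIX + article_url

# Illustrative list of perishable news articles to archive.
articles = [
    "https://www.thehindu.com/sci-tech/energy-and-environment/"
    "what-is-the-reason-behind-the-fall-in-water-reserves/article23785470.ece",
]

for url in articles:
    print(wayback_save_url(url))
```

Keeping a local copy (as Zotero does) plus an archived copy like this gives you two independent routes back to an article if the original page disappears.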
Upvotes: -1
|
2021/06/23
| 462
| 1,861
|
<issue_start>username_0: I attended a panel for early career researchers, and one of the panelists mentioned that, at his university, it is more challenging for the department to pay the flight costs to host an applicant from abroad for tenure-track job interviews. Is this common? Is this something I should watch out for before agreeing to travel abroad for a temporary position?<issue_comment>username_1: Whenever anyone invites you to travel - whether that's for an interview, talk, conference, whatever - you should **always** *either* ask them to confirm that they will pay your expenses, *or* decide that you are willing to bear all the costs (so that it is a pleasant surprise if they subsequently offer to pay).
In my experience, travel is not a primary factor in deciding who to interview. However, it is certainly something that can tip the balance on borderline cases. This is not purely financial: is it fair to ask someone to spend half a day travelling each way when I'm already 90% sure they're not the right person for the job?
In any case, in the Covid era universities have become much more comfortable interviewing remotely, and so this may be an issue that is less important going forward.
Upvotes: 2 <issue_comment>username_2: As a data-point: at my R1 U.S. university, in mathematics, we apparently cannot reimburse transcontinental travel expenses... so we cannot in-person interview people not already on the continent (in Canada, Mexico, or the U.S.).
This was a university policy for some years, prior to Covid and prior to possibilities of vaguely reasonable on-line interviews, lectures, etc.
(I'd imagine that the current possibilities of on-line lectures and interviews, plus the general economic fallout of Covid, would make transcontinental travel expense reimbursement even less approvable...)
Upvotes: 2
|
2021/06/24
| 1,543
| 6,094
|
<issue_start>username_0: I recently spoke to a graduate coordinator of a mathematics PhD program I was planning to apply to. I asked about admission, requirements to complete the degree, and such. The program coordinator ended up telling me that usually they only pass 50% of the people in the program in the qualifying requirement, and the rest either get kicked out entirely or can finish with a master's. At first I thought this only happens maybe once in a while or something, but I asked for a clarification and he indeed stated that they have made that their threshold since "not everyone deserves to earn a doctorate"?
This is a US program. Is this attitude here common? If it is, I'd much rather apply to European programs. I don't want to get kicked out because a program made a mistake in choosing me. I thought programs only admit students they believe are able to successfully finish the program. Apparently I was wrong. Is this the norm or even fairly common? I am confused.<issue_comment>username_1: I am quite surprised to hear that a serious math grad program in the U.S. still operates in this fashion.
Yes, decades ago, this was somewhat the style, sadly.
Our/my program has not operated this way for decades. Many other top-rated places that formerly did operate this way have changed, so that, yes, admission (with funding) is a vote of confidence.
So, no, so far as I know, such an approach is hugely anomalous in the U.S. In particular, there's no reason for anyone to subject themself to such a game. Go where people have confidence in you, rather than are skeptical. Srsly.
Upvotes: 7 [selected_answer]<issue_comment>username_2: Some universities need more teaching assistants, but cannot afford to pay for good ones. So they recruit unqualified teaching assistants as PhD students, and then kick them out when they fail their qualifying exams.
This is not an ethical practice and you should not enroll in a PhD program that does this.
At good quality universities, it is common for most students to pass their qualifying exams. At other universities, it varies.
Upvotes: 5 <issue_comment>username_3: I know that some physics PhD programs were notorious for taking on extra students (knowing that they'd likely fail the qualifying exam) and just using them for the cheap TA labor and giving them a master's degree after 2 years.
Upvotes: 4 <issue_comment>username_4: I can't speak for all disciplines, but I do not think that that program's harsh attitude is the norm in the US. I have heard from numerous sources (though perhaps they're all citing the same base source) that only about 50% of PhD students manage to finish their PhDs[1], but it was my understanding that that was mainly due to people dropping out, particularly after the PhD Candidate stage, since writing the dissertation is the hardest thing. Note that the warnings in the source I linked to are about things like picking a problematic dissertation topic, expecting too much handholding from other people, etc. I had not ever heard that it was mainly due to programs being too brutal regarding whose work they accept or reject. (At the same time, those things are connected: If you pick a problematic advisor and a problematic topic and expect other people to hold your hand more than they are willing to, you won't produce work they can accept).
There can also be issues of being able to afford being a student, such as being distracted by needing to work full-time while doing it, so of course you also want to consider how well funded you would be or would not be.
This says the PhD failure rate in the UK is only 19.5%[2], so perhaps you do have a better chance in the UK. However, each US program is different, so you would do well to ask about the failure rates of the ones you're interested in, and also ask why people are failing. (Are they submitting work and it's rejected or do they not even submit the work?) The program you're interested in is telling you "We're terrible," so I would believe them and avoid that program, but don't dismiss all US programs based on that.
1. Example source: [https://dissertationgenius.com/the-six-laws-of-phd-failure/#:~:text=To%20give%20you%20a%20dose,over%20the%20past%20three%20decades](https://dissertationgenius.com/the-six-laws-of-phd-failure/#:%7E:text=To%20give%20you%20a%20dose,over%20the%20past%20three%20decades).
2. <https://www.discoverphds.com/advice/doing/phd-failure-rate>
Upvotes: 2 <issue_comment>username_5: It's not math, but UC Berkeley Operations Research (IEOR) has a similar policy. About 50-67% pass the preliminary exams at the end of their first year of the MS-PHD program, and can proceed to the PhD program. I think you get the MS at this point, whether you pass or fail. A small number get a "conditional" pass and have to take the test the next Spring if they want to continue. Students cannot get a PhD there without getting the MS first (from the same program).
I think it's a matter of tradition and hard-ass pride: the faculty went through hell in their careers, and that seems like the right way to them. I'm pretty sure the official rationale is that the best way to evaluate whether a student is a promising candidate for doing PhD research is by putting them through the preliminary exam process. They want their PhD students to write high-quality dissertations to maintain the reputation of their department, and also because they will (ideally) be investing substantial time in thesis supervision.
I personally think it's inhumane, or at least excessively stressful, and that they should get rid of this policy, but the faculty are pretty set in their ways. I don't think the math or other engineering programs at UC Berkeley have this policy.
I'm not sure whether the nefarious financial incentives mentioned in other comments and answers hold here or not, but I don't think they're a primary consideration. (Let's just say I'm closely acquainted with some of the faculty.) And surely the explanation for this practice, in this case, is not that it's an academically substandard program.
Upvotes: 2
|
2021/06/24
| 2,174
| 9,301
|
<issue_start>username_0: In the first week of my PhD studies, another student of my advisor asked me strange and racist questions regarding my religion and country of origin. We are in Germany, and the other student is from Italy (if it matters).
I didn't say anything, because I didn't know how things work here and I didn't want to be seen as a troublemaker as soon as I arrived. In addition to this, I had doubts, because I had come to another country and did not know the subject I was going to study very well.
But I want to complain about this student to human resources after I graduate, or I want to tell my advisor. Which of these actions would be appropriate, if any? I have never encountered such a thing before, and I don't know how to act.<issue_comment>username_1: At a German university I would expect a broad consensus that racism is bad, mkay, but not necessarily much awareness of concepts like subconscious bias, microaggression, etc. In the German public discourse, the realization that racism (other than anti-semitism) is an important issue for daily life is still an ongoing process.
To give a concrete example, the ZEIT had not too long ago an article about how conversations such as "White German: Where are you from? -- "Black German: Bremen" -- White German: No, I mean, where are you really from?" -- "Black German: ..." are problematic, and a lot of people didn't really get it.
For your concrete situation, this means that if the comments were a blatantly open racist attack on you, you can probably expect that informing proper authorities will at the least give them some stern talking to. However, if the comments can plausibly be explained by a combination of weapons-grade cluelessness plus unexamined racist cultural baggage, I'd be much more pessimistic about telling HR producing any desirable consequences. Reporting those comments after you graduate (ie a long time after they were made) is even more likely to just be ignored.
As for telling your supervisor, that will really depend on them personally. Some supervisors would really want to hear of such issues and try to improve things; for others the "don't get it" mindset mentioned above will apply. You'll probably be able to judge this better once you've gotten your supervisor to know a bit better.
To keep your options of reporting this later open, I'd advise you to write down exactly what happened. That way, the accuracy of your recollection of the event will not be in doubt if you do report it.
Upvotes: 4 <issue_comment>username_2: I am going to try and suggest what I would do from a UK perspective, which I think would apply to most European countries with protections against discrimination based on a number of protected characteristics in place (such as Germany).
I can see that you are clearly upset, and will not question your claim it was a racist remark. I would only make a difference between two cases:
1. **It was a demeaning/insensitive question or request.**
Things like addressing somebody as "little lady" (happened to me) or "drug lord" (happened to a Mexican friend).
If this kind of thing repeats, or even if it still bothers you, the first advice would be to clearly tell the other person: "This remark was completely inappropriate and I would kindly ask you not to say such things and consider their implications."
I am not trying to imply you have a requirement to *educate* anybody about how they *should* behave, but you do need to make it clear to the other party that they had acted inappropriately towards you. If the other party asks, and *you feel inclined to offer more explanation* as to why it is innapropriate, you may, but *you should not feel obliged to either*.
Unfortunately, we all do have prejudice we might not even be aware of. In the best case, this will be a wake-up call that makes them *consider the implications in their words*, re-think their actions and change their behaviour. But at least you will make it clear to the other person that you find their behaviour unacceptable.
If this behaviour does not stop after you have made this clear, the next step would be to go talk to your colleague's supervisor or your own. In this case, this is the same person. The supervisor is the person with the authority to discuss the professional behaviour of his student and ask him to correct it.
The next, and final step, would indeed be to contact HR (I think?) -- your University should certainly have some services signposted on their webpages.
2. **You felt threatened by this racist remark**
Something much worse, like "your type shouldn't be allowed to be here". (I can think of much worse examples but... I don't really want to)
If you are afraid for your safety, you should not confront your harasser directly but rather get out of harm's way and bring the case to the attention of the correct University and public services.
In the UK, the correct way to handle this would be to *go talk to your advisor ASAP*. While they would not be able to actually intervene in most cases, *they are required to know which University and public services can help you, and offer support in making that contact if you need it* (e.g. walk you across campus to the right service, or be with you when you make the phone call).
In both of these cases, I would suggest you **document everything in a diary**, making note of:
* when it happened (date/time)
* where it happened (Uni/lab/campus, outside of uni?)
* what exactly was said, by whom to whom
* who else was present (and how they reacted / did not react)
* any subsequent communications you had regarding the issue (with your advisor, the other PhD student...)
*You should not suffer harassment through your PhD* and I would encourage you to take all of these steps as soon as possible. Try and confide in a friend that is at your location, or even a family member over the phone to get some confidence, if that helps you take the correct steps.
Upvotes: 5 <issue_comment>username_3: Not knowing what was said, it's impossible to give definitive advice. But you should also consider the possibility that some people are socially awkward (or have some 'neurodiverse' condition on the autism spectrum such that they don't understand social nuance), don't share your social rules regarding polite conversation (especially if they are themselves from a different culture), and are simply trying to start a conversation the only way they know how.
Starting a conversation with a stranger can be tricky. It's common to start with what you already know about them and ask them about that, because people usually like talking about themselves. If you know somebody collects postage stamps, or plays football, you ask them about that. If all you know is they originate from country X, and you don't realise race and nationality is an especially sensitive or dangerous subject (because in their culture, it isn't), then you might start with that. And if you don't know a lot about their culture or religion, bar the broad stereotypes in the media, then it's easy to unintentionally sound racist or prejudiced. They might be trying to say, 'As you can see I know very little about your culture, please educate me.'
You can tell the difference because if you start talking about your home and your culture, a racist will continue to argue and criticise, and insist on their view, and someone who just wants a conversation will listen attentively and encouragingly, and become more polite and friendly.
And if it turns out they *are* a racist, then you might also consider how your response will affect their views. If they get a friendly response, then they might consider that maybe you're not so bad after all, and maybe their prejudices are wrong. If they get a hostile response, and hounded by the authorities simply for asking a few questions, they'll consider their prejudices confirmed. They'll consider their free speech and freedom of belief to be under attack. They'll point out that *you* can be offensive to *them* by calling them a 'racist', for example, but *they* aren't allowed to offend *you*, even unintentionally. We all say things other people find offensive - 'tolerance' that only tolerates things we approve of or agree with isn't worth anything; it's meaningless. Tolerance necessarily implies tolerating views we don't like and don't agree with. We have to treat others as we expect to be treated ourselves.
That doesn't mean you have to put up with repeated and continual harassment. But if your attempts to be friendly or politely non-committal are rebuffed, and they escalate their hostile campaign, then by all means seek help to get it stopped.
If on the other hand they realised they said the wrong thing, upset you, *and stopped doing it*, then you could surely ask for nothing better and it was likely just an innocent mistake. There shouldn't be any need in such circumstances to take further action. Consider how you would feel if, as a visitor to a foreign country, you asked an innocent question about the local culture, and someone thought you were being highly offensive (I assume calling someone a 'racist' in Germany is considered offensive...) and you got reported to the authorities for holding slanderous beliefs about them? How would you want to be treated?
Upvotes: 2
|
2021/06/24
| 959
| 4,326
|
<issue_start>username_0: While I am carrying out research or learning something in my field, I often find some other resources that are relevant to my domain and could help my understanding. Sometimes, while learning, I find myself going down a rabbit hole of these resources for hours, and come out feeling overwhelmed that there is so much to learn, and I know so little.
It could be a research paper that looks interesting, a textbook that might prove useful later, software applicable to my field, a simple side project, a course, articles etc. I make a list of these things (and it is endless) and tell myself that I will come back to it sometime later. But the reality is that there is simply too much to learn and not enough time.
Even if I work like a machine (which is far removed from my actual capability), I will not be able to learn everything in my field. I know that learning All The Knowledge In The World is impossible, but what about knowledge in my own field of interest, which is rather niche?
**How do I come to terms with the fact that I will not be able to learn everything in my research area?**
Do veteran researchers also feel this way, or does years of experience give them confidence in their level of knowledge?<issue_comment>username_1: Allocate a time for exploring new stuff, a time for learning new stuff and a time to consolidate your current main fields of expertise.
And allocate a time to think about how to move forward from where you are, once you know enough. The best work happens on the boundary between the known and unknown.
Upvotes: 2 <issue_comment>username_2: *Ah, the burden of mortality; how it pushes down on thee.*
Unfortunately (or is it fortunately?), many academic fields are very big. Depending on how "niche" you want to get, you are usually going to find that the volume of material is far more than a single researcher can master in a lifetime. Worse still, as you get more experienced you will actually start forgetting stuff you had previously mastered, so the progression is often two-three steps forward, one step back.
Perhaps the best way to look at this is by comparison to your existing knowledge. If you are a new researcher then your present knowledge (presumably) encompasses only a small fraction of the material in your field. As you get more experienced, you will broaden your knowledge, mastering a few parts of the field and getting a reasonable level of base knowledge in other parts. If you compare yourself to an impossible ideal then that is always going to be difficult to come to terms with, but if you compare your present knowledge with your previous knowledge, hopefully you are becoming more knowledgeable and more rounded as time goes on, and that will give you some satisfaction and confidence.
I can't speak for veteran researchers, because I am not at that level yet, but in my experience, years of work in the field tends to do three things: (1) you gain more knowledge of the field, and gain confidence from this additional knowledge; (2) you gain more awareness of other areas of the field that you didn't previously know existed, or more complexities and details in things you have taken for granted, giving you an awareness that the field is more vast than you thought it was; and (3) the ratio of your knowledge to your perceived non-knowledge gets smaller as you get more experience, owing to the rapid rate at which you discover new problems/complexities in the field.
Upvotes: 3 [selected_answer]<issue_comment>username_3: One trick to increase your effective range of where you can do research is to cultivate collaborators. In particular, people who have similar basic knowledge but know technical areas you do not know. For this to work, you need to work on your communication skills, most particularly talking to people in other areas about your research and their research. Attending a few talks at conferences outside your comfort zone is a good way to get a handle on terminology and research trends. Sometimes you can teach another person some of the basics of your specialty in return for them teaching you some of the basics of theirs.
If, down the road, you find you need specialist help in the same area over and over again, you can then invest more time in learning that specialty for yourself.
Upvotes: 1
|
2021/06/24
| 573
| 2,475
|
<issue_start>username_0: I was just offered a postdoc position, but my wife is pregnant, and we are expecting the child a month after starting the postdoc.
This is my second child, so I know what is required to take care of a newborn. I will need at least a couple days per week to work from home to assist my wife.
Should I tell the PI before signing the offer? I know he technically isn’t allowed to count such things against me in the hiring process, but I want to maintain a good relationship with the PI.
I am in the US.<issue_comment>username_1: I would mention this:
"I will need at least a couple days per week to work from home to assist my wife."
because it sounds like a need you will have while employed. You want an understanding you will have this flexibility before agreeing (or learn now you will not have it and can make a more informed choice). You can include the reason if you want, but I think it's good professional practice to disclose any needs/expectations before accepting an offer.
Further, most employers understand people have obligations outside of their place of work. In fact, some even provide benefits to help with these obligations. If the offer letter does not state the benefits, you can ask for clarification (e.g., 'can I work from home?', 'is there any parental leave?', 'is there health insurance?', etc.)
Right after getting an offer is the time to negotiate, and I would use it to make sure you can assist your family without repercussions at work. Plus, if your PI doesn't understand this reasonable request, it helps to know now rather than shortly after starting a new contract.
(Congrats on the offer, by the way).
Upvotes: 3 <issue_comment>username_2: You already have the offer. Telling the PI your wife is pregnant will not change that offer.
Parental leave is usually a matter of policy, not individual negotiation. Read the policy (or possibly union contract) and follow it first.
If you want to negotiate something contractually binding, which might include an altered work schedule or altered start date, you should do that between receiving the offer and accepting it.
If you simply want your PI to be understanding of the difficulties you will encounter when you have a new child, you should let them know several months before the due date. I do not know the time period between when you received the offer and the start date, so it is not clear how this relates to your situation.
Upvotes: 3 [selected_answer]
|
2021/06/24
| 540
| 2,141
|
<issue_start>username_0: Here is the timeline of what happened.
Day 1: emailed and requested a LOR.
Day 3: got a response, prof. **agreed** to sign the LOR I draft for him.
Day 20: sent an email with my draft, asked him to make changes if needed, sign it, and email it back to me.
Day 27: sent a follow-up, politely reminded about the letter.
Today is day 33, and I still haven't heard anything back from him. Every email I sent was very polite; I wasn't pushy or aggressive. I don't know what to do or what to think. This is a LOR for immigration purposes. It doesn't have a set deadline, but it's very critical for me to get it before I file my papers.
Does anyone have any advice? Did he ditch me?<issue_comment>username_1: First, the academic "clock" can run very slowly. This is especially true at certain times of the year, and COVID has made things worse. Conference travel used to be pretty common at this time of year, as was catching up on writing and the like.
But professors generally (though not universally) meet their scheduled obligations, provided that they know of them, and will put off non-essential things for higher-priority tasks.
If you need something on a schedule, make sure the prof knows that schedule. Otherwise, patience is still a virtue. Don't be a pest, but a friendly reminder every few weeks is probably fine. And make sure that any deadlines are known. Be honest about them, of course.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Is it possible to get another professor to write one for you? If I were you, I would ask another professor if possible, show your draft, and explain that you need it. When you ask, ask them something like, *"Would you be able to write the letter within 3 weeks?"* This way you're not setting a specific deadline, but you also make it clear that you want it within a certain amount of time.
Edit: Also, there is usually a secretary who writes these things. Maybe ask them.
From experience, I know that some professors will not respond to these e-mails until the absolute last minute, out of bad habits. Some professors take responding to e-mail more seriously than others.
Upvotes: 0
|
2021/06/24
| 312
| 1,241
|
<issue_start>username_0: I’m in a master’s in applied math program at a university, and I applied for PhD entry but didn’t get it for this upcoming fall. However, I got an email from the director saying the department would like to consider me for Spring, that they’re all out of funding for Fall, and that they would like to see one more semester of grades to re-evaluate for Spring 2022 (I got a B in 1 out of 3 classes this first semester).
So I had them roll over my application for the Spring. I really like this program but I’m unsure of how likely I am to be admitted even if I do well considering I’m on a waiting list. What do you guys think?<issue_comment>username_1: The most anyone can say is that the better you do, the better your chances. But in general, I'd suggest that you look for other options as they don't seem, from what you say, to be pursuing you. Cast a wide net.
Upvotes: 2 <issue_comment>username_2: You're not admitted until you receive the admit decision letter. As such, it would be foolhardy to not prepare a plan B. What that plan B should be is up to you, but come up with some plan such that if you are not accepted by the program in Spring, you still have something else to do with minimal downtime.
Upvotes: 1
|
2021/06/25
| 1,230
| 5,137
|
<issue_start>username_0: It took me 5.5 years to get my BSc, 3 years to get my MSc, and 6 years to do my PhD. I had a child as a teenager (so I have spent my entire adult life as a mum) and we moved countries (to Germany) when I went to do my PhD. 16 years is a long time in University. Now I am looking to start my Post-Doc this fall (at the same institution) and I am trying to figure out my options. I love doing research, but I am incredibly insecure about the time it has taken me to get here. I decided a while ago that becoming a professor is not for me... is there hope for me in industry? Or am I too old, or have I been in school for too long? This is causing me an immense amount of anxiety.
Edit: I should mention that I took around 8 months out (spent the time teaching) in between the BSc and MSc for my child (she got sick and needed surgery) and then it took something around half a year to move to Germany and settle before my start date. Furthermore, I just turned 36. The first two years of my PhD were spent with the lab under construction, and I was building up the fumehood gas lines and experimental setups. It took way longer than originally planned. So really my actual PhD work took 4 years to complete with 3 first author papers at the end.<issue_comment>username_1: ### You are doing well --- enjoy the ride
I had a somewhat similar journey to you. I was also a teen parent and went through university raising two little girls. The undergraduate program I chose was a long one (6 yrs with honours), and I also had to wait a long time for my PhD dissertation to be reviewed. I got my PhD conferred a bit over 12 years after I entered university; a bit less than in your case, but not a huge difference in the scheme of things. In my time in academia I have known one other academic who was also a teen parent, and she also had a hard slog to get her doctorate. It certainly has its challenges raising children while studying, but it is a very rewarding experience, as you would certainly know.
So firstly, don't be insecure about your circumstances and path through your higher education. You have raised a child and also trained yourself to do academic research, which is nothing to be sniffed at. The good news is that now you get to the fun part where you can work as a researcher in a university, and while your peers are just thinking about starting a family, you are already well advanced into this. As you go through academia you will probably encounter some other academics who also had interesting circumstances through their education; there are a few academics floating around who were teen parents (we were just more advanced than the rest of them!) so you are not alone. You will also find that there are many other academics who had careers prior to academia, and completed their PhDs late in life.
You don't say how old you are, so it is difficult to offer advice on limitations here, but nothing in what you have written precludes a successful academic career. There is no need to get ahead of yourself and worry about whether or not you will become a full professor. In the early stages, just enjoy having a job in the field and use your time to gain mastery over your research area. As you progress and publish more papers you will learn more research skills and you will then have a better idea of exactly what you want to do. Enjoy your post-doc position.
Upvotes: 4 <issue_comment>username_2: If you enjoy what you are doing, don't mind the pay, and don't want to be a professor, then you are doing great for you!
I suggest that you write down why you enjoy your job. When one of the naysayers comes to try to put you down for choosing this, remember your own rationale.
That being said... the reason most people don't like to stay for extended periods as a Postdoc is the pay, which is typically low. For this reason, most people prefer to go to industry, or to try to find a professorship. However, neither of those comes with the independence of being a Postdoc. Professors are bound to teaching and funding duties, and industry professionals are bound to the customer's desires.
As long as you are doing work even semi-relevant to industry, I think you would always be able to find a position in industry **if you want**. But the work is much different from what you are probably used to as a Postdoc. Furthermore, it is my experience that Postdocs have a much more laid-back lifestyle compared to industry professionals, which is advantageous with children (although in Germany that point might be moot).
Even straight out of a PhD, you could of course just apply to some jobs in industry to see what they are offering... Remember an interview is also a chance for you to interview the company. You do not need to accept every offer presented to you. It may also help with your self-esteem if you get some enticing offers that you can turn down! Just beware that a lot of companies offer positions that they say involve research, but in truth are more tied to customer interactions. The best case is if you know somebody in the company who can give an honest assessment of the work.
Upvotes: 2
|
2021/06/25
| 1,833
| 7,945
|
<issue_start>username_0: I am a postdoc and I am mentoring a Research Experiences for Undergraduates (REU) student from another university virtually, and I will say it's a bit strange. The student is not responding well to my emails or being very proactive in the research. I also have not received any paperwork about the program, although I did briefly see a screen share of the expectations, and was told that they would work 40 hours per week on the project.
My mentorship philosophy is that the student should be self motivated to do their research because that is 99% of research and if they don't have that motivation to do the research it is their own fault. Also, I have never been micromanaged as a researcher so I feel I should also give that freedom to any student that I mentor.
How should I address the student? What recommendations do you have for mentoring REU students?<issue_comment>username_1: I would recommend an online meeting or phone call to have a dialog about expectations. If you want weekly progress reports, you can ask for that. If you have expectations on how quickly email is to be answered, you can bring that up. By doing this in a dialog you give the student a chance to ask for modifications. Perhaps they want a shared folder, or a Discord channel. Email alone is going to be a hard way to establish a working relationship.
Upvotes: 3 <issue_comment>username_2: >
> "What recommendations do you have for mentoring REU students?"
>
>
>
Put more care into the selection of your students. There's a common saying: "a bad student is worse than no student at all". **The skill of selecting students that will be the best fit for you is something that you acquire over time and through experience**. It's not unusual to make a few mistakes early in your career as a mentor/supervisor. For example, simply selecting someone because they have excellent grades, GitHub projects, or even publications and letters of reference, can *sometimes* be a mistake: a 30-minute interview can be well worth the time and save you dozens of hours down the road. During that interview, make sure the student is on the same page with you (for example, regarding your stated philosophy that the student should be self-motivated). For example, you said:
>
> "My mentorship philosophy is that the student should be self motivated to do their research because that is 99% of research and if they don't have that motivation to do the research it is their own fault."
>
>
>
But you're talking about an undergrad student, and expecting them to be self-motivated: I agree that you and I probably were, but most are not (in fact since starting my own research institute, I've found *postdocs* who I thought would be absolutely delighted that I offered them a faculty position where they'd be able to run their own lab independently, and believe it or not, many of them say they prefer to have a project given to them! -- By the way, I learned this ***by interviewing them***, not by reading their resume and making assumptions).
For you, I would recommend asking them questions during the interview that will probe their likelihood to be the type of student that you seem to be expecting. You may find it surprising, but a lot of students who "on paper" have the highest grades or seem like they'd be the most cut out for research, tend actually not to be.
Upvotes: 0 <issue_comment>username_3: 40 hours per week means the student gets payed. You should set up a weekly meeting via zoom or Skype. During the meeting the student should tell you about any progress and ask questions. If the student does not show up report him/her to the REU supervisor (the one who pays money).
Upvotes: 2 <issue_comment>username_4: I'm on the other side of the spectrum (a student who is remotely working on research with a professor for the summer). Perhaps my perspective can help.
Working from home, without ever having met the people one is working with, on something one is probably new to, can honestly be quite demotivating, and I sympathize with the student. This was something I was pretty worried about as well. Academia SE gave some really nice advice in this regard (see near the end).
Some initiatives that my professor took that really help me:
Expectations and Direction
--------------------------
As stated in the other answers as well, have a meeting (preferably on Zoom or Skype) to discuss your expectations of what the student needs to work on over the summer.
Don't just describe the broad area of the research and a general direction (although this is also necessary). Discuss the particular direction of the project and the work that it involves. This gives the student some clarity and keeps them from feeling like they are wandering around aimlessly. The more specific, the better (although I understand it's often difficult to give specifics while doing research, as things can go in unexpected directions. Just try your best).
Just as important is for you to ask the student about *their* expectations with this project. What skills are they looking to learn? What is their end goal for the project? This will ensure that you are both on the same page and can work together proactively towards that goal (provided it's feasible).
Regular meetings
----------------
This is really important. Have regular meetings on Skype or Zoom, preferably with the camera on, to discuss their progress. This adds to the student's accountability, and it is much easier to ask questions in a (virtual) face-to-face meeting than over email or Slack.
Socializing with the group
--------------------------
This may not be possible under all circumstances, but if it is possible, it can be so helpful! You might be part of a research group under a Professor, right? Having a weekly group meeting to give updates or a journal club to discuss papers with the entire group can be really beneficial, not just from a research perspective (if two people are working on similar projects, it helps to let ideas flow through a discussion) but also allows the student to socialize with the rest of the group and ask questions more freely. If there are other REU students, it will be nice to meet them. This *really* helps with morale.
I live on another continent to the rest of them, so for us, there is a nice inter-cultural aspect to it too.
Talk about the big picture
--------------------------
Don't just go about it mechanically, where you give the student a task, they complete it, you review it and so on and so forth. Let the student know where their work stands in the grand scheme of things. Tell them about recent developments in the domain. Encourage them to read papers and think about their relevance to their own work. Again, this helps with the motivation for the student and helps make their work more meaningful.
Seminars and Colloquia
----------------------
I am not entirely sure, but I assume that if the student had been on campus, they would have been allowed to attend seminars that your department may hold? These are probably happening virtually now, so encourage your student to partake in them, even if they are not directly related to their research. All this is again part of making the REU experience more valuable and 'real'.
Things the Student Can (Should?) Do
-----------------------------------
I was really dreading the possibility that the virtual nature of the REU and a few difficult months prior (mental-health-wise) might completely demotivate me. I asked [this question](https://academia.stackexchange.com/questions/166504/how-to-make-the-most-of-an-online-research-internship) on Academia SE and they really came through (as usual). I am following the advice, and it's enhancing my research experience in many ways. I encourage you to share their advice with your own student. It essentially boils down to:
* Keep a log of your work.
* Type up notes of everything you learn and do.
* Have regular meetings.
Upvotes: 3
|
2021/06/25
| 2,273
| 9,563
|
<issue_start>username_0: I'll soon have finished my PhD in mathematics at a university that is not top-ranked, and as I see it, my thesis is not going to be especially impressive.
My goal after the defense is to either leave academia, or make a rather drastic change of fields (let's assume it's the latter).
The problem is that I have no idea how to make this change without restarting at the PhD level.
My advisor has essentially no acquaintances in the fields I'm interested in, and I tried sending a few emails to people working on those fields, but the reception was mostly cold.
A common piece of advice (given around me and seen here) is to continue with a postdoc and try to change fields by transitioning step by step from my current fields to the ones I'm aiming for.
I see two problems with this advice:
* If your track record is not exceptional to start with, simply getting a postdoc might be hard.
* Such a neighbor-to-neighbor approach to transitioning might prove to take a lot of time, spent working on problems I'm not really interested in, for the non-guaranteed goal of eventually working in my desired field.
In **summary**, I'd like to ask for any advice on how to make such a change.
**Edit.** More details:
* My desired field is within maths; I'm interested in a relatively wide area rather than a specific problem.
* My current work is not in some super specialized field: I have worked on different questions that don't require much background so far.
* I have read [this answer](https://academia.stackexchange.com/questions/81373/how-realistic-it-is-to-change-to-a-different-field-of-mathematics-after-phd/81385#81385) already, which is quite pessimistic but feels spot-on to me.
Nevertheless, I'd like to get some actionable advice, if possible.
* One thing maybe deserving of clarification: I'm not especially interested in staying in academia long-term, so the "publish or perish" point of view only applies to me as far as getting to the very next step.
My goal with this change of field is essentially about having the opportunity to learn maths that I'm very curious about and would like to understand, rather than career-building, I think.
* I have thought about starting with a new PhD (even applied to a program) but:
+ People don't seem to be very receptive to such an idea (“Why would you do a *second* PhD?”)
+ I'm not sure I have the energy to start such a process over again (it was quite taxing psychologically the first time, so a second might prove hard).
|
2021/06/25
| 2,140
| 8,564
|
<issue_start>username_0: So I am in a dilemma in that a recent publication has literally used up every (legible) character on [this page](https://artofproblemsolving.com/wiki/index.php/LaTeX:Symbols#Greek_Letters). We have checked thoroughly and every single character is necessary; this many characters are unfortunately needed to avoid confusion (this is what happens when you try to combine several different theoretical areas). We have also used up a bunch of symbols such as [stars or dots](https://artofproblemsolving.com/wiki/index.php/LaTeX:Symbols#Finding_Other_Symbols). This question is not about how I should reduce the number of characters.
So right now I am thinking of using characters from outside the European family, such as Japanese characters (Hiragana/Katakana) or Korean or Chinese, provided of course that these characters are simple enough. Some candidates include ひ, と, ㅈ, ㄹ, し, 十. Some of these characters are quite suitable and have simple pronunciations, although we are not thinking of pronouncing them in presentations.
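For what it's worth, typesetting such characters inside math mode is doable. A minimal sketch of how we could do it (assuming XeLaTeX with the xeCJK package and an installed CJK font; the font name below is only an example):

```latex
% Minimal sketch: compile with XeLaTeX.
% Assumes the xeCJK package and a CJK font are installed;
% "Noto Sans CJK JP" is only an example font name.
\documentclass{article}
\usepackage{amsmath}
\usepackage{xeCJK}
\setCJKmainfont{Noto Sans CJK JP}

% Wrap each character so it can be used as an upright math symbol
\newcommand{\hisym}{\text{ひ}}
\newcommand{\tosym}{\text{と}}

\begin{document}
A toy formula: $\hisym_{k,l} = \tosym^{\,i,j} + a^{i,j}_{k,l}$.
\end{document}
```

Note this requires XeLaTeX (or a similar Unicode engine); plain pdfLaTeX would need a different setup, e.g. the CJKutf8 package.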
But I have two concerns:
1. Most conferences and journals have a "We only accept submissions in English" [rule](http://sacworkshop.org/SAC09/call_for_papers) ("The submission must be written in English"). Does this violate that policy?
2. Does using these characters violate some sort of implicit cultural norm in scientific writing and European/North American conferences, such that we should avoid it?
**Update:**
Thanks for all the feedback. But most seem to focus on what other fonts I should try to use instead. Just as a clarification, in my area it is not at all uncommon for papers to use many, many symbols. Here is a mild recent [example](https://arxiv.org/pdf/2106.10513.pdf) (not affiliated with these authors) and [this one](https://arxiv.org/pdf/2106.07079.pdf) I saw that made me go "wow, the notation is so nice!" (again, not affiliated). These seem to be conference submissions (around 10 pages). Full submissions can go up to 20-40 pages. So as you can imagine, a symbol problem quickly arises.
I can't help it if everything comes out like this. If you notice, it is easy to find usages of things such as $a^{i,j}\_{k,l}$, where $k$ and $l$ are two agents from graphs $i$ and $j$, and $a$ is just one possible variable out of many. So we are already making heavy use of super/subscripts. We use hats to denote estimated values, so we are already there as well. We are also making use of mathcal, mathscr, mathbf, mathfrak, texttt, etc. to denote sets, graphs, matrices, special matrices and special conditions, respectively. All extremely conventional usages.<issue_comment>username_1: You’re doing it wrong, and are already violating a cultural norm that’s much more important than any norm having to do with a specific choice of character set.
That cultural norm is: write papers that can be understood by other people.
If you are using all the characters in the Latin and Greek alphabets, *and* dots and stars and a bunch of other symbols so that you literally ran out of symbols to use and still need more, I am willing to bet that your paper violates this norm in the worst possible way. If you add even more characters from other character sets most people in the West are unfamiliar with, you will only be digging your paper deeper and deeper into a black hole of incomprehensibility.
Aside from this, the answer to your two more specific questions are “probably” and “yes”, but I would classify those concerns as secondary compared to the one I mentioned above.
Bottom line: if <NAME> was able to prove Fermat’s last theorem, Perelman proved the Poincaré conjecture, and countless other mathematicians and computer scientists successfully publish groundbreaking new results all the time with “only” the Latin and Greek alphabets and standard mathematical symbols at their disposal, I’m confident you too could expound your theory with those resources. So I suggest rethinking the approach behind your question and asking yourself why you need so many symbols when everyone else doesn’t.
**Edit:** another couple of observations about your suggestion:
1. The Unicode standard, widely accepted as the ultimate in standardization of text representations, defines what is a mathematical symbol, and has [several dedicated blocks](https://en.wikipedia.org/wiki/Mathematical_operators_and_symbols_in_Unicode) for those symbols (with certain standard symbols falling in other blocks for historical reasons, but still being classified as mathematical). Your idea would pretty clearly go against the spirit (if not the letter) of that standard.
2. Your idea would also go against the increasingly common idea of taking accessibility, and the needs of people using [screen readers](https://en.wikipedia.org/wiki/Screen_reader) and other accessibility software, into account in writing and publishing. Admittedly this is [also a problem with existing mathematical writing](https://mathoverflow.net/q/350662/78525), but your idea would certainly make things even worse than they already are for (for example) blind readers.
Upvotes: 7 <issue_comment>username_2: In mathematical writing, it is common to use variants like these:
>
> [](https://i.stack.imgur.com/Y9kNF.jpg)
>
>
>
and possibly others
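In LaTeX terms, such variants are cheap to produce: a single base letter yields many visually distinct symbols via standard decorations and alphabets. A minimal sketch (assuming the usual amsmath/amssymb packages):

```latex
% One base letter, many distinct symbols (requires amsmath and amssymb)
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% accents and primes
$x,\ \hat{x},\ \tilde{x},\ \bar{x},\ \dot{x},\ \mathring{x},\ x',\ x^{*}$

% alternative alphabets (\mathbb and \mathfrak come from amssymb)
$\mathbf{x},\ \boldsymbol{x},\ \mathcal{X},\ \mathfrak{X},\ \mathbb{X},\ \mathsf{x}$
\end{document}
```

Each of these renders as a distinct, readily distinguishable symbol, which multiplies the effective symbol supply without leaving the Latin alphabet.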
Upvotes: 5 <issue_comment>username_3: Even with a symbol list, keeping track of so many different letters will be difficult for readers. It can be made easier by introducing order and hierarchy to the symbols.
The style will vary by field, but for example you could have:
* Uppercase letters A, B, ... for main symbols that link together the whole work and appear in multiple sections.
* Subscripted uppercase letters Ax, Bc, ... for symbols that are related (but not equal) to one of the main symbols.
* Lowercase letters a, b, ... for local parameters, which can then be reused for different purposes in different sections.
That still leaves a lot of symbols available for other purposes. For readability, you should make use of the same symbols and conventions that other papers do, within reason.
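A preamble sketch of such a hierarchy (the macro names and their meanings here are made up purely for illustration):

```latex
% A made-up notational hierarchy, fixed once in the preamble
\documentclass{article}
\usepackage{amsmath}
\newcommand{\graph}{G}              % main symbol, used throughout the paper
\newcommand{\agentgraph}[1]{G_{#1}} % related, subscripted variant
\begin{document}
Let $\graph$ be the interaction graph and $\agentgraph{k}$ the subgraph
owned by agent $k$; lowercase letters such as $a, b$ are local parameters,
reused with different meanings in different sections.
\end{document}
```

Defining the convention as macros also means a symbol can be renamed globally later without touching every equation.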
Upvotes: 4 <issue_comment>username_4: You should definitely use those characters. Academia needs to be less Eurocentric.
Upvotes: -1 <issue_comment>username_5: Apart from the other answers noting that a paper using so many symbols will be incomprehensible (I entirely support these answers): many publishers use commercial custom fonts, and their fonts may simply not contain characters for scripts other than Latin and Greek.
Even European scripts such as Cyrillic cause problems. See what happened here, for example? All of the text is in Times, yet the Russian abstract is in Computer Modern: <https://doi.org/10.1016/j.hm.2020.04.003>
Upvotes: 3 <issue_comment>username_6: Use of non-Greek letters in equations is not as uncommon as it seems. While Greek letters are obviously the most popular, there are several commonly accepted symbols taken from other alphabets, e.g.
* the Russian letters ш (sha), used in number theory, and Л (el), used in some hyperbolic geometry,
* the Hebrew letters ℵ (aleph), ℶ (beth) and ℷ (gimel), which denote aleph numbers, beth numbers and the gimel function respectively,
* Old English ð (eth), used in the context of derivatives,
* Maltese ħ, which denotes the reduced Planck constant,
* Japanese よ (yo), used in category theory.
See [this topic](https://math.stackexchange.com/questions/165368/have-there-been-efforts-to-introduce-non-greek-or-latin-alphabets-into-mathemati) for more details. I myself have seen ℵ and ð used in papers, and I am not working in any advanced mathematics.
While there certainly are some drawbacks related to the readability of a paper written with an excessive number of new symbols, there should not be any formal problems with the publisher as long as you can write all the symbols in proper LaTeX.
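For instance, the Hebrew letters above come with standard plain-TeX/amssymb commands, while Cyrillic or kana in math mode generally needs a Unicode engine; a sketch (package availability may vary by TeX distribution):

```latex
% Hebrew letters in math mode: \aleph is plain TeX; \beth, \gimel, \daleth
% and \eth come with amssymb; \hbar is standard as well.
\documentclass{article}
\usepackage{amssymb}
\begin{document}
$\aleph_0,\ \beth_1,\ \gimel,\ \eth,\ \hbar$
% Cyrillic (ш) or kana (よ) in equations usually needs XeLaTeX/LuaLaTeX
% with \usepackage{unicode-math}, after which $ш$ or $よ$ can work directly
% (given a math font that actually contains those glyphs).
\end{document}
```

That last caveat is the practical catch: a publisher's math fonts may simply lack the glyphs, which connects to the font problems mentioned in another answer.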
As a side note, this post originally contained a mistake (pointed out by @DanRomik): the Hebrew letters were named incorrectly, which quite well illustrates the danger of using new characters that are unfamiliar both to the author and the readers.
Upvotes: 5 [selected_answer]<issue_comment>username_7: If you really run out of symbols, I would try a different path.
In software development, people name everything using just ASCII characters: they use words instead of single letters.
I know that in mathematics, you usually don't do that. But on the other hand, calling the cost variable `cost` instead of *c* will not make the paper unreadable.
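In a LaTeX paper, such multi-letter names can be typeset so they do not read as a product of single-letter variables, e.g. with \operatorname (the names below are made up for illustration):

```latex
% Word-like identifiers, set upright and kerned as one unit
\documentclass{article}
\usepackage{amsmath}
\newcommand{\cost}{\operatorname{cost}}
\newcommand{\acc}{\operatorname{acc}}
\begin{document}
$\cost(G_k) \le \acc(G_k) \cdot n$ % reads as cost(...), not c*o*s*t
\end{document}
```

Defining each name once as a macro keeps the spelling consistent and makes later renaming trivial.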
Upvotes: 5
|
2021/06/26
| 1,164
| 4,920
|
<issue_start>username_0: Almost four years ago I asked [Are there "gig economy" resources in academic research? Are they fundable?](https://academia.stackexchange.com/q/93924/69206). Now that much of the world and its academe has endured pandemic and lock-down, I wonder if things have changed in terms of research.
**Question:** Have there been any hiring practice shifts in academia which allow for non-colocated research positions, either full-time or part-time? In other words, the worker remains in one location (e.g. city, country) but is hired to participate in a research group associated with a different location. These might include some kind of "adjunct research position" (if such a construct is possible), postdoctoral positions, or even, yes, gig-economy-like contracts to participate in research.
It may be too soon for any wide-scale surveys of such changes to have been done or documented, so if there is evidence of a significant change in one or more large research institutions (e.g. a university), or a policy update in a funding institution, that would be enough for an answer for the purposes of this question.<issue_comment>username_1: In my university the emphasis is now on the transition back into the university. This is more complicated than expected, so there is no capacity left for developing new arrangements. So now there are lots of ad hoc arrangements that are expected to be temporary, but there has been no change in the main arrangements. I would expect that it will take a long time before that changes; the backlog is just too big.
Upvotes: 2 <issue_comment>username_2: I'll focus on the "gig" aspect primarily.
I doubt that this will occur and sincerely hope that it doesn't. The problem is that the gig economy is explicitly exploitive of its workers. It is designed that way, actually.
Those who try to make a living as an adjunct (most commonly in teaching courses), need to juggle around five different positions in order to make a minimal living. The pay is incredibly low in the US and there are no guaranteed benefits for part time workers. No health coverage. No pension. No say in university policies unless there is a union.
Some people teach as adjuncts because they already have a highly paid position elsewhere and just want some connection to students and faculty. For them, the money (or lack of it) means nothing. Some of them might get involved in research with other faculty, but don't depend on that for income. It is a hobby in some sense, though I know people who do it well. The "win" for them is in the connections, nothing more.
To try to extend this to a research career, juggling several gigs, seems impossible. The problem, at base, is that it is difficult to do real research on a part time basis. If you are working on a problem (I'm thinking math, actually), you don't normally turn off your brain to move on to the next gig after the two hours you've allocated to *this* gig. So, you wind up self-exploiting if you really do the research right and the employer is probably counting on that.
There are some positions that might be available in grant funded existing labs, especially for specialists. These are more likely to be short term, rather than part time, I'd think. Hired for a particular task within a larger one. But that has always been a possibility. And there is no possibility it can be done remotely in many fields.
---
To expand to the other aspects of the question, note that in some fields you don't need to be co-located to do research together. Many of my papers are with widely separated (several continents) colleagues. What is needed is the ability to communicate, and that is possible now. But my university wouldn't have been willing to hire any of my co-authors unless they moved to this location. One of the reasons has to do with law. But the bigger issue is that a university has a diverse mission. Even R1 universities have a teaching mission, including undergraduate teaching in almost all cases. That is much harder to arrange remotely while still fulfilling all employment regulations. And again, it is a special problem for part-time work (in the US), as such work seldom comes with the normal benefits that let you build a career in a non-exploitive way. I'd guess some of this will/does occur, but don't expect it to become standard.
---
Finally note that one of the key characteristics of a university is that it is a community. It is why universities want students (and faculty) back on campus as early as can be arranged. Partaking of that community is extremely difficult at a distance and impossible for a gig faculty worker. Thus, for students, the "college experience" is lost at a distance and the collaboration "serendipity" of the coffee lounge is lost for faculty. These are very important, even if somewhat intangible.
There is little joy or satisfaction in a completely transactional relationship.
Upvotes: 2
|
2021/06/26
| 1,729
| 7,488
|
<issue_start>username_0: My university has a rule that all students need to publish at least a couple of papers in SCI-rated journals.
A senior told me that it is better not to publish all of one's research contribution in a single paper. The length of the paper is not the issue here; the practice is meant to increase the paper count and hence make one eligible to receive the degree.
Assume that the research contribution is on a task T, and that I manage to develop two methods that perform better than the existing ones: one with accuracy **A** and another with accuracy **A'**, where **A' > A**.
The senior suggests sending the method that achieves **A** to an SCI journal and, after enough time has passed, posting the method that achieves **A'** on a preprint site so that it can be sent to another (or the same) SCI journal for acceptance.
I believe that it does not fall under the category of unfair means; no one can object if I decide to do so. But I have doubts about whether the act is ethical.<issue_comment>username_1: What you’re describing is basically [salami publication](https://en.wikipedia.org/wiki/Least_publishable_unit). When done with the goal of artificially inflating your publication count and at the cost of reducing the effectiveness of the communication of your ideas and results, I think there is a strong case to be made that it’s (at least mildly) unethical. However, as discussed on the linked Wikipedia page, people who wish to rationalize such behavior have a few reasonably valid arguments they can use to deflect criticism of unethical behavior. So, while it’s certainly not the best practice, it’s also not the worst. Perhaps more than being unethical, it is a shoddy practice used by mediocre people with mediocre ambitions, and will not help a person get a good reputation.
With that being said, if your institution is setting up graduation requirements that strongly incentivize its PhD students to engage in unethical behavior, it really has only itself to blame when they end up engaging in such behavior. In that case, there is also a strong case to be made that it is the institution that is behaving unethically, and that carries a large share of the blame for any unethical practices of its students and faculty.
Upvotes: 6 <issue_comment>username_2: This is known as "salami slicing". It's not exactly unethical, but it's definitely discouraged.
It's most helpful for other researchers if you collect the relevant and similar results in one place where they can be conveniently accessed. Salami slicing is the opposite of this because you are taking similar, connected results and then dispersing them out and spreading them all out into separate publications in different journals (all of which need to be paid for in theory). This is clearly unhelpful and not good practice for a researcher working in a community of other researchers. Reputation is important in academia, and it will not help your reputation if it is blatant that you are engaging in this practice in order to inflate the quantity of your publications.
In general, this is an important thing to emphasise: quantity is nice, but quality of the publications is the primary thing and this is what your reputation will primarily rest on. If students are being encouraged to reduce the quality of their publications by splitting them into lots of smaller papers, then this is just bad practice.
Upvotes: 4 <issue_comment>username_3: The use of publication metrics for assessment is unethical. This approach of "salami slicing" publications to game those metrics is an entirely reasonable response to the unethical situation you and your supervisor find yourselves in.
Perhaps, further on in your career, you can fight back against the system you find yourself in and improve it for those that come after you but - for now - you should concentrate on getting your degree and follow your supervisor's advice.
Upvotes: 4 <issue_comment>username_4: In this situation, I believe, one has to worry not so much about ethics but rather about whether one contributes to the overall degradation of the quality of information exchanged through scientific publications. For an analogy, when you try to use your car less so as not to contribute to the air pollution too much, this is not the matter of ethics, it is the matter of survival.
Upvotes: -1 <issue_comment>username_5: Cynical response here - but do you want to graduate? Do you need these papers to graduate?
Does each piece of work stand on its own? Will it get published in great journals broken apart? If so - they do what you need to do to graduate, and to meet those KPIs that might be stupid but still affect employment/income/graduation.
Sure, it might be nice to keep them all together, and it probably would make for a stronger paper - but if it means you cannot graduate, then you've traded graduation for the warm fuzzy feeling of not breaking up your work, along with the far worse feeling of failing out of your degree. If it means delaying completion - what will that cost you (tuition fees, funding lost, job opportunities)?
As for ethics - it really is shades of grey. There's absolutely a balance between degrading the quality of publications and meeting KPIs. But if the papers stand on their own, and meet the scientific standards for the field, then do what you need to do. If, however, this means that you have two substandard publications, or that you need to target low-quality journals, then it's not unethical but often counter-productive career wise.
If your second piece of work shows your earlier work to be wrong, and you know this, then yes it would be unethical to publish the earlier piece.
Upvotes: 2 <issue_comment>username_6: If you send paper A to a journal and A' to a preprint server, there is a clear chance that the reviewers look at the preprint A' while reviewing A. How does it look then?
Irrespective of your institutional guidelines, it's always better to put your best foot forward when submitting a paper to a journal. Withholding can only backfire.
**Edit after comment by Wrzlprmft**
Thank you. Well, then my answer makes a "U" turn. If the institute measures the quantum of work by the number of papers, and if you feel that the research is good enough to be split into two parts that would both be acceptable to a quality journal, then it's a good gamble; there is no question of it being unethical. Neither the journal nor the university has put restrictions on the timing of the research conducted/invented.
Upvotes: 0 <issue_comment>username_7: I don't think this is merely salami slicing, but something less ethical.
Salami slicing, in my understanding, is when you have two related results which you choose to publish separately rather than in a combined paper. In that case, it is a trade-off between having two weaker papers or one stronger one, and I don't see that there is anything particularly wrong with either option, particularly when it is the natural response to some external pressure.
However, here you appear to only have one result (that you can improve the accuracy to that of A'), and to want to get a weaker version of the same result accepted before you tell anyone what your real result is. Perhaps I've misunderstood, but the fact that you talk about delaying submitting the second paper suggests that the existence of A', if disclosed, would prevent your results on A from getting published. In that case I think you would be misleading the journal by not disclosing this.
Upvotes: 2
|
2021/06/26
| 1,475
| 6,460
|
<issue_start>username_0: I have submitted a research article on a particular research problem to a mathematics journal a few days ago. But now I have been able to answer the same research problem by a different method, and hence I have prepared another article on the same problem using a completely different method.
Can I submit the second article to another journal?
The reason I ask is that every journal says that the same work should not be submitted to two different journals for publication simultaneously.
However, my second paper is completely different; only the problem is the same as in the first paper. I have seen many mathematics research papers that work on the same research problem in different ways, and all are published.
That is why I think I can submit the second article to another journal as well.
Any advice or suggestions, please?<issue_comment>username_1: I do not see an issue with the requirement that the same article may not be submitted to two journals at the same time, since, based on your description, these seem to be two essentially different papers. I do, however, see another issue: can the novelty be sufficiently judged by the reviewers?
In particular I think it would be dishonest not to mention that the same problem has been solved in the other article. Then the reviewer can look at the other paper on arXiv (both of them are on arXiv, right?) and judge whether the second one is novel enough.
**Edit** for clarification based on the comments: I think that in both papers the other one should be mentioned and cited.
Upvotes: 4 [selected_answer]<issue_comment>username_2: If each paper cites the other, then **yes you can** submit both papers simultaneously for publication -- that is, there is no ethical or procedural problem with doing so.
However, like most things that are possible and ethical, you may or may not actually want to do it. I think it is worth some discussion of pros and cons. I will assume that you are seeking an academic career in mathematics: otherwise much of this will not apply to you (but will, I hope, to some readers). Here are some questions to guide the discussion:
>
> Are the two papers of independent interest? That is, would an expert in the field who had both in hand want to read both?
>
>
>
If the answer is *yes*, then it would be very reasonable to publish both. Two papers solving the same problem is not an issue: as you point out, new proofs of old results are commonplace in mathematics papers. (Most often new proofs are not viewed as work of the very highest level, but sometimes they are, and I'm sure we all have our favorite examples. Anyway, most published mathematics is not "work of the very highest level.") However if you are publishing a second proof of a known result, then you have to explain the novelty of the *approach* and not just of the *result* itself. But anyway, there is no question that a different proof of a result is not "the same work" in the sense that the journals are warning you about.
>
> Would it make sense to combine both proofs into a single paper? Would it be profitable to do so?
>
>
>
Sometimes the *same work* has multiple proofs of a single result. This is a good way of signalling that neither approach is clearly superior and also provides a natural opportunity for comparison -- for instance, perhaps each proof can also be made to yield further results that the other cannot. Or maybe one is shorter but the other is more self-contained/elementary.
By the second question, I mean the (perhaps difficult) one of how the community would view one paper containing two proofs versus two papers published at about the same time. This is a very "sociological" question that I don't see how to answer in a principled way: it really just depends what people believe. My understanding is that most parts of science outside of pure mathematics are much more tolerant of "parallel publication" than pure mathematics. (<NAME> quipped that the value of a mathematician is judged by taking their worst paper and dividing by the number of papers. Like any good horror story, it has just enough basis in reality to scare us.) Within pure mathematics parallelism is better received in some areas than others. But it is definitely possible that if you combine the two works you could publish both of them in a better journal than you could otherwise publish either one (and moreover, in most parts of pure mathematics, if journal A is clearly stronger than journal B, it is probably better to have one paper published in journal A than two papers in journal B).
>
> Even if one paper largely or completely supersedes the other, does the chronology of the work justify publishing both?
>
>
>
Academics are temporal beings and also temporal workers: whereas some artists can work on something for years or decades and show you only the finished product, almost no academic can get away with saying "I am working steadily on a very promising programme. I will show it to you when I've finished it and then worked out the best possible presentation, some years from now." In practice, it is usually the case that you publish things that have not been put in their final form (I mean this in a demanding intellectual sense). I worked hard for about five years with a junior collaborator (first a postdoc then an assistant professor, whereas I was already tenured). When we would get a nice result, it would really excite me and point to the next result...which I would often want to put in the very same paper that we were working on. This made my collaborator justifiably nervous. Our last two joint works were about 50 pages each, published in the same year though we had spent at least three years working on them. Ever since then I have been pursuing implications of this work, and I currently have a preprint that is nearing 100 pages. If we had waited ten years, we could have published one article or book where everything fits together beautifully. But that's not how academic work works, most of the time.
On the other hand, you describe having prepared the second article in the "few days" after submitting the first one. Perhaps you don't realize that the first article is going to spend, most likely, the better part of a year or more at various stages of the refereeing and publication process. It is really not too late to combine the works into one article if you choose. Really!
Upvotes: 2
|
2021/06/26
| 1,385
| 4,446
|
<issue_start>username_0: I wonder where I can find the number of citations the top x % of researchers of a given research field have received. I'm still specifically interested in the following two research fields: natural language processing and computer vision.
(No self-infatuation intended: I received some third-party request, but I'm fully aware of the numerous bibliometric pitfalls.)<issue_comment>username_1: Use [Web of Science](https://login.webofknowledge.com/) (paywalled).
Search by the topic ("natural language processing"). You'll reach a page with, as of time of writing, 30,348 results. You can now sort the papers by authors (in the left-hand panel), and select the top x% researchers that way. Then you pick the author at the bottom of that x%, find only the papers by him/her, and generate a citation report.
As of time of writing, there are 62,594 authors. I'm not going to download all the data, but Web of Science makes it possible to display the top 500 of them. The 500th-ranked author is JIMENEZ-LOPEZ MD, which a quick Google search finds is [this professor](http://www.romaniques.urv.cat/ca/llengua-espanyola/8/maria-dolores-jimenez-lopez). Web of Science gives her h-index as 4, number of times cited as 59, and the number of citing articles as 44.
If you need help with operating Web of Science, feel free to ask your librarian.
Upvotes: 2 <issue_comment>username_2: [Monsteriah](https://www.reddit.com/user/Monsteriah) [suggested](https://redd.it/o8jpgc)
searching by keyword on Google Scholar: this indeed gives a decent estimate of the number of citations received by the top x researchers in a given research field (defined by a Google Scholar keyword such as "[computer vision](https://scholar.google.com/citations?view_op=search_authors&hl=en&mauthors=label:computer_vision)" or "[natural language processing](https://scholar.google.com/citations?view_op=search_authors&hl=en&mauthors=label:natural_language_processing)").
---
Examples:
* The #2000 researcher in [computer vision](https://scholar.google.com/citations?view_op=search_authors&hl=en&mauthors=label:computer_vision&after_author=bWJDACiD__8J&astart=190) ([mirror 1](https://web.archive.org/web/20210627171544/https://scholar.google.com/citations?view_op=search_authors&hl=en&mauthors=label%3Acomputer_vision&after_author=rhcAAMPx__8J&astart=9990), [mirror 2](https://archive.ph/g1Ptn)) has received 3620 citations.
* The #2000 researcher in [natural language processing](https://scholar.google.com/citations?view_op=search_authors&hl=en&mauthors=label:natural_language_processing&before_author=ecWn_88nAAAJ&astart=1990) ([mirror 1](https://web.archive.org/web/20210627171543/https://scholar.google.com/citations?view_op=search_authors&hl=en&mauthors=label%3Anatural_language_processing&after_author=Q6pqAAr8__8J&astart=1990), [mirror 2](https://archive.ph/as8kR)) has received 1005 citations.
Note: while it is very tempting to manually change the "astart" parameter
in the URL [https://scholar.google.com/citations?view\_op=search\_authors&hl=en&mauthors=label:natural\_language\_processing&before\_author=ecWn\_88nAAAJ&**astart=190**](https://scholar.google.com/citations?view_op=search_authors&hl=en&mauthors=label:natural_language_processing&before_author=ecWn_88nAAAJ&astart=190), this doesn't work and the result is misleading: the pagination will change but the displayed authors won't. One also has to change the "after\_author" parameter.
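To make the pagination mechanics concrete, the page URLs can be assembled with Python's standard library. The parameter names are taken from the URLs above; their exact semantics are Google's, undocumented, and may change, so treat this as a sketch:

```python
from urllib.parse import urlencode

BASE = "https://scholar.google.com/citations"

def keyword_page_url(label, after_author=None, astart=0):
    """Build a Google Scholar author-search URL for a keyword label.

    Per the observation above, `after_author` (an opaque pagination
    token taken from a previous page) and `astart` must be set
    together to reach a later page; changing `astart` alone does not
    change the displayed authors.
    """
    params = {
        "view_op": "search_authors",
        "hl": "en",
        "mauthors": "label:" + label,
    }
    if after_author is not None:
        params["after_author"] = after_author
        params["astart"] = astart
    return BASE + "?" + urlencode(params)

# First page of the keyword, then a deep page using a token observed earlier
first = keyword_page_url("computer_vision")
deep = keyword_page_url("computer_vision",
                        after_author="rhcAAMPx__8J", astart=9990)
print(deep)
```

This only builds the URLs; fetching them is still subject to Google Scholar's rate limiting and terms of use.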
Limitations:
* [I don't know whether there is a way to retrieve the number of researchers with a given keyword on Google Scholar](https://academia.stackexchange.com/q/170453/452) so this may not allow us to find the number of citations the top x % of researchers of a given research field have received.
* It is very tedious to look at the top 2000 researchers or more, because one has to scroll down and click for every 10 researchers due to the current Google Scholar user interface.
* It requires the authors to have a Google Scholar profile. Most do, at least among computer science researchers who are [alive](https://academia.stackexchange.com/q/61689/452), but not all. I believe it also requires the authors to have indicated keywords.
---
Screenshot of the keywords in a [Google Scholar profile](https://scholar.google.com/citations?hl=en&user=UZ5wscMAAAAJ):
[](https://i.stack.imgur.com/RDwMy.png)
Upvotes: -1
|
2021/06/27
| 1,785
| 7,543
|
<issue_start>username_0: Computer Science and Engineering have a publication system that is quite different from that of other disciplines, in that conference proceedings have great importance and partially take the role of journal papers (see e.g. [this answer](https://academia.meta.stackexchange.com/a/4472/958) or [this question](https://academia.stackexchange.com/q/821/958)).
As an outsider from a neighbouring field, I can see some disadvantages of this system:
* tight time constraints in authoring and refereeing actively impact the quality of publications: a paper is not submitted or accepted "when it is good enough", but rather when the deadlines force people to act.
* there is a certain duplication of content between conference and journal papers.
* travel fees (pre-pandemic, at least) add an unnecessary component to publication costs, and raise the entry bar for researchers from some countries.
On the other hand, I find it harder to pinpoint clear advantages of the system. If you wanted to convince my field to switch to this system, what arguments would you use? What are its scientific benefits? How does it lead to a better research ecosystem?
This system is relatively recent, so one would hope that it solves some issues with the older model.
Feel free to challenge the frame if you disagree with some of the disadvantages stated above --- that would answer my question, too.<issue_comment>username_1: This will be a personal perspective, but I hope it sheds some light. First, the fact that CS conferences come in series (usually annual) means that the deadline question isn't quite so limiting. There is always next year's conference.
But, the more important thing, in my experience, is that the periodic meet ups of people in a small sub field or special interest group aids collaboration and synergy.
I travelled to a lot of conferences all over the world for several years. Those get-togethers let a few of us with similar interests brain-storm, ask questions, explore ideas that might be developed, and just reconnect personally, making the ongoing collaboration more interesting. We also got to meet new folks that might be interested in what we were doing.
Some of those conferences (such as those sponsored by ACM) also have special working groups that appeal to a small part of the attendees. Students, in particular, can meet some of the superstars and can be introduced to the members of the working circle of, say, their advisor.
Both the social and intellectual components are valuable in this system. I don't know if we value collaboration more that in some other fields, but the conference cycle certainly enhances it.
I would miss it if it were otherwise. Even though internet communication enables quite a lot of what we do.
---
Let me address two of your concerns directly.
Tying publications to conferences keeps the flow of new ideas fresh. The publications, coming quickly after results are obtained, give people something to work with to, hopefully, extend insight and find new potential threads of research. The papers are "more than abstracts" and present actual results that can be built on. Big ideas don't get trapped in a potential multi-year journal acceptance cycle, though some results have been in process for years.
The ACM digital library contains a ton of stuff. It is the go-to resource for conference proceeding for stuff that they sponsor. So, while things can be obscure, not all of the important stuff is.
Some of those annual conferences are held in places where it is difficult for locals to travel elsewhere. This is intentional, to help bring the community together.
Upvotes: 2 <issue_comment>username_2: I can only really answer specifically for machine learning research, but the purpose of conferences is to avoid the long and inconsistent review times, and the possibly tedious and slow revision cycle, of journal publications. Depending on the journal, receiving the reviewer response can take anywhere from 1-2 months to about a year. And the average paper goes through 1 cycle of revisions, so we can basically double that timeline, and that's assuming the revision ends in acceptance, which is unknown until the end.
For the CS conferences, there's a guaranteed four-month turnaround, with no uncertainty, and those four months include one round of revisions.
So the advantage is pretty clear: a faster and more consistent publication process.
As for the disadvantages you list, I don't really agree with any of them (I won't go into detail why).
I will point out one thing I do think is a disadvantage: the hard page limits imposed by the conferences (typically like 7 or 8 pages, including tables and figures). This is presumably a consequence of the hard deadlines, so that reviewers aren't taken hostage by a long paper. There are a lot of interesting figures made way too small just to meet the page requirement. You also see lots of papers where lots of proofs and details are moved to the appendix. Now this is mostly fine, but sometimes you feel like you need to have two different documents open (the paper and the appendix) and be going between them repeatedly to understand the content. I also feel like it definitely encourages papers to be as brief as possible and avoid exposition, which makes it confusing unless you are already an expert in the subject (which raises the question: how is one to become an expert in the first place?). Lastly, the page limit precludes review type papers.
Upvotes: 5 [selected_answer]<issue_comment>username_3: >
> tight time constraints in authoring and refereeing actively impact the quality of publications: a paper is not submitted or accepted "when it is good enough", but rather when the deadlines force people to act.
>
>
>
In the field of natural language processing (NLP), rolling reviews were introduced earlier this year, which hopefully will mitigate this issue: <https://aclrollingreview.org/> Also, there are a decent number of conference deadlines throughout the year <https://aideadlin.es/> (but admittedly there are some periods lacking deadlines, e.g. ~June 15 to Nov. 15 this year in NLP).
>
> there is a certain duplication of content between conference and journal papers.
>
>
>
The main NLP conferences explicitly state the maximum overlap requirements between the paper submissions and past published work, [e.g.](https://2021.aclweb.org/calls/papers/) ([mirror](https://web.archive.org/web/20210628021159/https://2021.aclweb.org/calls/papers/)): "In addition, we will not consider any paper that overlaps significantly in content or results with papers that will be (or have been) published elsewhere. Authors submitting more than one paper to ACL-IJCNLP 2021 **must ensure that submissions do not overlap significantly (>25%) with each other in content or results**".
>
> travel fees (pre-pandemic, at least) add an unnecessary component to publication costs, and raise the entry bar for researchers from some countries.
>
>
>
True, but journals often ask for money from either the authors or the readers. They extort billions of USD each year worldwide from universities: [Reference for annual journal subscription costs paid per university?](https://academia.stackexchange.com/q/29923/452). There are some exceptions, e.g. the [Journal of Machine Learning Research](https://en.wikipedia.org/wiki/Journal_of_Machine_Learning_Research), so the issue isn't intrinsic to journals, just [bad old habits](https://academia.stackexchange.com/q/51730/452).
Upvotes: 0
|
2021/06/27
| 433
| 1,468
|
<issue_start>username_0: One may specify some keywords in one's Google Scholar profile, [e.g.](https://scholar.google.com/citations?hl=en&user=UZ5wscMAAAAJ):
[](https://i.stack.imgur.com/RDwMy.png)
How can I see the number of Google Scholar profiles that have a given keyword? E.g., the number of Google Scholar profiles that have a given keyword "[computer vision](https://scholar.google.com/citations?view_op=search_authors&hl=en&mauthors=label:computer_vision)" or "[natural language processing](https://scholar.google.com/citations?view_op=search_authors&hl=en&mauthors=label:natural_language_processing)".<issue_comment>username_1: One cannot see the number of Google Scholar profiles that have a given keyword, unless one is willing to click on the `>` button to go to the next 10 profile links as many times as needed to reach the last page (> 1000 times). One may encounter a human verification box from Google along the way as well.
Upvotes: 0 <issue_comment>username_2: I think it's possible.
You'll find this Python code very handy for this challenge: <https://github.com/WittmannF/sort-google-scholar>
You will be able to identify keywords, and rank your results (e.g. by citations/year).
If the Captcha process gets in the way, use this workaround:
<https://medium.com/analytics-vidhya/how-to-easily-bypass-recaptchav2-with-selenium-7f7a9a44fa9e>
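For completeness, here is a hypothetical sketch of how such a count could be scripted, in the spirit of the repo above. The `astart` offset parameter, the page size of 10, and the `user=` link pattern are assumptions about Google Scholar's current markup and may change; scraping may also run against the site's terms of service, so treat this purely as an illustration. The page-fetching function is injected so the counting logic stays testable offline:

```python
import re

# Hypothetical label-search URL; the "astart" offset parameter and the
# page size of 10 are assumptions about Google Scholar's current markup.
SEARCH_URL = ("https://scholar.google.com/citations"
              "?view_op=search_authors&mauthors=label:{label}&astart={offset}")

def count_profiles(html: str) -> int:
    """Count distinct author profiles on one result page by their user ids."""
    return len(set(re.findall(r'citations\?(?:[^"&]*&)*user=([\w-]+)', html)))

def total_profiles(fetch, label: str, page_size: int = 10) -> int:
    """Sum profile counts page by page until a short page marks the end.

    `fetch(url) -> html` is injected (e.g. a rate-limited requests.get
    wrapper) so the pagination logic stays testable offline.
    """
    total, offset = 0, 0
    while True:
        n = count_profiles(fetch(SEARCH_URL.format(label=label, offset=offset)))
        total += n
        if n < page_size:  # a short page means we reached the last one
            return total
        offset += page_size
```

With a real `fetch`, `total_profiles(fetch, "computer_vision")` would answer the question directly, though rate limiting and CAPTCHA handling (as in the links above) would still be needed.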
Good luck.
Upvotes: 2 [selected_answer]
|
2021/06/27
| 497
| 2,035
|
<issue_start>username_0: I was planning to submit a couple of posters to a Computer Science conference because there are some final experiments that I should perform with the data we have at hand. I wondered whether there would be an issue if I submit the complete work to another conference, citing the first conference at which I presented the work as a poster. Would there be issues regarding the level of novelty required for a conference? Any thoughts?
Thanks<issue_comment>username_1: As long as you cite the earlier work, you can submit it without issue. But it is up to the conference committee to decide if the "complete work" is sufficiently novel, given that the earlier version already exists. For many such things it would be fine, but no one but the committee can say anything about acceptance.
But, with proper citation, there are no ethical issues such as self plagiarism.
But you should also consider copyright if you have given it up. If so, this limits how extensively you can copy from the earlier work into the new and you may be restricted to quoting (literally) brief passages or paraphrasing (with citation).
The more extensive the changes are, the more likely that you will have success. Just make sure it isn't perceived as "old news".
---
Note that there may be overlap in the conference committees.
Upvotes: 1 <issue_comment>username_2: In the field of natural language processing (NLP), the main conferences explicitly state the maximum overlap requirements between the paper submissions and past published work, [e.g.](https://2021.aclweb.org/calls/papers/) ([mirror](https://web.archive.org/web/20210628021159/https://2021.aclweb.org/calls/papers/)):
>
> In addition, we will not consider any paper that overlaps significantly in content or results with papers that will be (or have been) published elsewhere. Authors submitting more than one paper to ACL-IJCNLP 2021 **must ensure that submissions do not overlap significantly (>25%) with each other in content or results**.
>
>
>
Upvotes: 0
|
2021/06/27
| 346
| 1,490
|
<issue_start>username_0: I'm reading ML papers where nearly every reference is on arxiv. It would be really nice if clicking on a footnote directly opened the paper, rather then sending me to the bottom of the PDF. I haven't been able to find anything using Google, but would be really surprised if nothing of the sort existed...<issue_comment>username_1: Perhaps not *exactly* the functionality you want, but the [arXiv Bibliographic Explorer](https://github.com/mattbierbaum/arxiv-bib-overlay) from [arXivLabs](https://labs.arxiv.org/showcase/) may be useful. It is a browser extension that allows you to view the references and citations for a paper on the arXiv without having to e.g. search for the title of the paper on Google. If you have the PDF opened in one tab and the arXiv page in another, then you can quickly find hyperlinks to relevant papers through the search functionality.
I'm not aware of any software that can modify the target of a PDF hyperlink after the PDF has been compiled. At least in my field, the formatting and style of references is not standardized on the arXiv, so I think it may be difficult to find a tool that can parse the necessary data in the general case.
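That said, when the references contain explicit arXiv identifiers, a small script over the extracted reference text (rather than the PDF's internal link objects) can at least produce clickable URLs. This is a hypothetical sketch covering only the post-2007 `YYMM.NNNNN` identifier format:

```python
import re

# Post-2007 arXiv identifiers: 4-digit YYMM, a dot, a 4-5 digit number,
# and an optional version suffix (e.g. arXiv:1706.03762v5).
ARXIV_ID = re.compile(r"arXiv:\s*(\d{4}\.\d{4,5})(?:v\d+)?", re.IGNORECASE)

def linkify(text: str) -> str:
    """Replace 'arXiv:ID' mentions with direct abstract-page URLs."""
    return ARXIV_ID.sub(lambda m: f"https://arxiv.org/abs/{m.group(1)}", text)
```

The version suffix is deliberately dropped so the link resolves to the latest revision of the paper.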
Upvotes: 1 <issue_comment>username_2: As an alternative I ended up finding: <https://www.arxiv-vanity.com/> which renders arxiv links in HTML. It does show a direct link for every reference on hover! Unfortunately the rendering is quite bad sometimes, hopefully it keeps improving.
Upvotes: 0
|
2021/06/27
| 763
| 3,447
|
<issue_start>username_0: I'm a first-year PhD student and my research is related to quiver representations; recently I have been looking at semi-invariant rings in the context of quiver representations.
Sometimes I feel like nobody cares about this particular topic that I'm working on. I recently attended a conference where many eminent mathematicians from this field of representation theory were present and I felt like nobody really cares about semi-invariant rings for quiver representations.
My question is:
1. Will this make it difficult for me to find a post-doc position?
2. Will it make it difficult for me to do research in the future, because people might not value what I'm doing?
One thing that I would like to add is that I find this field and semi-invariant rings for quiver representations interesting, and I like studying it.<issue_comment>username_1: The more tightly you focus the search for a post-doc tied to any specific research thread, the harder it will be to land a post-doc, just because there are fewer opportunities in narrow areas. You need to be flexible in the short term. You can use mathematical skills and insight to do many things.
To get autonomy you need to first have a secure position. Tenure track is probably secure enough in most fields, but in some, you really need tenure before you are free. (Fields that touch on political questions, for example.)
To keep your interest in your current specialty alive, spend some time on it periodically while you are doing things to please some PI. Take a lot of notes. Especially notes about questions that might later be explored. Once you have the needed autonomy, the world is yours to explore.
---
I finished my degree in the era before ubiquitous post-docs, but my field was so narrow that there were only two or three people outside my working group that were qualified to understand my research or interested in it at all. And, I wasn't interested in moving halfway around the world anyway. Had I been too narrowly focused at that moment, I'd have failed to move forward. But I had deep insight that I could apply to other things.
Upvotes: 1 <issue_comment>username_2: There are at least one or two dozen professors at research universities who are interested in semi-invariant rings of quiver representations.
That's as many as you will find for almost any specific research topic in mathematics.
I wonder if you went to the right conference. Representation theory is a big area, and of course most representation theorists won't be interested in semi-invariants of quiver representations - but that will be the case for almost any topic in representation theory.
In addition, there are another several dozen professors (including me! - but we don't have postdoc positions) who might be interested in your research if it has implications in their area (using known connections between quiver representations and their area).
Now if I were your advisor, I would indeed be worried that I was leading you into some dead end that no one was interested in. Hopefully your advisor is a better mathematician (in this regard) than me and has some good reasons to suggest the problems you are working on, including some good reasons to believe connections to related areas might be uncovered.
How geographically mobile you are for a postdoc also matters; some countries are more interested in this stuff than others.
Upvotes: 3 [selected_answer]
|
2021/06/27
| 726
| 2,920
|
<issue_start>username_0: I'm a 3rd year undergraduate student. I was thinking of making a side project where I analyze the effects of different vaccines on the rate of new cases. I will be using methods I've learned from my recent Regression Analysis class.
I'll also be following the formal structure of a research paper (Abstract, Introduction, Methods, Results, Discussion, References, etc). When I'm done, I'm thinking of asking a statistics professor to review my findings just to check I've done all the tests right.
It will be put on my resume for future internship applications. Is it correct to define it as a "research paper" to my employers if it's not being published anywhere and the only author is me?<issue_comment>username_1: It seems risky to put something on your CV that no one has vetted. If this were a course project and a professor could attest to its value then you'd be on safer ground.
But you can always list the project itself (not the paper) as "Work in Progress" on a CV with a view that it will be published eventually.
Some things that have turned out to be very important have been done and then put away in a drawer somewhere only to see the light of day decades later. <NAME> did that with his early work on The Calculus. But its value was only obvious after it became public. Newton didn't claim, when he put it in the drawer, that he had done something significant. That only came later.
If you want to put it on your CV as a research *paper*, I suggest you submit it for publication. But the project is still valid as work in progress in any case.
Upvotes: 5 [selected_answer]<issue_comment>username_2: No, the correct term for an unpublished work is "manuscript."
Reference: [What are the boundaries between draft, manuscript, preprint, paper, and article?](https://academia.stackexchange.com/questions/13089/what-are-the-boundaries-between-draft-manuscript-preprint-paper-and-article)
Upvotes: 3 <issue_comment>username_3: Yes, it is a research paper. The term "research" only means the text is a result of research. A paper that is published is "published paper". You can also call your paper a *preprint*. It is a more specific way to characterize the paper.
Upvotes: 3 <issue_comment>username_4: If you have done work/research, then any paper coming out of this, a report on it, is literally (a report on) research.
Similarly, if you put it on-line in any way, it is literally "published".
The loaded and dubious sense of "published" these days is "accepted in a peer-reviewed situation". :)
So, in my opinion, your paper is research. If you make it publicly available it is literally published, though not in the status-enhancing sense of passing gate-keepers/referees/editors. :)
Depending what you want to convey about your work, taking these contemporary conventions into account should surely allow you to avoid any accusations of deception. :)
Upvotes: 1
|
2021/06/27
| 1,908
| 8,228
|
<issue_start>username_0: When I start reading a research paper in my research area, I am flooded with a lot of new words, techniques, etc. In the majority of cases, I start reading the **relevant** textbooks in order to understand the research paper well, instead of going back to research papers. I am not sure whether this will work, but it gives me confidence in the topics under research.
It is well known that textbook reading may take a lot of time and energy. Continuity can also be an important factor while comprehending a textbook.
At least in my university, studying research papers is much encouraged by experts or professors. And supervisors, known to me, suggest textbooks to read very rarely. I am not sure about the practices of other countries.
---
What is the role of textbook reading for a PhD researcher?
Is it underrated? Why do some supervisors ask for reading research papers only, and don't give explicit priority to textbook reading?
Do PhD students at top universities have a habit of reading textbooks?<issue_comment>username_1: I am an established academic and I read textbooks. I read introductory texts when moving into new areas, and more targeted texts as reference material. I also watch YouTube videos - if someone else has taken the time to put information in a context that is easy for me to absorb, why not use it?
What you should be considering is what you will get out of the text. Textbooks are usually great for general background information or to overview a field. They can also be helpful to learn specific techniques. Sometimes, there'll be a text that perfectly introduces what you need, and does it well. When that's the case - use it.
As a newer researcher you need to make sure that you are reading in a structured and organised way, not just randomly trying to absorb everything and equating reading with advancement. Sometimes textbooks are useful. Other times they are not. Just make sure you are reading with purpose.
Upvotes: 6 <issue_comment>username_2: Reading whole textbooks takes a lot of time and should only happen when the book covers your PhD topic or a closely related field to a large extent. Besides that, I would limit reading to selected sections or maybe chapters, to understand common knowledge.
Usually text books teach you common results most researchers would agree on. They try to be comprehensive and teach the reader. Papers discuss new approaches and ideas, which might turn out to be insufficient or plain wrong. Over the time the idea of a paper might be embedded in a larger context or be viewed from a different angle, which might make it easier to understand.
You have to do both as a PhD student:
* you have to get the common knowledge, that should happen in your Bachelor or Master studies. There your read text books. As a PhD student you should know the terms and common knowledge and read less text books.
* you have to understand the current research in your field and compare your own work with the work of others. For that you read papers. In most cases, you don't have to understand every detail, but you should get the idea. Maybe you have to learn to skip such details. Only papers that are very close to your work do you have to understand bit by bit - because you have to explain what distinguishes your work and why your approach is worth investigating.
Upvotes: 3 <issue_comment>username_3: 1. A really well-written book or monograph can reduce the time required to get the core content from up to a dozen or more research papers.
2. A deep research paper *can* have ideas hidden in it which even the experts (and sometimes authors!) have missed.
One of the ways one succeeds in research is by finding ideas which others have missed. Such insights are unlikely to be found if one follows a well-laid out development such as is often seen in textbooks.
On the other hand, there are many forests worth of research papers. You could easily miss the trees if you spend all your time in them!
An approach that has been useful personally (for mathematical topics) is to make notes on assumptions while reading research papers and go back to those notes if one loses track completely. The assumptions could then be followed up through textbooks. Similarly, while reading texts, it can be quicker to avoid reading "line-by-line" and fix "rough" arguments on one's own. If a sufficiently wide gap in one's understanding is detected, one can go back and read the relevant section in detail.
Upvotes: 3 <issue_comment>username_4: I rather suspect this is very much down to the subject being researched. As someone who has a Ph.D in Chemistry I would have said that 90% of my research needs came from published papers and monographs as they were probably the most relevant source. Chemistry textbooks would give you a valuable insight into the fundamentals of an area but research would be far more up-to-date.
Upvotes: 1 <issue_comment>username_5: This depends very much on what you're trying to learn, but you should never rule out the right textbook (or any other tool you can use to acquire knowledge, for that matter).
In quite a few cases, if you're brand new to a field you may not even be able to follow research articles until you've hit the textbooks. If for no other reason than this: in early papers in a field, notation hasn't even been settled upon. Newer papers will tend to use consistent notation, but probably wouldn't provide definitions for the unfamiliar.
Also, some stuff you need to learn to round out your education and help you communicate with those adjacent to your space. I did my PhD in a systems neuroscience area, and was woefully inadequate in neuroanatomy at some point. It would have been a huge waste of time to learn neuroanatomy from the literature when there are dozens of good textbooks -- and I used the textbooks. I may not have the understanding of a practitioner in the field, and my knowledge isn't nuanced, but I'm not a neuroanatomist -- I'm a neuroscientist that needs familiarity with neuroanatomy.
Upvotes: 3 <issue_comment>username_6: There are many different qualities of textbooks, after all! :)
Many are exactly imitations of older (not necessarily good, but successful) books, perhaps with better graphics, or some other superficial changes.
Some have resonated with for-profit publishers' ideologies, and are heavily promoted.
Etc.
Even among those which aim to serve their subject, there is often a tendency to be toooo encyclopedic, which has some virtues (for reference purposes), but does certainly heavily mask story lines.
And, yes, sometimes there are monographs ... which can be exactly what you need, if they hit your interests, or can be completely orthogonal to your goals.
It is highly non-trivial to gauge the quality of a book without looking through it in some detail (not to mention having an idea about the relative competence of the author). So, for myself, I've bought (out of my own pocket, supposedly from my clothing budget... :) thousands of books, most of which are high-end textbooks. I've looked through every one, searching for potentially amazing ideas that were previously unknown to me...
EDIT: Likewise, I certainly do also look through a great many on-line preprints for the same reasons...
Yes, some expense, and one of my activities is to try to replace some ridiculously expensive textbooks by my own lecture notes, in several subjects.
But/and both for my own purposes and for purposes of competent exposition, I do need to know whether I'm *missing* something... especially in fields where I supposedly am expert. :)
True, not everyone can skim through books or large papers quickly. So a strategy that requires that may be infeasible for some. I'd hesitate to "excuse" it, though, as though not doing so were essentially irrelevant.
Yes, I do essentially require my own research students to read most of my own notes, as opposed to explicitly requiring reading of "official textbooks", but this is less forgiving than it might sound, since my own notes do cover quite a bit. And, my people seem to be inclined to look at the standard textbooks in any case.
(This is in math, at an R1 state school in the U.S.)
Upvotes: 2
|
2021/06/28
| 2,138
| 9,513
|
<issue_start>username_0: For my master's thesis, I completed a systematic literature review and supplemented it with surveys and patient interviews. The topics required no medical information of patients.
My supervisor was aware of both the surveys and interviews, and the ethics board was never brought up.
All participants completed a consent form, but I understand now an ethics review may be needed. My submission is soon ... is this an auto fail?<issue_comment>username_1: It is impossible to say because we don't know the rules of your institution. But the answer is it **probably should be**. You need to immediately tell your advisor and be prepared for bad news. I'm sorry your program failed you on this.
Upvotes: 5 <issue_comment>username_2: If you were in a field that tends to do research with humans you probably would have taken some sort of course that explains to you research ethics procedures and such, and so you would have known this was a needed step. I had dozens of versions of this training as a student even though I was only doing animal research at that time. It seems like you have at least some familiarity with research ethics if you've gathered informed consent and such, but as you are now finding that is not enough.
Ethics reviews for research aren't an easy process, and you definitely need some mentorship your first time through, so it's a failure by your advisor to not shepherd you through things and at a minimum to let you know you need this training.
It's likely your work can't be published if it wasn't conducted under proper ethical procedures, but that doesn't necessarily impact your *graduation*. Whether you "pass" or "fail" is completely up to some academic unit - your advisor/committee/department. We won't be able to tell you what their decisions will be, you'll have to work with them. *Start working with them as soon as possible* to find out what your next steps are.
Good luck!
Upvotes: 3 <issue_comment>username_3: As stated, conducting research on humans without prior ethical approval is a grave transgression. The work becomes unpublishable because no journal would touch it (as a matter of universal policy) and you are open to formal censure.
However, this depends on nuances of your country, institute/department and possible categorisation of your work. Not all human research is ethically equal, and this is why a supervisor's guidance is crucial. *Their* academic well-being should be on the line over the activities of their mentees, and yours failed to do their job, but that's a rant for another time.
What follows does ultimately hinge on supportive engagement of your supervisor, so it is doubly dismaying to hear what sounds like distancing noises from them. If they are jettisoning this very professionally hazardous mess on to you then I am sorry, and I wish I could say it was rare.
In UK Psychology departments provision is often made for "generalised" ethical approval based off the Principal Investigator (your supervisor), or prior approved research from which the present does not materially deviate in ethical terms. There might well be rules which say no, or only cursory further approval is required for a study which essentially extends a previously approved study, i.e. by deploying the same methods to study the same topic/cohort such that no new ethical considerations are raised. If your supervisor is lucky (these days), they might have wangled blanket ethical approval for a whole class of prospective experiments on this rough basis: same kinds of methods on the same wholly mundane participants for roughly the same ends. Some kind of special remit is also typically extended to final year undergraduate project students, given there are so many and their projects are often trivial recapitulations of wholly harmless designs reused yearly by the supervisor in this way. (Note that those are usually very strongly stipulated *conditions* of being able to go this quicker route, which still entails prior ethical approval just through a faster cut-down process, and a signed statement of taking responsibility from said supervisor).
Ultimately then, the slow and onerous ethics committee approval process will probably include allowances to make specified cases of especially low-risk research more expedient, and although the bounds of these are very tightly specified because of the gravity of ethical risk, many PIs are wont to exploit the letter of the law (academics are professional arguers after all). As I've said they typically do so for two reasons: to circumvent the sheer onerousness and slowness of the full approval process, or to retrospectively render disorganised and messily planned research ethically approved. I have seen these rules creatively interpreted in the service of either goal far more often than those writing such rules would like.
Your case might fall into the second category, and it should be clear here why the benevolence and alliance of your supervisor is key. I have seen supervisors protect their juniors (and their findings) from the mess that you are in, following similar disorganisation and poor mentorship, by finding a way to frame the research conducted as coming under the remit of previously granted ethical approval. Perhaps this rogue study can in fact be argued as a mere extension of a previously approved study. Perhaps this study's ethical ramifications are sufficiently trivial that it could fall under some aforementioned triviality type allowance. Significantly, all of this is meant to be obviated by the iron imperative to resolve any such approval ambiguity *before* conducting the research - but perhaps there are only one or two, non- or back-dated bits of paperwork that need be collected to bring the study back into defensible territory.
**None of this is at all on the table unless the supervisor wants it to be however, and if they don't, the mere suggestion of this to them will be responded to as a grave professional hazard. It sounds unfortunately like your supervisor specifically might prefer what is now smeared on you, to remain smeared on you alone.**
I saw this kind of thing happen as the not-unusual solution to a full approval process being too long for certain deadlines, valuable juniors (or their findings) being jeopardised by the kind of situation you've been allowed to get into, other forms of craven programme disorganisation, or combinations thereof. Is this whole thing very uncomfortable if not inappropriate? I invite you to read some other responses on this site regarding how much inappropriateness is forgiven, when it is not a mere junior in the firing line.
Upvotes: -1 <issue_comment>username_4: I need to disagree with the hard-line answers saying that you are just plain out of luck. **An ethics approval is *not* always required with human subjects research.**
In the United States, DHHS maintains [a set of flow charts for decision-making about human subjects research](https://www.hhs.gov/ohrp/regulations-and-policy/decision-charts/index.html). Most relevant to this question, there is an exemption that specifically addresses "survey procedures, interview procedures", as long as the subjects are not identifiable or there is no reasonable risk associated with disclosure of identity. Similar exemptions may apply in other countries as well.
From what you have written, it sounds like your supervisor believed that such an exemption applied to the work that you are doing. Otherwise, from sheer self-protective reflex they would almost certainly have brought up institutional review processes.
I would suggest that you bring this up with your supervisor from this perspective.
* If you can document that your work is already covered under an appropriate exemption and complies with your institution's processes, then your worries may be over and you can simply go on using your nice, consented data set. From what you have written, this sounds possible, though it is impossible to know without knowing the full details of your circumstances and your institution's regulations. Your advisor might even have gone through an institutional process without telling you, if it was just a routine exemption.
* If the exemption does not apply, but there was a reasonable case to believe that it did when you began, then it may still be possible to bring it to the institutional review board as an emergent situation, again depending on the particular circumstances.
* If both of these fail, you may indeed need to discard your data and begin anew, but your advisor should work with you to see if the situation can be salvaged first.
Upvotes: 5 <issue_comment>username_5: **Talk to your committee and your advisor. Submit to IRB now.**
It sounds like you were genuinely unaware that your University's IRB board had to sign off on the experiment. Talk with your advisor and figure out the best way to submit to IRB after the fact.
You may be able to get after-the-fact approval. This is a situation where your advisor shoulders a good portion of the blame. It's their job to shepherd you through the process, of which the IRB is a part.
The worst case is you'll have to leave the survey info out of any published work. If your field doesn't regularly involve humans as subjects in tests, it's possible your advisor genuinely didn't know about IRB approval. IRB boards are made up of individuals - they may be convinced this was an honest mistake.
**Do not publish on this work without IRB approval**
Upvotes: 1
|
2021/06/29
| 713
| 2,519
|
<issue_start>username_0: I get plenty of scam mail, but the scam mail from predatory journals has a unique habit of using odd substitutes for normal roman letters. For example;
>
> You Are Inᴠіtҽd to Pυbliѕհ Your Original Ɍҽsҽαrch with Us
>
>
>
Other scams don't do this. They just make their appeal in normal characters. For example;
>
> URGENTBUSINESSPROPOSAL
>
>
>
Bad grammar, but no funky letter substitutes.
So what are the funky letter substitutes meant to achieve? Is it supposed to evade a spam filter? It seems like it would be really easy to filter for, because no normal person does that. Is it supposed to look more credible? How could that look more credible?
I am fascinated and mystified.<issue_comment>username_1: "Unicode-obfuscation" (link to [pdf](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.95.9653&rep=rep1&type=pdf)) is a common spam-filter evasion technique that is not unique to publishing scams. Your sample is not representative, so it only *appears as if* other scams don't use the technique as well. (Anecdotally, I also receive publishing spam that does not use unicode obfuscation, almost every day.)
Upvotes: 7 [selected_answer]<issue_comment>username_2: [This paper](https://arxiv.org/pdf/2004.05265.pdf) (Arxiv PDF Link) argues that it is indeed done to bypass spam filters.
From their conclusion:
>
> Moreover, we tested this method with a Microsoft Business email.
> We first sent an email containing a lot of keywords frequently encountered in spam emails, and this email was flagged as spam. Then
> we sent the same email, with some of the characters replaced by
> their “visually equivalent” characters from Cyrillic alphabet, and
> this email was delivered to the Inbox. This suggests that this method
> can currently bypass existing spam filters.
>
>
>
Upvotes: 6 <issue_comment>username_3: You already describe the goal: Evade the spam filter.
The question is why these letters are used and, as brought up in the comments, why not homoglyphs that cannot be distinguished at all.
There are two answers, which are related to each other.
* A spam filter may already be trained to detect a homoglyph Unicode 'e' in a word that otherwise uses ASCII letters, but may not yet be trained for the substitutes you're seeing.
* You are only seeing the successful spam mails, due to [survivorship bias](https://en.wikipedia.org/wiki/Survivorship_bias). Spammers try all kinds of tricks, and the one in this e-mail happens not to be filtered effectively yet.
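This also suggests why filters can catch up: mixed-script words are easy to flag once you look for them. Here is a minimal sketch of such a heuristic (my own illustration, not any particular filter's implementation):

```python
import unicodedata

def mixed_script_words(text: str):
    """Return words that mix letters from different Unicode scripts,
    a simple heuristic for spotting homoglyph obfuscation."""
    flagged = []
    for word in text.split():
        scripts = set()
        for ch in word:
            if ch.isalpha():
                # The first word of a character's Unicode name names its
                # script, e.g. "LATIN SMALL LETTER E" vs.
                # "CYRILLIC SMALL LETTER IE".
                scripts.add(unicodedata.name(ch, "UNKNOWN").split()[0])
        if len(scripts) > 1:
            flagged.append(word)
    return flagged

# "R\u0435search" uses a Cyrillic 'е' in an otherwise Latin word.
print(mixed_script_words("R\u0435search grant"))
```

The arms race is then about which substitution tables a given filter already covers.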
Upvotes: 2
|
2021/06/29
| 1,102
| 4,437
|
<issue_start>username_0: I will be applying to study mathematics in university soon, and in order to choose which universities I should apply to I am looking at university rankings among other factors.
However, many of the university rankings that I have looked at often differ significantly, eg placing a university 10th in the world in one website and placing it 30th in the world in another website.
Is there a ranking of universities that I can rely on and that is widely accepted by academics? I am particularly interested in university rankings for the UK, as that's where I live and so I'll most likely study there.
Thank you for your help.<issue_comment>username_1: There's no "official" rankings because who would have the authority to decide on what metrics make a university the "best"?
Only a body that was appointed by universities themselves would have this authority and given that each university has a different focus, getting them to agree on a set of universal metrics to be assessed against would be nigh impossible.
That being said, the closest thing you'll get to an "official" ranking is probably [The World University Rankings](https://www.timeshighereducation.com/world-university-rankings/2021/world-ranking#!/page/0/length/25/sort_by/rank/sort_order/asc/cols/stats) collated by The Times. This is done in conjunction with Elsevier which gives it some semblance of authenticity.
---
A bit of advice for you specifically: university rankings don't really matter. I turned down a place at Cambridge to study part-time at Aston Uni because I preferred the lab-based approach that Aston takes to engineering degrees and I wanted to jump straight into industry. I don't regret this and I believe this hands-on experience has been far, far more useful in the long term now I'm in industry. Furthermore, no hiring manager cares what university you went to. The days of "we only hire from Russell Group universities" are long, long gone.
Choose your university based on where you'll be happiest and where you think the teaching style aligns most closely with your long-term goals. I'd only be concerned about ranking if they're far, far down at the wrong end. In which case look into why that's the case and decide if that matters to you.
Upvotes: 3 <issue_comment>username_2: There are [three](https://en.wikipedia.org/wiki/College_and_university_rankings#Major_international_rankings) widely-followed world university rankings: the THE, the QS, and the ARWU. You'll find discrepancies between them of course, but given they have different methodologies, it's not surprising. Make sure you know what you're looking for and what's important to you before using them. For example, the ARWU relies heavily on Nobel Laureates & Fields Medallists as staff or alumni. If you don't care about the natural sciences or mathematics, then these things are irrelevant, and the ARWU might be less applicable to you.
I am not as familiar with regional rankings, but my understanding is that the [US News and World Report](https://en.wikipedia.org/wiki/U.S._News_%26_World_Report_Best_Global_University_Ranking) rankings is widely followed within the US, while in the UK, there are another [three regional rankings](https://en.wikipedia.org/wiki/Rankings_of_universities_in_the_United_Kingdom) (*The Complete University Guide*, *The Guardian*, and *The Times/The Sunday Times*). Once again they use different methodologies and so reach different results. To [quote](https://en.wikipedia.org/wiki/Rankings_of_universities_in_the_United_Kingdom#Disparity_with_global_rankings):
>
> The considerable disparity in rankings has been attributed to the different methodology and purpose of global university rankings ... International university rankings primarily use criteria such as academic and employer surveys, the number of citations per faculty, the proportion of international staff and students and faculty and alumni prize winners ... The national rankings, on the other hand, give most weighting to the undergraduate student experience, taking account of teaching quality and learning resources, together with the quality of a university's intake, employment prospects, research quality and drop-out rates.
>
>
>
Hopefully you can see the difficulty in ranking universities, and why different people look at different rankings. You'll have to decide what is and isn't important to you.
Upvotes: 3 [selected_answer]
|
2021/06/29
| 886
| 3,739
|
<issue_start>username_0: I am fairly comfortable with English as a second language and have been studying at schools with English as the primary language for the last 9 years. I also have multiple semesters worth of experience TA'ing for CS courses in English (conducting lab sessions, giving short lectures etc.).
I recently took TOEFL and obtained scores around 28-29 for reading, writing, and listening sections.
However, I received a 17 in the speaking section. Since I can normally speak English comfortably in casual and academic settings, I believe this low result was due to the automatic grader interpreting my stutter as a sign of inadequacy.
How much would a low grade in the speaking section of TOEFL affect my overall profile when applying for a CS PhD in the US?
Thanks for any answers in advance.<issue_comment>username_1: Given your past experience (as a TA), I'd guess it won't matter at all. Presumably you have developed some coping/compensation strategies to effectively deal with it.
Perhaps you should indicate in a cover letter that you stutter, that it probably negatively affects an oral test score, and that you are happy to demonstrate that IRL you can communicate more effectively than the test indicates.
The US is also pretty good about not discriminating against people with such issues. A lot of that is enshrined in law. See [this](https://www2.ed.gov/about/offices/list/ocr/frontpage/faq/disability.html) for example.
But long term anything you can do to become an effective speaker will serve your career interests. But that is true of everyone.
Upvotes: 0 <issue_comment>username_2: I was in a very similar situation, except that it was the IELTS rather than the TOEFL for me, and that I wanted to apply to PhD programmes in the UK, rather than the US. It all worked out fine.
The role of a language test in PhD admissions is, in my experience, a binary measure. An applicant needs to be sufficiently proficient to be considered; but you cannot outcompete another candidate by being better in the language. It also seems to be rather common to handle the language certification towards the end of the process by making a conditional offer that needs a sufficiently good language test to be converted into an unconditional offer.
Thus, I believe that if your application material is otherwise strong, you can expect a conditional offer with the requirement to submit a better language certificate. At that point, you get in touch with the department and talk them into waiving the requirement (actually talking rather than emailing probably works better, as they'll see that you can actually speak English and just stutter). That's how I did it.
Upvotes: 3 <issue_comment>username_3: As in a comment by @GoodDeeds: in my U.S. R1 large state univ, my math dept has a cut-off of 23 for the spoken part of the TOEFL (=test of English as a foreign language), regardless of the rest of the application. Sometimes a 22 can make the cut, if faculty advocate for the applicant. Most of our grad students are supported as TAs (teaching assistants).
Although I've been involved in our grad admissions for a long time, I do not recall ever seeing a claim that a low speaking TOEFL score was due to a stutter or similar. I don't think we have any system in place for detecting such a claim, either! ... so an application with such a TOEFL score might not get read closely enough for any admissions people to see the explanation, etc.
To have admissions people think about this, you'll need to make a prominent point of it in a cover letter or elsewhere. And/or a direct communication with the Director of Grad Studies in the dept you're applying to, alerting them to the issue.
Upvotes: 5 [selected_answer]
|
2021/06/29
| 1,032
| 4,336
|
<issue_start>username_0: After my PhD, I continued with a postdoc in my PhD lab. The topic and the numerical method were quite different from my PhD, and I had to read up quite a bit of literature to start the new work. However, as I was also involved in other projects, I took around nine months to complete the work. While writing the paper, I discovered that I made a major mistake in my simulation parameters and model. The work cannot be published unless I correct the mistakes and run the simulations again, which would take at least a month more.
I was asked to submit the manuscript for review by the advisor and other collaborators by next week. However, now I can't write the paper with the errors. And I have to rerun the entire work again.
What do I do now? I am almost a year into my postdoc, and I shouldn't have made such errors. How do I approach this and what do I tell my advisor and collaborators?<issue_comment>username_1: It's always best to find errors as early as possible. You've found this one *before* rather than *after* publication, so that's a big win. There's nothing magical about any step in one's academic career that makes them immune to making errors: everyone makes mistakes, no matter how many degrees they have or years of experience or papers they've published. It's a fair bet that the most senior researchers have made the most errors, simply by having had more time to accrue them.
Let your advisor and collaborators know what happened, and start running the corrected simulations as soon as possible. If you can update them with a timeline of when it would be feasible to submit the new results, then do so.
They may be disappointed by the delay, but they'll prefer this rather than having their names on a paper with a fundamental flaw.
Upvotes: 7 <issue_comment>username_2: I have made plenty of mistakes in my career. Some real doozies. And I have been witness to many mistakes made by colleagues. I have found that confessing to the mistake and accepting blame and responsibility turns out the best. Even in cases where blame could be shared if you accept blame, apologize, and be honest it works out better. People will tend to attack someone who is defensive and tries to blame others. That's when they really pile on. Sure they will be frustrated and disappointed. But if they see that you feel bad and are working to fix it, they will have more sympathy and be more on your side.
Upvotes: 3 <issue_comment>username_3: This happens, and you just have to deal with it. A year and a half ago, I discovered a mistake I had made only *after* spending approximately 15 CPU years on computations. It led to a delay of about 6 months in the paper, during which every core of my collection of computers was busy recomputing the statistics I needed -- but at least I had confidence in the correctness of the submitted material.
In practice, the delay is often not terrible because only the computer is working on re-doing all of the calculations. While it's doing that, you can focus on the next research project, and once you have the new numbers, whatever little work is necessary to adjust the tables and graphs in the paper. In other words, little *work* is wasted, just time.
Upvotes: 4 <issue_comment>username_4: There are already several good answers to the broader question, I just wanted to focus on the run-time aspect:
>
> The work cannot be published unless I correct the mistakes and run the simulations again which would take at least a month more.
>
>
> I was asked to submit the manuscript for review by the advisor and other collaborators by next week. However, now I can't write the paper with the errors. And I have to rerun the entire work again.
>
>
>
Not knowing the nature of your work, I have no way of knowing whether this is an impractical suggestion, but - is it possible to reduce the rerun time by parallelising your workflow? For instance, by asking your colleagues to run some of the simulations on their own computers, or renting some cloud computing time?
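As an illustration of the idea, here is a generic sketch, assuming the individual runs are independent of each other; `run_simulation` is a hypothetical stand-in for the real model:

```python
# Minimal sketch: farm independent parameter sets out to several
# worker processes with the standard library.
from multiprocessing import Pool

def run_simulation(params):
    # Stand-in for the real model: a toy computation.
    return sum(p * p for p in params)

if __name__ == "__main__":
    parameter_sets = [(1, 2), (3, 4), (5, 6)]
    with Pool() as pool:  # defaults to one worker per CPU core
        results = pool.map(run_simulation, parameter_sets)
    print(results)  # one result per parameter set, in order
```

The same pattern extends to colleagues' machines or rented cloud nodes via a batch scheduler, provided the runs really do not depend on one another.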
Upvotes: 3 <issue_comment>username_5: There's a silver lining to your dark cloud, but *only* if you 'fess up: by admitting your mistake you are announcing to the world that you can be trusted and are a person of integrity, even under potentially embarrasing conditions.
Upvotes: 0
|
2021/06/29
| 1,005
| 4,238
|
<issue_start>username_0: How do you credit people for reviewing a manuscript prior to submission? I am referring to individuals with more knowledge and experience than myself, such as a professor. Where in the manuscript do you list their names? Is this good practice or frowned upon?
I was thinking maybe under the title such as:
>
> Prediction of new active COVID-19 cases using regression analysis
>
>
> Authors: ..., ...
>
>
> Peer-reviewed by: person1, person2, ...
>
>
>
Also, I was wondering if "peer" review has to be from people of the same age as you. In case of them being older than you, do you just call it a review and not a peer review?<issue_comment>username_1: Actually, it is unlikely that you will learn the names of reviewers for a journal if those are the people you mean. So, in most cases this would be impossible.
Moreover, the version you submit is almost certainly not the version that will be published. There will be opportunities to update lots of things, including acknowledgements and such.
Finally, you can include a footnote or a brief passage in an Acknowledgements section thanking "the reviewers" (not by name) for their suggestions (which there will almost certainly be).
Note that reviewers do much more than just make a yes/no decision about the paper. They might call for extensive revisions and suggest how that might be done.
But if, prior to submission, you ask people you know to review your paper and give you suggestions for improvement, then you can name them in an acknowledgements section and thank them for their help. Advisors usually get listed in this way, for example, unless they are co-authors.
And if you are going to name someone and thereby associate them in some way with the work, you need their permission to use their names.
Upvotes: 3 <issue_comment>username_2: **Meaning of peer review**
When people say "peer review" in regards to a scholarly work, they are implying they refer to the "[scholarly peer review process](https://en.wikipedia.org/wiki/Scholarly_peer_review)", not to some generic review by peers in the general sense of the meaning of the individual words "peer" and "review".
This process refers to review by your *scholarly peers*, not related to any age group, conducted though journals/publishers who assign reviewers to assess the paper and make editorial decisions about whether to publish the paper or not based on their recommendations.
Looking over your manuscript before submission is **not** "peer review" in the academic sense. You don't want to say that your paper went through "peer review" just because you had some peers (or even people who are you senior) look it over.
---
**Acknowledging help on a paper**
It may be appropriate to thank people who have helped improve the manuscript but haven't contributed enough to qualify for *authorship*; this section in papers is called the "acknowledgements" section. It can include statements like:
>
> The authors thank <NAME> for helpful comments on a draft of the manuscript.
>
>
>
Sometimes authors will also thank people from the peer review process anonymously if suggestions raised in peer review are particularly important for the final paper, but it's not necessary or typical to thank reviewers in general.
It's polite to ask permission to acknowledge someone by name before actually mentioning someone in the paper (they may feel uncomfortable with it and ask to not be named if they either don't support the work or feel they didn't do enough to warrant it), and it's certainly not good to try to add acknowledgements because you think it will somehow help your paper get accepted (nor because you think it will make your non-published preprint appear to have been somehow validated and judged as correct by experts), only when people genuinely deserve the credit.
Upvotes: 7 [selected_answer]<issue_comment>username_3: You usually add the acknowledgements after acceptance. Then you can add a section like
>
> **Acknowledgements**
>
>
> We would like to thank the anonymous reviewers and our helpful colleagues for proofreading, and <NAME> for providing us with the example data.
>
>
>
Especially the name can only be added after a blind review.
Upvotes: 2
|
2021/06/30
| 420
| 1,792
|
<issue_start>username_0: I want to hire someone to do partial research writing for me, particularly the introduction section. I will do the rest of the work. All the practical work is completed. This is non-graded work. It's not related to university or college work. It is just an independent work.
I want to know if this is ethical. Should I add the writer as a co-author for this minor contribution?<issue_comment>username_1: The introduction of your paper situates your work in the broader literature. In your introduction you make intentional decisions about which works to highlight as influencing the rest of the paper, and frame what you've done in a broader context of the field you are writing in. It isn't *minor*, it's a key part of a whole paper.
I don't see a practical way to "farm" this out to anyone else. To have someone else write a *good* introduction for you would take way more effort than it would to write yourself, regardless of language challenges.
It's fine to hire someone to do copy editing; an introduction of a paper is not copy editing and isn't minor. If your work is worth considering by someone else, it's worth writing a proper introduction for. Do it yourself.
Upvotes: 4 [selected_answer]<issue_comment>username_2: It's ethical if you give the contract writer due credit for their contribution; it's unethical if you don't give them due credit. How much credit is "due credit" depends on the policies of the journal to which you submit. If it's a journal that has adopted the ICMJE authorship guidelines verbatim, then you shouldn't add them as a co-author, but you should mention their name and what they contributed in the acknowledgements.
But as @BryanKrause points out, even in the case where it is ethical, it may not be a good idea.
Upvotes: 1
|
2021/06/30
| 590
| 2,516
|
<issue_start>username_0: I had a PhD interview for a position but I had to cancel it because I had an offer from another one. However, I didn't accept the offer and now I would like to be interviewed for the first position. How could I ask them for this?<issue_comment>username_1: You can always ask in a contrite manner, trying to find an explanation why you declined them in the first place and explaining why you are now agreeing to join them.
I would be wary of a student with such indecisiveness and possible flightiness, but if they have a substantial number of positions and your credentials are strong, you may be in luck.
Picking up on username_2's comment, there might also be another viewpoint: if there were concrete circumstances or newly arisen facts that triggered your change in decision, it is strongly recommended to mention them. If there is a good reason for you to reconsider your choice, it is important for the committee to see what your thought process was in changing your decision.
Upvotes: 3 <issue_comment>username_2: First, and most important, if you don't approach them then you get no happy resolution.
Second, you are entitled to your own choices and priorities and no one should be offended by that.
Third, probably nothing you do will guarantee success, but you can try to put yourself back in the game, at least, just by indicating that you would welcome an interview and an offer.
I would simply say that my circumstances are now different, without going into detail or apologizing for my earlier decisions. They may assume that they weren't my (your) first choice and might ask about it. It is always good to be honest in answering questions, but you need not try to anticipate them and answer them preemptively.
We, maybe you also, don't know how highly you were ranked earlier or whether they still have an opening. In the best case they will welcome your continued interest.
My advice, then, is not to go into detail about why you are now interested but be prepared to give an honest answer if asked. I doubt they would be surprised by any candidate having other options in such a case, nor by the candidate wanting to explore them.
You were honest with them originally, I think, not stringing them along. Perhaps they can appreciate that. But there are no guarantees.
So, I agree with [username_1](https://academia.stackexchange.com/a/170543/75368), that you should prepare an adequate explanation, but I don't think you should express it unless asked.
Upvotes: 3
|
2021/06/30
| 1,103
| 4,776
|
<issue_start>username_0: I am currently doing my PhD in Economics in a UK university and have a couple of working papers —one of them almost finished—. Each of these working papers corresponds to one chapter in my PhD Thesis, which is in the field of Game Theory (Theoretical Microeconomics). I am expected to complete my PhD Thesis with a total of three chapters. This month, I am presenting one of my chapters in a couple of relevant conferences in my field, and have been asked whether there is an available working paper that can be checked (and perhaps circulated and cited). Whether one should make his/her working papers public is a question that depends on the field one is working on. To be honest, I do not know what to do because I do not know what is the standard practice in my field.
Some time ago, all my working papers were available on my personal website via a public Google Drive link. However, I decided to make them private because I was afraid of getting my ideas stolen by other researchers. Now I do not know what to do. On the one hand, I see the benefits of making my working papers public: I may get useful feedback and may also get my paper some attention before it is published in a peer-reviewed journal. On the other hand, I’m afraid of other researchers publishing my work under their name, and I am also afraid of reputable journals rejecting my submissions on the basis that they have been published as working papers prior to being submitted to the journal.
I currently have three options:
* To make them public in my website via a Google Drive public link;
* To make them public by uploading them to [arxiv.org](https://arxiv.org/) (or some similar repository);
* To keep them private and unavailable to the public.
**With this post, I would like to ask fellow theoretical economists what they think I should do.** I am inclined to upload my working papers to [arxiv.org](https://arxiv.org/), but I do not know whether reputable journals in my field will reject my submissions because of that. I am also not sure whether this is a useful measure at all against the theft of ideas.
I am a newbie in academia, and therefore any help will be much appreciated.
Thank you all very much for your time.<issue_comment>username_1: Publishers seem to work under one of three (at least) models.
Some will prohibit prepublication and won't consider anything that has previously appeared. My guess is that this model is slowly disappearing.
Some journals encourage preprints of articles where the author(s) upload the preprints to some site. In mathematics, for example, this seems to be current practice pretty widely.
Some journals will, themselves, upload papers to a preprint service before final versions are finalized.
Separately, many journals will publish final versions online as well as in print.
But, for any paper you expect to submit to a specific journal, you need to explore what their policy is. Perhaps there is a (more or less) standard for economics, but don't make assumptions. Explore the journal's website or ask an editor directly.
Of course, "work in progress" can mean a lot of things. It may be that a paper based on that work is sufficiently different from what was previously visible that it becomes less of an issue for a journal.
Upvotes: 2 <issue_comment>username_2: Many researchers in theoretical economics publish their works on ArXiv or similar venues. No semi-reputable researcher will dream of stealing ideas that were already published in a preprint. This is of course not an ironclad promise - there exist bad actors that will absolutely steal your ideas if you put them up online, but these instances are *very* rare.
To ensure that this does not happen, I strongly suggest that you
1. Position this work as yours, by presenting it at seminars, workshops and the like.
2. Work towards getting your ideas published sooner rather than later.
3. Discuss your work on social media: putting a link to the ArXiv version on twitter with a few words describing your work is excellent both for outreach and for establishing your ownership.
In the long term, disseminating your work is a strictly dominant strategy over keeping it to yourself. This is especially true in a relatively fast-moving field like game theory: I think that the risk of having someone else scooping your work and publishing faster than you is much greater than the risk of someone stealing your work outright.
Put another way, the field of game theory (and most other scientific disciplines, I would imagine) is populated by a lot more brilliant, hungry researchers who will scoop you than unscrupulous actors who will steal your work. In that kind of ecosystem, it's much better to publicize your work.
Upvotes: 4 [selected_answer]
|
2021/06/30
| 408
| 1,837
|
<issue_start>username_0: I have submitted a research paper in mathematics to a journal. Now I want to give a talk on that paper at a seminar/conference, just for the presentation and (probably) a presentation certificate, but not for publication.
Is this ethical?
Any suggestion please.<issue_comment>username_1: Perfectly ethical. You are discussing your own ideas. Even in the case that the journal already has copyright to your paper (probably not here), that doesn't mean that they somehow "own" the ideas.
Even if your remarks were published, it would be fine, though some journals object to prepublication of the paper. But that only applies to the paper - the specific expression of the ideas - not the ideas themselves.
The "certificate" is irrelevant to the ethical question here. Good for you if you get it, but the action is ethical in any case.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Presenting your results at a seminar is not a violation of any ethical academic norm. Conferences are slightly different: you should find out whether the conference will have a published proceedings.
If your work will be peer-reviewed and published as part of the conference proceedings this may violate the journal's publication policy. In particular, you won't be able to authorize the journal to publish the work as that privilege has already been given to whatever conference you had given it to by letting your paper be part of a published proceedings (you will need to actually sign this when you publish).
Normally, these are questions you can just ask the conference organizers and they'll be happy to let you know how to proceed. Some conferences allow the publication of an abstract/short paper that just summarizes the results, and thus does not constitute a violation of most journal policies.
Upvotes: 2
|
2021/06/30
| 679
| 3,023
|
<issue_start>username_0: I'm head of a search committee and will be checking references for finalists for a fixed-term position. Two finalists worked with the same non-profit and gave the names of different people who work there. When I am talking to one person at the non-profit, is it okay to ask what they think of the other person too, or should I ask their opinion of only the candidate who specified them? If it matters, I am in the United States.<issue_comment>username_1: There may be law about this, but I would consider it improper unless you ask the candidate first for their OK. Otherwise, you are using a non-official, potentially unfair, informal, "off the record", process to help choose candidates.
You have a defined process. You should stick to it, even if not required by law.
---
Thinking about the law: defining and publishing one process while using another might be construed as a kind of fraud, even if it would be difficult to charge.
Upvotes: 3 <issue_comment>username_2: In Academia, you do not disclose who has applied for a job unless you have the candidate's permission. Since academics work in teams on long-term projects, sometimes they have to keep their job search a secret. If they do not, they may be excluded from teams.
Ask the candidate for permission before contacting any references the candidate did not provide.
It is common to even ask for permission before contacting references when the candidate provided the references in their application.
Upvotes: 6 [selected_answer]<issue_comment>username_3: **Stick with the reference you were given**
Are you going to track down ancillary references for all the candidates? You want to be as fair as possible in your faculty hiring. Getting extra references for only a few people is unfair to both the people getting extra references, and people who didn't get a chance to provide an extra reference.
If the extra reference is great, that will likely sway your opinion of the candidate. Other candidates may have plenty of great references, but you didn't ask them. If the extra was a bad reference, that'll change your mind too - even though you didn't ask anyone else to produce another great reference, or go reference hunting for them.
Job hunting is tough - especially for academics. By asking extra references, you could end up tipping the current organization off about someone's job hunt, which could jeopardize their current job. It also might be illegal (I'm not a lawyer).
Upvotes: 5 <issue_comment>username_4: Unless your search process description (*and* the job advertisement !) explicitly state that you will solicit opinions from further, secret to the candidate, references, I would stay away from doing so.
It is a thing that makes lawyers panic about the potential of failed candidates suing the university --
at my institution (large state university in the US) I imagine such an action would be considered sufficient procedure violation by the HR department to lead to the search being stopped.
Upvotes: 2
|
2021/06/30
| 765
| 3,151
|
<issue_start>username_0: The UCAS student [finances page](https://www.ucas.com/finance/undergraduate-tuition-fees-and-student-loans#what-financial-help-can-you-get) states that the undergraduate tuition fee loan is paid directly to the institution.
>
> Tuition fee loans, to cover the full cost of your course, are paid directly to the course provider, and you won’t have to pay it back until after your course, when you’re earning above a certain level.
>
>
>
And that you do not have to have a confirmed offer before applying for the loan.
>
> Applications for courses starting in 2021 will open early 2021, for a loan to cover tuition fees (paid directly to the university) and maintenance costs (paid directly into your bank account at the start of term). This is available wherever you choose to study in the UK, and is repayable. You don’t need to have a confirmed offer of a place at uni to start the process.
>
>
>
How does this work in practice? As a student I have a first- and second-choice institution; both offers are grade-dependent. Do I apply for two separate loans, one for each institution, and wait to see which offer I accept?<issue_comment>username_1: I am familiar with the process for postgraduate taught degrees. In that case, if you want the tuition fees to be paid directly to the university, you have to have secured the loan and completed the process before being officially enrolled, but after accepting an offer. Then the funds are paid directly to the university. As a result, you would need an offer and a UK bank account by August at the latest.
However, tuition fee loans can also be paid to your personal account and then you can transfer the money to the university by using the payment methods they provide.
The admissions office of your university could provide further info and are generally really helpful (from personal experience).
Upvotes: 0 <issue_comment>username_2: You apply for a student loan for yourself, not in relation to any particular course.
Once you register for the course at the university where you have a confirmed place it triggers the release of your loan and maintenance payment. It is normal for students to have a first choice and second choice university through UCAS. That is how it works.
If you do not get a place or do not start a university course then no loan money is paid to you or the university.
Yes: you do need a confirmed offer to start the process, but that does not need to be the university you eventually go to, as you may go elsewhere through clearing.
So:
1. Apply to several universities and courses of your choice through UCAS
2. Hopefully get some offers back
3. Choose an offer as first choice and one as second choice
4. Apply for a student loan
5. Wait for the exam results (if applicable)
6. If rejected by your first and second choice, apply to clearing
7. When a university place is found, await their student registration process
8. Register and get your student card
9. Your loan is activated
If a confirmed offer is not required, then step 4 happens after step 1, otherwise no different!
Hopefully that is clearer?
Upvotes: 2
|
2021/06/30
| 2,816
| 12,190
|
<issue_start>username_0: I'm a first year PhD student and I was asked to think about a problem last week and I tried to work out some examples, did some computations, but I wasn't successful. Basically, I didn't really make any significant progress in the last week. Today I had a meeting with my supervisors and he immediately computed that example and it turns out that the computation was easy. There was no trick or anything, just plain and simple linear algebra. Now I'm filled with regret that I should have been able to do it on my own. Now I'm having this feeling of self-doubt. I feel like my supervisors might think less of me. What would you suggest I should do?<issue_comment>username_1: *"You wouldn't worry so much about what others think of you if you realized how seldom they do" --- <NAME>.*
You can relax; for experienced academics, the baseline expectation is that most new graduate students (including ourselves at that age) are/were basically incompetent. That is the reason we give you 4-5 years of training before we let you out in the world to do research on your own. It is unlikely you fall significantly below this low baseline expectation. I suspect you are just overestimating what academics think of the competence of the average grad-student, so don't worry; you are probably not significantly more incompetent than the others. What you observe as an "easy" computation done by your supervisor is the result of decades of training and experience in the field, which allows him to immediately identify classes of problems and solution methods that you do not yet grasp automatically.
In any case, I suggest you don't worry yourself about embarrassment with your supervisors and just work on the practical aspect of plugging skill gaps. Review the problem you had trouble with and identify why you were unable to identify the solution method. Do some practice problems if needed, and brush up generally on the material until you feel that you are able to comfortably solve problems of that general class. Your supervisors will/should tell you if you are significantly behind where you need to be at your stage of the PhD program, so if they haven't said anything, you are probably about where they expect you to be. If you are unsure, just ask your supervisor how you are tracking.
There is no need for any lingering sense of embarrassment. Most likely, your supervisor would have just been briefly reminded of the general incompetence of grad-students (and may have wistfully reflected on his own incompetence at your age), and then he would have gone onto something else and forgotten all about you.
Upvotes: 7 [selected_answer]<issue_comment>username_2: Well, reminds me of some math exam I had in university including LA and my father asked me how it went. It was ok overall but I wasted considerable time on one task and got about half of it proved. So he asks me what it was, and I reiterate "Prove that the eigenvalues of antihermitian matrices are purely imaginary". So he does that thing where his view glances off to that secret blackboard in the sky and comes back and says "but that's trivial". What?!? So he writes down 5 lines and after secondguessing his notation (theoretical physicists use different notation than electrical engineers) and brooding over it for half an hour, I have to admit it's trivial.
As a side effect, you can probably wake me up in the middle of the night and ask me to prove that all eigenvalues of a hermitian/antihermitian/unitary matrix are real/imaginary/something and I'll be able to do it before even waking up.
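For the curious, here is one way such a five-line argument can go (a sketch only, not necessarily my father's exact lines; I'm writing $A^\dagger$ for the conjugate transpose):

```latex
% Claim: if A is anti-Hermitian (A^\dagger = -A) and Av = \lambda v with v \neq 0,
% then \lambda is purely imaginary.
\begin{align*}
  \lambda \lVert v\rVert^2 &= v^\dagger A v
    && \text{(left-multiply } Av = \lambda v \text{ by } v^\dagger\text{)} \\
  \overline{\lambda}\,\lVert v\rVert^2 &= (v^\dagger A v)^\dagger
    = v^\dagger A^\dagger v = -v^\dagger A v = -\lambda \lVert v\rVert^2
    && \text{(take the conjugate transpose, use } A^\dagger = -A\text{)} \\
  \Rightarrow\quad \overline{\lambda} &= -\lambda,
    \quad\text{i.e. } \operatorname{Re}\lambda = 0.
\end{align*}
```

The Hermitian and unitary cases go the same way, with $A^\dagger = A$ giving $\overline{\lambda} = \lambda$ and $A^\dagger A = I$ giving $|\lambda| = 1$.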
Stuff like that happens. If you had learnt it the regular way (chances are you did), you'd likely have forgotten about it again.
No way are you going to forget this again I'll bet. And it appears to be part of your trade's advanced tools.
So, good for you. At least if you didn't blow any deadline, and even then it would just be a temporary setback.
Upvotes: 5 <issue_comment>username_3: I won't say "don't worry about it" and I have anecdotes also, but let me give you a bit of perspective. Two main points.
The first is that "insight" in mathematics isn't general, but specific to some area(s). You can have great insight into one area and very little in another. For me, I had deep insight into classical topology and analysis, but very little in algebra. Ring Theory was especially difficult for me. I could follow proofs just fine and solve exercises, but couldn't go beyond that.
The second main point is related. It is a very different level of skill in math to be able to follow a proof than to generate one. An even higher level of insight is needed to be able to posit what might be true and is worth exploring.
It is the purpose of grad school (especially) to help you develop in both of these areas. You are still early in your studies, so it isn't especially worrisome at this point, though your advisor might have a different idea.
A minor point, but perhaps also important here, is that one doesn't normally simply "see" all possibilities for solving problems and sometimes early attempts actually prejudice against seeing others. Stuck in a rut, so to speak.
If this were your third year, I'd be more concerned. And if you couldn't follow the argument of the professor in solving the problem then I'd be concerned. But, the more you work at it, the easier you will likely find such things unless you work on harder and harder problems. In that case, there is a constant struggle to obtain insight - just as there is in obtaining wisdom. Carry on. The quest is worthwhile.
Upvotes: 4 <issue_comment>username_4: Another anecdote.
When I was a first year Phd student in mathematics, I took a topology course. One theorem we covered was the [Tietze extension theorem](https://en.wikipedia.org/wiki/Tietze_extension_theorem). I remember thinking about the proof for hours and hours, and feeling that it was completely opaque to me. I could verify each line, but I had no idea how anyone would have thought to put these ideas together in this particular way to prove the theorem. I couldn't make an outline of the proof. I would have had to memorize it basically verbatim to have a chance at proving it. I remember making a decision that this was too hard, that I would memorize the statement of the theorem and move on with my life.
Five years later I am close to getting my degree in a field which isn't really connected with point set topology. I have not thought about this theorem in years. A professor friend of mine is teaching this same topology class, we are having a conversation while he is walking to class, and I end up there. I decide to sit in. What do you know? The topic of the day is my old nemesis: the proof of the Tietze Extension Theorem.
My friend is asking the first year grad students questions very patiently, trying to get them to guess at how to prove the theorem. What I discover is that, at this point, the structure of the proof is obvious. Given the definitions of the structures involved, it feels like there is only one obvious strategy and verifying the details should be completely routine. It is!
Although I had not thought about point set topology for a few years, I had obviously grown a lot as a mathematician. What had been an impassable mountain had become a molehill.
I think this kind of experience is fairly common. You are literally learning to see a new world with new eyes. As a new grad student, you are like someone who has opened their eyes for the first time. The shapes and colors are overwhelming. Nothing makes sense. If you keep your eyes open, and keep playfully exploring, you will find that you orient to this new world and what is baffling now will become obvious.
Upvotes: 5 <issue_comment>username_5: To build on username_1's answer, I'd just like to emphasize the "ask your supervisor" part.
*Especially* as an early grad student, it is important to develop a relationship with your advisor where you can admit things you don't understand, and ask them "dumb questions" so they will be better able to teach you and help fill those gaps. You are their student after all.
Graduate school and academia often have a highly deleterious effect on people's confidence, motivation, and feelings of self-worth. I have found personally that one effective strategy (among many!) to deal with this is to tell yourself that *you are fully capable* (e.g. I have no doubt you are fully capable of doing basic linear algebra), you just need to learn (1) what tools/information/skills to learn and (2) when and where to apply them. Those things are very often not learned by developing them from scratch by yourself; instead you will get them from textbooks, papers, and most importantly: **by asking questions**.
If you don't understand something, it's almost always best to just ask, rather than stay quiet and continue to not understand or even misunderstand. Your advisor *should* (in principle) be your first stop when it comes to asking questions. You should feel comfortable asking them about anything you don't know, because either (1) they know it and can explain it to you (or give you a reference to learn it), or (2) they don't know it and can point you in the direction of other resources where you could figure it out (which might mean new research opportunities!).
Having a good relationship with your advisor is important. Not every advisor is equal in this regard, but in the ideal situation you should be able to admit ignorance to them, ask them questions about anything you don't know, and maintain a clear channel of communication about their expectations and your progress as a grad student. (They can't accurately assess your progress if you hide things you don't know from them)
And of course asking questions doesn't only mean to your supervisor, I try to take any chance I get to ask questions about things I don't understand if I think the person I'm talking to might be able to clarify something for me (sometimes that means using Stack Exchange). Academic culture unfortunately doesn't do a great job of fostering positive learning environments where people are comfortable exposing their own ignorance in order to grow. But I assume that you have embarked on a graduate degree with a certain level of passion and drive to learn something, and I suggest taking any and every opportunity to do so, even if it means admitting you don't know something, *even if you think that everyone else thinks it's trivial*. You will often find that there is either a better perspective that makes it more "obvious" to you why it is trivial, or that it isn't so trivial as you had assumed. (Seriously, think of a classroom or conference setting, at least half the people probably don't know or were even wondering the same thing as you and were afraid to ask, so you're doing everyone a favor by asking)
Also, making friends and finding *other grad students* (or better yet, postdocs) whom you can talk to and ask shameless questions is hugely beneficial as well.
Upvotes: 4 <issue_comment>username_6: You didn't tell us anything about your supervisor. Does s/he already have a Ph.D.? If so, it would be silly to compare yourself to them, since they are by definition the one training you because they are the one who already has an advanced degree. Even a more senior graduate student has many hours of focus on research under their belt that you don't. Sometimes it takes lots of training to see the simple answers.
(There's a cartoon I can't find right now, I think xkcd, that shows how various people see some complicated theory. The undergrad sees a simple circle with a couple squiggles. The grad student sees a mass of equations. The postdoc sees a heap of experimental equipment. And the professor sees a simple circle with a couple squiggles.)
If you are worried about whether or not you have the capabilities to complete the program, compare yourself to other students at a similar stage in the program. Even then, give yourself some leeway - there will always be a lot of variation among students. Keep an eye on where you are relative to your cohort a few times a year.
Upvotes: 2
|
2021/07/01
| 749
| 3,115
|
<issue_start>username_0: I've been wanting to do some research on some quite controversial topics for a while now to stay educated on the common talking points of today, but every time I want to look at a research paper, I wonder whether the people who conducted the research are biased, or whether the journals are biased or whether the method that they used was flawed.
With the controversial topics, I tend to think that the researchers are generally more prone to skewing their results than if it wasn't a controversial topic. However, I'm still a big rookie at this, and I have no idea how to quickly ensure that the journals are legitimate, the researchers are legitimate, and they aren't biased etc. I don't want to spend all my time making sure that they're legitimate then not have time to actually look at the study itself.
Is there supposed to be some kind of method they teach in university or is it all self-learned? Because I'm not in university yet and university seems like the place where people start publishing papers and doing research for those papers, and also my country has mandatory enlistment for close to 2 years before I go to university, and it seems like a good time to do actual research on the topics I'm interested in so that I don't waste my time.<issue_comment>username_1: Research (at least in the organized form taking place in academia) is a discussion taking place between various individuals. If you have a hard time assessing papers, the natural thing to do is to see what people interested in the same subject think.
Published papers (in reputable venues) have passed some form of peer review, so that is a good sign. Sometimes the reviews are available and can point to potential problems or contentious aspects. Often, though, the reviews are not public. Then, the best way to find out what the community thinks is to find papers that cite the one you are interested in and see what they say about it. (I find it a useful exercise to compare these writings with what I initially thought about a paper; I see it as a way to train myself to detect problematic points that I may have missed.) In particular, look out for comments on whether the findings [could be replicated](https://academia.stackexchange.com/a/170640/101067). Be aware, though, that [blunt criticism is rare](https://academia.stackexchange.com/questions/120532/why-dont-i-see-publications-criticising-other-publications), and you may have to read between the lines.
The obvious caveat here is: what about new papers, or papers which have not been cited a lot? In the first case you can wait, in the second case that may implicitly contain some information, too (not that a paper is incorrect, but maybe more that it isn't incredibly interesting to the field).
Upvotes: 2 <issue_comment>username_2: >
> some kind of method they teach in university
>
>
>
Replication of the research is the correct way to verify research.
The more popular and cheaper method is to ask someone who has prior knowledge of the subject if the research looks correct. This is less reliable, but often good enough.
Upvotes: 2
|
2021/07/01
| 583
| 2,492
|
<issue_start>username_0: This is my first research project and I don't know how many references I should use. So far I have collected 14. Is this enough? My research is about physics.<issue_comment>username_1: Look at how many references other publications in your target journal or field include; how does your count of 14 compare? There is no hard and fast rule, but you should be able to defend that you searched the current literature, point out who has done what, and cite other research that supports some of your claims. In engineering, we typically have ~30-50.
My guess, since you are asking, is that you need more.
Upvotes: 1 <issue_comment>username_2: How long is a piece of string?
The answer is that there is no limit. You should cite as many or as few other works (papers, books etc) as you have used and built on in your research. Adding pointless citations to get your reference count up to some arbitrary number is silly and clutters your work with useless information. Similarly, using or building on someone else's work and not giving them credit via a citation is wrong.
Cite as many other works as you need to.
Upvotes: 4 <issue_comment>username_3: You ask "is this enough?" almost as if there's a cut-off for the number of citations. There isn't. Just cite however many articles you need to so that you have actually referenced everything that you've used in your research.
Upvotes: 1 <issue_comment>username_4: Obviously, you should cite any article whose material you have used, and exclude any whose material you have not.
That said, one should consider that there are several reasons why it is helpful to cite a paper! These factors might alter how much time you might spend on a literature review and what directions you spend your effort.
* If you are publishing in journal X, it is often appropriate to have a citation to articles in journal X. This demonstrates to the editors your interest in the field and your work's relevance to the existing literature.
* Citing a paper means it is marginally more likely you will be read by its authors. If there are particular authors in your subfield of 10 or even 100 researchers to whom your work is relevant or helpful, a citation helps make them aware that you exist.
Again, you should not cite articles that you are not using.
Upvotes: -1 <issue_comment>username_5: People like to add references to make the paper look more informed. An advisor I had kept a huge list of references that he added to every remotely related paper.
Upvotes: 0
|
2021/07/01
| 855
| 3,564
|
<issue_start>username_0: I wonder to what extent [technology transfers](https://en.wikipedia.org/wiki/Technology_transfer) are weighted when reviewing a professor's tenure application in a US university. E.g. in the case that a professor collaborated, possibly via one of their PhD students, on a technology transfer performed within some private enterprise, e.g. deploying an algorithm developed or advised on by the professor and their PhD student into a product/program of this enterprise.<issue_comment>username_1: Like a lot of things in academia, this depends on the field, the university, and the individual professor. It can be very important in, say, medicine or engineering, but less so in others. Some universities (Stanford, for example) built quite a reputation for this and fostered the development of important technology and its spread. This is why "Silicon Valley" exists, actually.
If you want to make it count in your own application, of course, you need to make the case for it. But that might be like anything else.
If you mean applying for actual tenure (rather than for a TT position), then you need to convince your colleagues who will likely have a say in your application, as well as your Dean. This will be more natural some places than others. If you are an outlier, then it probably has less effect than if others do similar things.
Upvotes: 2 <issue_comment>username_2: To expand on @username_1's answer:
In many ways, each tenure and promotion case is different from all others. Professors are evaluated with regards to many criteria:
* Number of papers
* Grants received
* Patents
* Tech transfer
* Quality of teaching
* Number of graduate students and how they did
* Service in the department
* Service in the university
* Outside service
* Conference organization
* Mentorship of graduate students, postdocs, and younger colleagues
* ...
How each of these is weighted depends on the university, department, and the field you're in, but the key point I want to make is this: Most cases are not "obvious" in the sense that the candidate is excellent in all of these areas. Everyone has their own foci, and most candidates are good in some areas and maybe not so rounded in others. The challenge is in deciding whether there are one or a few areas where a candidate is really good, and whether they are good enough in the others.
What that one area (or these few areas) are will be different from candidate to candidate, and is something that as a candidate is worth working out for yourself. In essence, you'll have to make the case why you're worth getting tenure or getting promoted, and that's why (i) talking to senior colleagues, (ii) your research, teaching, and service statements matter.
In other words, tech transfer *can* be an important piece of the puzzle if it fits into the areas of strength of a candidate, and I could imagine it being a key criterion a department takes into account even in areas where tech transfer is uncommon. On the other hand, if you don't have a lot going on in research, but you're a good teacher, and then there's the one-time $10k fee a company paid to use something you came up with, then the latter is probably not going to carry much weight: It looks like a one-time thing, and people will not consider it as a big deal. A part of the consideration here is that giving someone tenure isn't just a judgment on a candidate's *past* work, but just as much about the *future prospects* of that person, and something that looked like a one-time thing just isn't going to give much of a boost.
Upvotes: 2
|
2021/07/01
| 3,868
| 16,349
|
<issue_start>username_0: (Details fictionalized.) My research involves a new way of forecasting the weather. Unlike other forecasting models, the model I have developed says that the weather in a certain region will be amenable to farming next year. Before I post the preprint of my paper, I am considering "betting" on my model by investing in some agricultural stocks in the region in question.
**Would placing a bet on my research have a negative effect on my ability to publish down the line?**
What other ethical and legal issues should a researcher consider before making a bet like this?
Are there historical examples of scholars making bets like this, and how did it play out?
Additional details:
* I recognize that being financially staked in the correctness of my research yields a conflict of interest, and I would report this when posting the preprint and seeking publication.
* This question concerns the consequences of such a bet for *me as a researcher,* not whether placing such a bet is a wise financial decision. In this hypothetical scenario, I would not invest more money than I am willing to lose. However, if the size of the bet changes the ethics, then you may note this in your answer.
---
Edit: I greatly appreciate the interest this question has attracted, but I haven't selected an answer yet because most of the answers so far (with a few exceptions, which I have gratefully +1ed) have focused on the question of *whether* there is a conflict of interest rather than on the question in bold, which is what the *consequences* of the conflict of interest would have for my ability to publish.
The ethical debate can be summarized as follows: "There is nothing wrong with betting on yourself being right; if anything it lends you credibility" vs. "Stock prices are themselves indicators of beliefs rather than morality, so the situation described still creates a conflict of interest." As commenters have pointed out, this debate ultimately turns on the details of the circumstance, the nature of the financial products, and so on—details I have not provided. However, **for the purposes of this SE question, the fact that this bet creates a conflict of interest is *given.*** That is, *assume* that if my research is widely accepted, I stand to make a (very small) profit, regardless of whether the particular predictions pan out. How would, for example, a paper referee react to such a conflict of interest?
I concede that this may be a more banal question, but it also keeps us more safely within the margins of academia.SE's "no opinion questions" rule. Possible answers could include
* Everyone pursuing an academic career is, to one extent or another, "betting on their research," and throwing a stock purchase into the mix doesn't rise above the threshold of suspicion.
* In principle, monetary instruments can create conflicts of interest, but there are ways to clarify to the referees (how?) that you are betting a small amount of money, and doing so in a way that is maximally correlated to the correctness of the results rather than people's expectations, which could be swayed by the impact of the research. As long as you make this clarification, then it's no big deal.
* Even if there is not an ethical issue in the philosophical sense, any sort of monetary bet creates the *appearance* of a conflict of interest, and for that reason it will be harder to publish this paper in a high-quality journal.
In my own field (which, as some of you have figured out, has nothing to do with the climate or farming), I have seen a paper published in a peer-reviewed journal that argued for the efficacy of a certain sampling methodology, and then included on the final page a disclosure saying that the study was funded by a company whose primary revenue source is performing the very same sampling methodology. In this case, it was very easy to draw an arrow from the funding source to the research outcome, but the paper was transparent and well argued. In a situation like this, does the paper ultimately get published because it's *good enough to overlook the conflict of interest,* or because *the conflict of interest was commonplace enough not to warrant additional attention*?<issue_comment>username_1: From my perspective, the conflict of interest arises if you go public with your research before the time period in question passes. In this case, if the public believes that your prediction is correct, that information may influence the market of the underlying agricultural stocks. You might then sell the now inflated stocks to make a profit. If the information you make public is materially false, that might be considered [market manipulation](https://repository.law.umich.edu/cgi/viewcontent.cgi?article=2978&context=articles) which is against multiple federal regulations in the US (insert disclaimer about not a lawyer).
If you (very correctly) disclose this conflict, an editor might take it into consideration when making a decision. Conversely, research conducted by commercial entities with direct financial conflicts of interest is published in high-impact journals all the time. Disclosing the conflict adequately is the key.
However, if the investment was made historically, and the time period predicted is in the past, I am not sure there is a conflict.
Also note that many institutions (including the [National Institutes of Health](https://grants.nih.gov/grants/policy/coi/index.htm)) have guidelines on reporting ownership of particular securities.
Upvotes: 4 <issue_comment>username_2: >
> Are there historical examples of scholars making bets like this, and how did it play out?
>
>
>
Although not entirely analogous to your situation, there are some well-known cases where academics have made bets on outcomes that relate to their research work and academic hypotheses. Usually this happens when two academic antagonists get pissed off with each other's contrary pontifications, until one finally says to the other, "Care to put your money where your mouth is?"
A famous example of this is the [Simon-Ehrlich wager](https://en.wikipedia.org/wiki/Simon%E2%80%93Ehrlich_wager) between the economist <NAME> and the biologist <NAME>. Prior to the bet, Ehrlich had long argued that growing human population would lead to resource scarcity and other global catastrophes, whereas Simon had argued that resources would generally become more plentiful and cheaper. Simon challenged Ehrlich to choose any raw material and they would observe the inflation-adjusted price on an agreed date more than a year away. The pair ultimately agreed to bet on the inflation-adjusted prices of copper, chromium, nickel, tin, and tungsten, at an end date approximately a decade from the initial bet. All five materials declined in real price over that time and so Simon won the bet convincingly.
Upvotes: 3 <issue_comment>username_3: What you’re describing is actually not that unusual. There is a [sort of tradition](https://en.wikipedia.org/wiki/Scientific_wager) among physicists and mathematicians to make bets on whether a research prediction they are making will turn out to be true. These bets are usually made with colleagues who hold a conflicting opinion. The stakes are usually fairly low (a [bottle of wine](https://www.quantamagazine.org/betting-on-the-future-of-quantum-gravity-20140314/), meal at an expensive restaurant, [etc](https://www.nytimes.com/1998/08/25/science/putting-money-where-their-minds-are-where-scientists-gather-wagering-flourishes.html)) but sometimes involve large amounts of money, for example $10,000 in a [very recent example](https://youtu.be/yCsgoLc_fzI).
In terms of the ethics, I don’t see a problem (and neither does anybody else as far as I can tell) with betting that your prediction will turn out to be true — after all, even those of us who don’t make explicit bets are effectively invested in the outcome of our predictions. If the predictions work out we gain reputation, prestige etc, which can lead to job promotions, awards, and other material rewards. But the incentives are aligned correctly: we only get the rewards if we make a *correct* prediction, and the scientific community actually *wants* researchers to try as hard as they can to make correct predictions. So despite what some people here are suggesting, this situation is *not* a conflict of interest (at least in a generic example, and in your particular example; perhaps someone can come up with a special case where there would indeed be a conflict).
Now, on the other hand, if you were involved in some weird scheme to bet *against* your prediction turning out to be true, then that would be a genuine conflict; this is analogous to various scandals that occurred in professional sports where players made bets against their own teams. From some googling I did it seems that the major professional sports leagues explicitly forbid their players from betting against their own team, and with good reason. So don’t do the scientific analogue of this practice.
Upvotes: 5 <issue_comment>username_4: >
> Would placing a bet on my research have a negative effect on my ability to publish down the line?
>
>
>
You are required to declare conflicts of interest, but it's improbable that a bet would negatively affect your ability to publish. It might make reviewers examine your paper more closely, but if your work is sound it should still be publishable.
>
> Are there historical examples of scholars making bets like this, and how did it play out?
>
>
>
Sure, see [<NAME>inger](https://en.wikipedia.org/wiki/Timothy_A._Springer#Business_career), a Harvard biochemist who became a billionaire investing in biotechnology stocks. His lab is still [publishing a lot of papers](https://timothyspringer.org/publications), as well.
Upvotes: 3 <issue_comment>username_5: **Talk to a Hedge Fund**
Why make a small bet? If you really have a meaningfully better way of predicting the future (weather) then there are companies that will be willing to make very large bets, although they'll probably prefer to call them investments. This is what many of these firms do day in, day out. There need be nothing inherently unethical about it as long as you disclose your interests where appropriate in the publication process, but then you may decide you don't want to publish... which raises a different set of ethical issues. If all goes well you'll be able to ponder them from your yacht.
Upvotes: 1 <issue_comment>username_6: If your paper demonstrates a way to predict company bankruptcies for example, and you short companies which your technique predicts to go bankrupt prior to your paper being published, then yes I think that would be classified as unethical. Specifically if you were intending to close the short on any subsequent dip in prices (as other people presumably flocked to short based on the new technique), rather than waiting for a bankruptcy.
Upvotes: 0 <issue_comment>username_7: You've done your research. You're not placing a bet on *it*. You're placing a bet on some stock or commodity performance.
In research, the ethics comes down to conflicts -- reported or unreported. If you design a drug, say, and have IP rights to that drug, readers have reason to suspect that your financial interest creates a situation where you might highlight good findings and suppress poor findings. Thus, you would need to declare your conflict and manage it. In this case, you may be blinded from your own findings, or an independent monitor of your data may need to do any analysis before you do. Readers would be made aware of the conflict and mitigating strategy.
Here, it doesn't feel like you would be creating a conflict. You're not trying to generate cherry-picked data for your model. You're trying to make money. If you, however, were going to write about your model, and you wanted to publish your experience to make your model more valuable for licensing purposes, you may well be creating a conflict.
You might discuss this with your chair (I'm not a lawyer), and see if you need to update any of your conflict reporting forms.
Upvotes: 1 <issue_comment>username_8: I think some (but not all) of the answers here are over-interpreting your use of the word "bet" to focus on some more trivial cases: the sorts of bets one might have among friendly rivals of different sports teams, for example. Petty cash, bottles of liquor, risking gastrointestinal distress by drinking something one wouldn't normally.
However, the question as asked is not about these **trivial bets** but about having a *financial stake in the matter*. This changes things into a potential *conflict-of-interest* scenario. The key in this scenario is *disclosure*: you'll need to reveal your position (possibly even if it is a future planned position, though that might spare you from some legal if not ethical rules).
Yes, these conflicts may impact whether a journal wants to publish your paper as well as how others view it. Of course, if you turn out to be correct, then it will be hard for others to see the research too negatively. On the other hand, if you turn out wrong, you might look bad beyond the cost to your pocket: it may appear, whether intended or not, that the paper was just a ploy to influence the investment patterns of others to your benefit. In other words, consider that you may be making more than just your financial bet.
These issues come up in the pharmaceutical literature all the time, though more often in the form of direct payments (unless the authors are actually employees of the company). I don't think there is anything inherently wrong in papers being funded by a company with an interest in the results (and I think there is scientific benefit to having those papers released *publicly* rather than being used only for internal decision making, or selective release of only positive results), but when some product has a bunch of mixed literature from the broader scientific community and only rosy publications from those with a financial stake, well, it doesn't look that great: for the company, product, or authors.
However, peer review should always be focused on a specific submission, not the past. Dabbling a bit in the sort of thing you discuss doesn't seem like it would be harmful long-term; building a reputation as a charlatan is another matter.
Upvotes: 4 [selected_answer]<issue_comment>username_9: I think one additional point of view, not taken in the previous answers, is to start from a definition of a conflict-of-interest. There are of course many definitions, but this one seems to appear in slightly different forms in Elsevier's journals:
>
> When an investigator, author, editor, or reviewer has a financial/personal interest or belief that could affect his/her objectivity, or inappropriately influence his/her actions, a potential conflict of interest exists.
>
>
>
Now, first, does this betting influence your objectivity? Most probably not. There is no incentive for you to be wrong, and the betting takes place *after* you have performed rigorous research.
Second, do you have a personal interest that affects your actions? The answer to this question seems to be *yes*. However, are these actions *inappropriate*? For more context, *inappropriate* would here usually mean having some kind of dual commitment:
>
> Conflict of interest exists when an author (or the author’s institution), reviewer, or editor has financial or personal relationships that inappropriately influence (bias) his or her actions (such relationships are also known as "dual commitments", "competing interests", or "competing loyalties").
>
>
>
This should clear up the issue. Whatever happens during peer review or post-publication, you must first be loyal to the research. For example, you must not refuse changes that peer reviewers or editors request simply because making them would cause you monetary losses.
Thus, I would say this does influence the publishing process. More often than not, manuscripts are heavily edited after being first submitted for publication; the editor and reviewers will be unsure whether you can be trusted to make the requested changes. Declaring the conflict of interest of course does give you +1, but it is still easy to go below zero.
Upvotes: 1
|
2021/07/02
| 822
| 3,509
|
<issue_start>username_0: If a PhD student were to transfer universities after completing
1. Course requirements
2. Qualifying exams
but not yet started a dissertation, could the student start at a new university with ABD status?
Related, would it matter if the exam committee remained the same but the university affiliation was the only difference?
Notes:
* There are several questions asking about transferring universities after failing qualifying exams, but this one assumes all results have gone well.
* Unlike [this question](https://academia.stackexchange.com/questions/151144/would-leaving-my-current-phd-program-abd-be-a-red-flag-when-applying-to-other-un), the goal would not be to start from scratch, but rather to write a dissertation at a different university.
* [This question](https://academia.stackexchange.com/questions/59558/leaving-phd-in-the-6th-year-and-apply-for-a-new-phd-program) is similar but also discusses ABD status as a negative attribute, which is not the intent in this question.<issue_comment>username_1: It is sometimes possible to transfer between doctoral programs and get credit for all the coursework you have completed and for having already passed the Ph. D. qualifying exam. However, I think the odds are probably against it. Getting credit for some of the coursework is generally possible, although many departments place limits on the number of graduate-level credits that can be transferred from another university, and many will also insist that Ph. D. transfer students take the qualifying exam, regardless of whether they had passed it at their original institution.
In my department, we have only given a transferring student credit for passing the qualifying exam once, and that was in a fairly unusual situation. Our department had hired an endowed chair, and the new chair wanted to bring along her current graduate students, one of whom was quite advanced—finished with coursework and the qualifying exam. The chair wanted the advanced student to get credit for everything she had already done, so all that she would need to complete in our department was her dissertation. In response to the request, our director of graduate studies scrutinized the student's course record and decided that it was satisfactory. Then he and I (who am chair of the committee that oversees the qualifying exam in our department) together looked at the parameters of the qualifying exam in the student's original program, and we determined that it was very similar to the exam in our own department. On the basis of these investigations, we decided to give the student credit for everything she had done in her old program, as her advisor had requested.
So it is possible for this to happen. However, for a student without a powerful sponsor (a newly hired endowed chair is a pretty important person in the department), we probably would not have even considered making a special exception, and we would have required the student to retake a certain number of advanced classes and to take our departmental qualifying exam. Other departments might be more accommodating, but I still think the odds would definitely be against you.
Upvotes: 3 <issue_comment>username_2: As a doctoral supervisor I could say that this is difficult for all involved. The transfer will depend on (1) whether it is a USA institution, or (2) British university, and (3) EU University, or (4) Other. The ultimate answer to your question lies with the respective institution.
Upvotes: 0
|
2021/07/02
| 618
| 2,477
|
<issue_start>username_0: The question is motivated by the fact, that [Physical Review Research](https://journals.aps.org/prresearch/) still did not get its first impact factor and a date it happens given on their internet page:
"We are hopeful that Physical Review Research will be fully indexed soon and will receive its first Impact Factor in June 2021"
was changed to [June 2022](https://journals.aps.org/metrics).<issue_comment>username_1: I don't think anything has been "suspended." *Physical Review Research* is still listed in the Emerging Sources Citation Index (ESCI), which is where most or all new journals start nowadays. Journals listed in ESCI don't receive an impact factor. Only when the journal is transferred to the Science Citation Index Expanded can it receive an impact factor.
Upvotes: 2 [selected_answer]<issue_comment>username_2: This one is fairly obvious. The journal wasn't good enough to be indexed by Clarivate (the company that manages the Science Citation Index and calculates the impact factor) in 2021, hence the journal is hoping to be indexed in 2022 when Clarivate do next year's assessment.
For clarity, [a lot of things are needed before one can get indexed by Clarivate](https://clarivate.com/webofsciencegroup/wp-content/uploads/sites/2/2019/08/Journal-Evaluation.pdf). There are over 25 criteria. Some of them are not obvious, such as:
>
> The authors must have affiliations, geographic diversity, and publication records that validate their participation in the scholarly community associated with the stated scope of the journal. The demographic of the contributing authors should be consistent with the topical and geographic characteristics of the Editorial Board.
>
>
>
Other criteria are practical:
>
> Cited references, names, and affiliations must be published in Roman script to
> allow rapid, accurate indexing, and easy comprehension by our global users.
>
>
>
And some can cause serious headaches for new publishers:
>
> The journal must provide a readily accessible, clear statement of the commitment
> to peer-review and/or editorial oversight of all published content. Primary
> research articles must be subject to external peer review.
>
>
>
It's improbable APS will screw up the latter two of these examples because they've done them before for other journals, but the first is not something that's within their direct control, and that could have caused the application to fail.
Upvotes: 2
|
2021/07/02
| 862
| 3,625
|
<issue_start>username_0: Hello dear academic community,
**I am having a lot of confusion regarding what it is that determines your PhD field**. As far as I know, PhD is an academic title awarded mainly in the humanities, and the way I understand it a person who writes a dissertation on Literature and some who writes one on History, will both be “PhD”.
My question comes from the fact that I was offered a PhD research position in a project that takes place in the Faculty of Linguistics at a university. This is an interdisciplinary research project which encompasses Linguistics but also the Social Sciences such as Geography and Anthropology. The project director is a Linguist, and she offered me the PhD position in her department; however, I do not have any training in Linguistics, but rather in Sociology and Anthropology. My research proposal would therefore relate to language and its social and political dimensions, and it would be written from an anthropological perspective, since I feel I am completely unable to make any kind of linguistic analysis.
Now, when people ask me in which field I would be doing my PhD, I kind of do not know how to answer. **What determines your PhD field? Is it the department? The supervisor’s research field? Or the topic?** I usually say it would be in Anthropology since I am planning an anthropological project. However, I do not know if this makes any sense at all. **Does it make sense that I write an anthropological dissertation in a Linguistics department?** **Would that make me a PhD in Anthropology?**
I hope someone can help me.<issue_comment>username_1: Your PhD degree certificate itself will probably give the formal definition of your field. For example, mine says PhD in cosmology. But, if I was talking to someone who didn't know much about the different areas of physics, I might just say I did my PhD in physics. If I was talking to a cosmologist, I'd say I worked on dark energy, and specifically testing interacting dark energy models.
In your case, I think it's fine for you to tell people that you're working on an anthropology project in the linguistics department. I can't think of any context where it really matters one way or another the exact wording on your degree, or which department you did it in. If you're applying for an anthropology job, just make it clear that your thesis has an anthropology focus, despite your research being done in the linguistics department. In fact, to me this seems more interesting and will presumably open you up to far more different ideas and research opportunities than if you were working in a pure anthropology department.
Upvotes: 2 <issue_comment>username_2: It is highly dependent on location, but what it says on the certificate usually somewhat depends on what organisatorial sub-unit of the university is responsible for granting you the degree, usually that is either a faculty (e.g. humanities) or a department (e.g. linguistics). In general they will have some regulations that allow them to hand out degrees in a certain list of names. E.g. neither will be able to grant you a PhD in engineering.
But if it is up to the faculty of humanities, they might be able to hand you a PhD in anthropology even if you work for the department of linguistics. On the other hand, if it is up to that department, they might not be able to.
And in any case what it says on the certificate is mostly meaningless. If you continue in academic jobs, people will be more interested in your actual research, and if you leave for something different, the precise nature of your PhD will no longer matter.
Upvotes: 1
|
2021/07/02
| 1,052
| 4,477
|
<issue_start>username_0: I am a PhD student in the final year of my PhD who has been struggling with creating new connections in my field, especially due to the current Covid situation. Conferences are only taking place online and nobody of relevance really sticks around for longer than their own talks. Also, my advisor unfortunately does not have that many contacts in my field to begin with.
Recently, I found out there is another researcher in the same field as me, some years my senior, who comes from the same country as me (Brazil). As it happens, we are, I suppose, the only two people from there who currently work in this field (but I am sure he is not aware of my "existence" just yet).
Since I also find his research interesting, I thought about cold-emailing him with some form of collaboration or even a postdoc in mind. There is already a myriad of advice on this site in that sense, of course; I was wondering how the common background may affect the best strategy to do so. Perhaps I could use it to make my email stand out from the rest? Still, my fear is that this might be seen as "lame" and that I might end up being perceived as only looking for "favors" on the basis of being from the same country as him.
**Question:** Can I somehow use the common background to my advantage? Or do I risk shooting myself in the foot by doing so? Or am I just overthinking all of this and the main focus should be on the research anyway (where the common background is nothing but a nice triviality)?<issue_comment>username_1: If you want to email him simply to establish an open-ended connection, I don't know how likely it is that you'll get a reply. If you're interested in doing some research under him, consider if you will be able to spare the time to do so, given that you're already pursuing a PhD. Maybe keep your advisor in the loop about all this, though I am not sure how he/she will perceive this.
If you DO want to conduct research under the other researcher and don't see any problems, then yeah, by all means, go ahead and email him. I think you can be honest about your reasons for emailing him: that networking with researchers from other countries has gotten much more difficult due to Covid-19, that you are interested in his research and that the common background makes this a more attractive proposition to you in general. Maybe mention what exactly you intend to accomplish working with him.
I don't think mentioning the common background will be a disadvantage by any means. I certainly don't think that it will be seen as "lame". I also don't see why he would think you're looking only for favors, as long as you talk primarily about his research and your work in the same field. You will not be shooting yourself in the foot. Ideally the main focus WILL be on the research, and the common background will be nothing but a nice triviality.
Upvotes: 2 <issue_comment>username_2: I think if you *use a common background to build rapport* you're good to go. There's no guarantee it'll increase your chances of a response, but it might pique a little interest. I see no harm in it. I'd expect it might be more effective if you had come from a smaller country (or restricted region in a larger one), but in a narrow enough field it might be notable enough.
If you think of it as an *expectation for assistance* then you're on shakier ground, and the insinuation is even likely to offend someone. It seems you've already got this concern covered, though, so I think that will prevent you from saying anything too off-putting.
Cold emails are generally low-success ventures, but there are lots of advice Q&A on this stack about them. Be brief and direct, don't be overly persistent or upset if you don't get a response. The shared background can be added as a brief half-sentence to one-sentence note. You ask:
>
> Or am I just overthinking all of this and the main focus should be on the research anyway (where the common background is nothing but a nice triviality)?
>
>
>
Yes, I think that's the best way to phrase it - the background is a nice triviality. But sometimes social and professional relationships can build from trivialities, especially shared experience. Have a focused research/science goal for your message, not just "Hi I am also from Brazil!" Think more specifically about what your goals are, like @henning asked in a comment. Even "collaboration or postdoc" may be too broad/vague.
Upvotes: 4 [selected_answer]
|
2021/07/02
| 724
| 2,957
|
<issue_start>username_0: I am planning to go to one of the U.S. universities for sabbatical. (I do not live in U.S.) Most of my salary will be paid by my university, but my host university will pay for my apartment in the U.S. and provide health insurance for me and my family.
When I asked people I am going to visit what they expect me to do during the sabbatical, I was told that I am expected to regularly attend a specific seminar.
**Is it generally ok if, in addition, I also regularly attend one more seminar, which is organized by another group of people (who do not pay me)? Would it create any conflict of interests? The second seminar is not connected directly to my field, but I am just curious about it.**<issue_comment>username_1: I don't think this would ever raise an ethical concern. Your stipends and such aren't provided to make you think in a particular way even if you are funded to provide some assistance to a particular group. Using a sabbatical to expand your reach is an excellent use of the opportunity. Have a bit of fun while you are at it also. Nothing wrong with that.
Do what is expected of you, but don't limit yourself beyond that. Sabbaticals would be pretty useless if they didn't permit professors to expand their professional horizons.
One thing to watch out for, though, is that some departments can be incredibly political (office politics and beyond), with a lot of ill feeling between groups. Don't get caught in the middle of any such things. I doubt that this is a universal, but I've seen it happen. Newcomers don't see all the signs.
Upvotes: 4 <issue_comment>username_2: Your home institution has a reason for paying most of your salary while you travel elsewhere and don't have your usual teaching duties. I would expect that part (or maybe all) of that reason is to enable you to expand your knowledge and interests. So think of that second seminar as the "work" for which they pay your salary.
Upvotes: 2 <issue_comment>username_3: They’re just setting a minimum bar for what it means for you to be visiting them. You’re free to do any other things you want like anyone else in the department. They just want to make sure that you’re actually interacting with the department and not say actually just traveling around the US the whole time.
Upvotes: 3 <issue_comment>username_4: The goal of a sabbatical is to expose yourself to new ideas, grow as a researcher, and contribute from your knowledge to the local community you are visiting. The goal of a seminar is to spread knowledge and foster collaboration. So go, have fun, and attend as many seminars as you like, this is 100% consistent with the intended purpose of sabbaticals and seminars. Anyone who doesn’t want you in their seminar or going to someone else’s seminar has an extremely misguided approach to research and to academic life. For the record, I’ve never in my life met a person in mathematics who holds such strange views.
Upvotes: 3
|
2021/07/02
| 446
| 1,852
|
<issue_start>username_0: This is just for an example: suppose, someone finishes a Ph.D. in Biology. Now, **for some reason** (maybe a hobby, to invent something new, etc), he wants to study Electrical Eng. So, he gets admission into an MSc in Electrical Eng program.
Is it normal in the USA/UK?<issue_comment>username_1: It's not normal, but it does (rarely) happen in both countries.
Upvotes: 2 <issue_comment>username_2: (US perspective to this answer)
No, it's not normal.
However, it's also not ridiculous or anything. Unlike PhD degrees, which principally prepare you for research, MSc programs are often more like professional programs.
It's weird to do a second PhD (even though many people seem to ask about it on this site), because most of what you learn in one PhD ("how to participate in academic research") is transferrable to other areas, and PhD positions are often funded by governments and universities to train another generation of scientists. Those funders don't have in their mission statements to train and retrain and retrain people who can't decide what field they want to be in.
I think the scenario where you'd most likely see someone do a MSc after a PhD would be to do a career shift to a field in which there are *industry positions*. For example, a biology PhD might do a masters in statistics/data science or engineering/computer science if those degrees would help them get a job in those fields. It is not a path someone would go down on the academic career path.
A MSc does not seem like a good path for someone interested in a hobby project - masters programs do not typically come with funding, and are quite expensive. There may be continuing education opportunities to let someone take *individual courses* towards a hobby project for a reasonable fee rather than enroll in a full degree program.
Upvotes: 2
|
2021/07/03
| 1,793
| 7,666
|
<issue_start>username_0: Lately, I have given a couple of interviews for Integrated Ph.D. programs in Physical Science. On some I was selected, while on others I was rejected. But I find only trifling differences between these interviews. Whenever I gave answers, they'd say, *good* and so on. I gave all of the questions asked a similar response. I don't quite understand how they actually selected the candidate then?
The ones where I'm not selected are the important ones for me, I have another chance and I don't want a similar case as before. If they give a similar response, I have no way of knowing what's going on?
So is it all right if I mail them and ask why they didn't select me? Is it appropriate to ask something like that? I just want to improve my chances, that's all.<issue_comment>username_1: As mentioned in [this answer,](https://academia.stackexchange.com/a/105034/13240) you can say:
>
> Do you have any suggestions for how I could be a stronger candidate in the future?
>
>
>
I think it is unlikely you will get a useful answer, but you might.
Upvotes: 5 <issue_comment>username_2: You should not ask. It is extremely unlikely to produce anything useful, but will probably make the interviewer uncomfortable. Instead, try to arrange for a mock interview with a mentor who is familiar with your background, and get feedback from them.
Many potential weaknesses would require long-term actions for improvement or are even unchangeable, and thus feedback there isn't going to be actually actionable for you. Better grades in your previous studies, more research experience, etc.
The most useful information for you would be situations where you've failed to bring your existing strengths across. Learning of this is immediately actionable for your next interview. But an interviewer cannot identify these situations, because by definition they remain unaware of the relevant strength. Keep in mind that PhD admission interviewers are trying to determine whether you are a good fit for their PhD programme, not whether you interview well. So if they can isolate a pure interview mistake, they'll probably attempt to correct for that anyway.
Besides the issue that discussing someone's failure is awkward anyway, interviewers might also be worried that you want to hear the reasons for rejection not in order to improve yourself, but in order to either argue with them about the decision or to appeal it. Given this risk, and given the difficulties in providing useful feedback, I suppose that you'll hear some variation of "Unfortunately, we cannot provide detailed feedback to unsuccessful candidates. We wish you all the best in your future endeavours" if you ask.
(I would expect my answer to be broadly applicable, but my concrete experience with this is UK-based.)
Upvotes: 5 <issue_comment>username_3: You can ask, but I doubt that you will learn anything that would make you a better candidate in future. The reason is that there are probably a limited number of slots and some competition for all of them, especially at the margins. My best guess is that they just thought more highly of someone else. They might say that much, but would probably be restrained from saying anything more. In particular, they won't discuss any comparisons with the person(s) who end up being accepted.
Very few institutions can accept all applicants and you probably wouldn't want to go there if they did.
The differences between candidates can be very subtle, and depend on the individual views and preferences of interviewers and others in the process. So, it isn't that you are "bad" in some way, probably quite good or you wouldn't get an interview at all. But someone else was judged better, perhaps on somewhat intangible criteria. In some places the competition is very steep.
Upvotes: 3 <issue_comment>username_4: It is quite OK to ask for feedback after an unsuccessful interview. Ask the lead academic who asked the questions - not the HR administration. Yes, academics are overworked and get too many emails, but most of them recognise that if they've turned someone down for an important position they at least owe the unsuccessful candidate a couple of minutes to write a quick email.
The answer may well be unhelpful: "You were OK but another candidate was better" or "We were looking for someone with a deeper theoretical background" (or perhaps "a more practical background"). But there is a small but nonzero chance of getting some useful information, about your CV or your interview style, and if that happens it will be really valuable.
What you must absolutely not do is argue with their advice if they give it. It won't be pleasant, and it may seem to you that they've misjudged you ("How can they say I don't have a theoretical background when I got 95% on QM345! Didn't they read my CV?"). Don't argue; just thank them and move on.
Upvotes: 4 <issue_comment>username_5: Other people have given some great answers. I know which place you got rejected from and why it matters a lot. I also got rejected from there and I would like to tell you certain things I have found out since then:
1. The interview committee you get matters more than you think. I have heard that very experienced professors may underestimate the knowledge of their undergrad interviewees, and the grades they give tend to be inflated. Young professors, on the other hand, sometimes make things too stringent. Let me give you an example: I talked to a friend who got selected, and (s)he gave me a list of questions (s)he had been asked:
a) What is the partition function?
b) Calculate mean energy of this system from the partition function. Also calculate Cv
c) Random walk problem in *one* dimension.
d) Solve the SE for a particle in a bound state of a finite potential barrier; what is the transmission probability?
e) Problem on blackbody radiation considering Stefan's law.
I forget the rest.
What was asked of me?
a) Where do electrons reside in any element, say silicon?
b) If I take a piece of silicon and shed light of a constant intensity on it with a potential applied around it, draw the graph of frequency vs current.
c) (Dis)prove that there will always be a bound state in a potential well of arbitrary shape.
d) Draw the ground state wavefunction for an arbitrary potential well.
e) Why do we keep using the momentum eigenfunctions in calculations when they are non-normalizable? (The answer was *not* just that, for simplicity, a wavepacket with a good spike can be approximated by a plane wave.)
f) A question completely unrelated to physics, related only to logic, mathematics and calculus, which took me around 3-4 minutes to grasp completely.
Now I could be totally biased, and I am sure to a certain degree I am. But I think the second set of questions was considerably harder than the first one. In no way do I claim that my knowledge is more or less than my friend's; that is irrelevant. But the interview committee really matters.
2. Your score in the qualifying exam carries a weightage. Even if you were the best during the interviews, a low score in the qualifying exam hampers your chances of acceptance.
3. Online interviews, for me personally, can turn out to be counterproductive. I was solving things on a shared screen, while most people were solving things on pen and paper and only showing results. Getting stuck on a shared screen has a way of sending a bad message to the committee. Stuck on paper, who knows?
I hope this answer helped you.
Of course there is the fact that you and I were just not competent enough for the place, but that is not the *only* thing we should be thinking of while analyzing rejections in academia.
Upvotes: 1
|
2021/07/03
| 1,192
| 3,918
|
<issue_start>username_0: [This journal](https://www.resurchify.com/impact/details/14875) has an impact score and impact factor:
* Impact Score 5.94
* Impact Factor 6.714 (2019)
>
> The journal Impact Factor is the average number of times articles from
> the journal published in the past two years have been cited in the JCR
> year.
>
>
> The Impact Factor is calculated by dividing the number of citations in
> the JCR year by the total number of articles published in the two
> previous years. An Impact Factor of 1.0 means that, on average, the
> articles published one or two years ago have been cited one time. An
> Impact Factor of 2.5 means that, on average, the articles published
> one or two years ago have been cited two and a half times. Citing
> articles may be from the same journal; most citing articles are from
> different journals.
>
>
> For example, the journal PLoS Biology's 2010 impact factor is 12.472.
>
>
> This was calculated thusly:
>
>
> 5076 = total of all citations from 2010 articles to PLoS Biology articles published in 2009 (1971) and 2008 (3105)
>
>
> 407 = total of PLoS Biology articles published in 2009 (195) and 2008 (212)
>
>
> 5076 / 407 = 12.472
>
>
> The number by itself does not mean as much. If you knew that the
> journal with the highest impact factor has the number 94.333, you
> might think 12.472 was quite low. But when you look at the impact
> factors of all the Biology journals indexed by JCR, PLoS Biology is
> ranked No. 1 in the Biology subject category.
>
>
>
<https://guides.uflib.ufl.edu/c.php?g=147746&p=967441>
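For concreteness, the two-year calculation quoted above is just a division, which can be sketched in a few lines of Python (the `impact_factor` helper name is mine; the numbers are those from the quoted PLoS Biology example):

```python
def impact_factor(citations_to_prev_two_years, articles_prev_two_years):
    # JCR-year citations to the previous two years' articles,
    # divided by the number of articles published in those two years.
    return citations_to_prev_two_years / articles_prev_two_years

# PLoS Biology, 2010 JCR year, per the quoted example:
# 1971 citations to 2009 articles + 3105 citations to 2008 articles,
# over 195 articles (2009) + 212 articles (2008).
jif = impact_factor(1971 + 3105, 195 + 212)  # 5076 / 407
print(round(jif, 3))  # 12.472
```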
Everything I am finding keeps talking about impact factor. The info of [another journal](https://www.resurchify.com/impact/details/21100415502) makes it seem like they are the same, since only "impact score" is listed.
However, if both are listed like the first journal/link, which one should be used?<issue_comment>username_1: The [page you linked](https://www.resurchify.com/impact/details/14875) says it (almost) all:
>
> The impact score (IS), also denoted as Journal impact score (JIS), of an academic journal is a measure of the yearly average number of citations to recent articles published in that journal. It is based on Scopus data.
>
>
>
Both these measures are variations of "average citations received per article", but they are computed on slightly different databases and so give different results. The Impact Factor is the best known and most recognized, but many alternatives have appeared, partly because the IF database is paywalled and does not include some fringe journals.
Upvotes: 2 <issue_comment>username_2: >
> The impact factor (If), also denoted as Journal impact factor (JIF), of an academic journal is a measure of the yearly average number of citations to recent articles published in that journal. It is based on **Web of Science data**.
>
>
>
>
> The impact score (IS), also denoted as Journal impact score (JIS), of an academic journal is a measure of the yearly average number of citations to recent articles published in that journal. It is based on **Scopus data**.
>
>
>
<https://www.resurchify.com/impact/details/14875>
>
> CiteScore is another metric for measuring journal impact in **Scopus**. The calculation of CiteScore for the current year is based on the number of citations received by a journal in the latest 4 years (including the calculation year), divided by the number of documents published in the journal in those four years.
>
>
>
<https://libguides.lb.polyu.edu.hk/journalimpact/citescore#sthash.LxZdZLH3.dpbs>
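Since CiteScore is also just an average over a citation window, the same kind of sketch applies, only with a four-year window (the `cite_score` helper and the numbers below are made up purely for illustration):

```python
def cite_score(citations_last_4_years, documents_last_4_years):
    # Citations received in the latest 4 years (including the calculation
    # year) to documents published in those 4 years, divided by the
    # number of documents published in the same window.
    return citations_last_4_years / documents_last_4_years

# Hypothetical journal: 1200 citations to 400 documents over 4 years.
print(cite_score(1200, 400))  # 3.0
```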
Therefore, for lesser known journals that have both, it makes sense to put them in this form:
* Impact Score 7.14 (2019, Scopus data)
* Impact Factor 6.714 (2019, Web of Science data)
Then I just wonder if it's better to list the journal's impact factor for the most current year or from the year the article was published...
Upvotes: 0
|
2021/07/03
| 905
| 3,710
|
<issue_start>username_0: Research suggests that high-altitude training can have a positive impact on athletic (especially aerobic) performance. It is also known that high altitude (beyond a certain threshold value) is associated with a decrease in athletic ability and cognitive function. However, is there any evidence suggesting that doing research (or training to do research) at a high altitude could positively impact one's performance after a descent to a lower altitude? Has anyone tried to quantify the effects of altitude on one's academic/research performance, especially in the range below what is normally considered a high altitude (~5000 ft or 1500 m)?
I am asking this question out of mere curiosity.<issue_comment>username_1: As the comments already indicate, you're not likely to get any useful answers to your question, in particular and also because there is no practical way of making use of the effect if it exists: Unlike sports where you can focus on one event, research is a year-round thing. You can't go to high altitude for a month and then come back for 2 weeks of intense research.
Empirically, however, Colorado hosts two very good research universities, both of which are located at around 5000 ft (1500 m). At least, it seems, the altitude does not negatively affect the cognitive abilities of those who work there. In fact, and that includes myself, many of us find happiness and relaxation at the much greater elevations of the 54 mountains in the state that exceed 14,000 ft (4267 m), where thin air is most definitely a thing. If you asked these people, they will probably say that their research productivity is far more affected by the happiness that comes from living in a beautiful part of the country than by whatever thin air we breathe.
Upvotes: 3 [selected_answer]<issue_comment>username_2: The effects of being at a high altitude can be simulated at low elevations using an [altitude tent](https://en.wikipedia.org/wiki/Altitude_tent). I believe this is used for training purposes by athletes. I once saw an interview with [<NAME>](https://en.wikipedia.org/wiki/David_Blaine), who explained that he slept in such a tent every night for six months as part of his preparation for his (at the time) world-record-breaking stunt of holding his breath for around 17 minutes.
The effects on the brain of doing research at a low elevation can also be very easily reproduced by breathing a mixture of gases with a higher concentration of oxygen, eg, [nitrox](https://en.wikipedia.org/wiki/Nitrox), or even pure oxygen, which would be the atmospheric air equivalent of a (hypothetical) elevation of many thousands of feet under sea level. During a sports event this is impossible to do, but for research, who would stop you if you wanted to do it?
From this I conclude that if breathing more/less oxygen was known to be “performance enhancing” for research and other mental activities, we would actually be hearing people talking about obtaining some oxygen for regular home/office use, and/or buying altitude tents. I have not heard such talk, but maybe that means that I just mingle with the wrong crowd and need to get out more. Related ideas [do seem to exist](https://en.wikipedia.org/wiki/Oxygen_bar), though they are pretty fringe and seem to appeal mostly to “alternative facts”-oriented people.
**Note: I do not endorse the breathing of exotic gas mixtures without the necessary technical and medical knowledge. Breathing pure oxygen for extended periods will cause [oxygen toxicity](https://en.wikipedia.org/wiki/Oxygen_toxicity) and is extremely harmful, and there are some safety issues with storing the oxygen and handling it safely.**
Upvotes: 1
|
2021/07/03
| 680
| 2,845
|
<issue_start>username_0: Our university has the option where a student can withdraw from a course within the first month from the commencement of the course. This leads to a 'W' grade and is generally not equated with a 'Fail' grade. I took an Artificial Intelligence (AI) course and had to withdraw from it, as I could not manage the workload due to several other courses which were somewhat difficult. I later decided to take a Machine Learning (ML) course the next semester and have cleared that course. I have also taken a few MOOCs in this field, but only after graduating.
Now I wish to apply for an MS in CS, preferably in Canada or the US, as an international student. I wish to pursue an AI/ML specialization, or at least mention in my SOP that I have a strong inclination towards this field. Should I explain the course withdrawal, which can be seen on my transcripts?<issue_comment>username_1: No, you don't need to explain this as it's just a single course on your transcript. It's pretty common to have one W; it does not reflect badly on a student unless it's multiple courses over multiple semesters.
Upvotes: 1 <issue_comment>username_2: If this is the sort of thing you focus your SoP on you are making a big mistake - missing an opportunity. Your SoP should be about the future; your plans, goals, and how you intent to achieve them. Wasting words on 'splaining old stuff won't get you accepted. The CV and transcripts are about the past. The SoP is about the future. Only bring up the past for things that directly and emphatically support the future.
If asked about the W, be prepared to give an honest explanation, but things that sound like apologies for things you consider missteps needn't be brought up proactively. If you think you are qualified, say that somewhere, though the SoP has a different purpose.
Upvotes: 2 <issue_comment>username_3: No, it is not *necessary* to justify the W on your transcript; however, it could be *useful* to address it indirectly.
>
> I took a course in Artificial Intelligence (AI) Course and had to withdraw from it as I could not manage the workload due to several other courses which were somewhat difficult.
>
>
>
Self-assessing your limits is a valuable skill. You withdrew from one course in order to meet your learning goals in the others, which is a responsible way to handle a heavy workload.
>
> I have later taken few MOOCs too in this field, but only after graduating.
>
>
>
This is the type of thing that you might consider emphasizing in your SoP, rather than dwelling on past negative experiences (especially if other areas of study are not present on a transcript).
In short, you should use your SoP to highlight your strengths and explain why shortcomings (or perceived shortcomings) of the past will not be an issue in your future studies.
Upvotes: 2
|
2021/07/03
| 3,154
| 13,338
|
<issue_start>username_0: In math/CS and related fields, it is almost unheard of to obtain a tenure track position at a top US university without doing at least one postdoc. However, on mathjobs, a large proportion of the postings are from Chinese universities, and a substantial subset of such postings seem to always be active. From my understanding, it seems that many of these correspond to relatively new universities which are looking to rapidly expand and therefore are continuously hiring (e.g. SUSTech).
Is a postdoc necessary for such positions? I know that many of these universities have some postdoc positions, but the academic hierarchy/typical trajectory for academics in China is a bit unclear to me. There have been some very useful threads on postdocs in China here in the past, so I was hoping that someone would be able to shed some light on my query. At the moment I'm planning to apply to postdocs in the US and China, but wonder if I should add TT positions to my list (or whether that would be essentially futile, as in the US).
Since I suspect it may be somewhat relevant, given that these universities are looking to establish reputations: I'm at a US R1, and my department and neighboring departments have strong reputations in their fields.
EDIT: Just wanted to clarify a few points based on comments/responses:
* I'm mainly focused on research institutions, but more general response are also welcome as they might be of general interest now or down the line
* I think the increased availability of positions extends beyond just new universities in China. Many of the oldest and best universities also seem to have persistent ads. New universities such as SUSTech mentioned above have also become quite respected.
* In general, I'm not sure if my perception that Chinese universities have more ads is true, and even if it is, whether it's due to increased availability of positions or just different hiring practices.<issue_comment>username_1: You point out the difference in your question: You are comparing top universities in the US with new and/or regional universities in China. I don't actually have anything to offer regarding the specifics of Chinese universities (though I'd be excited to work at one sometime!) but small/new/regional universities in the US *also* hire faculty without requiring them to have gone through postdoctoral training.
Upvotes: 0 <issue_comment>username_2: For the record, I am no longer working in Chinese universities, and I'm not from the fields of math or CS. A few years ago, I was doing a tenure-track position in a university in one of the top 5 largest Chinese cities. I am merely writing this answer since no one else has come to the fore, but I am hoping that someone who has direct experience of doing a postdoc in China can come here to give more accurate info. All I can do is provide a rough outline and some words of advice.
In recent years, China has been implementing a new system of recruitment roughly known as the "pre-hired/long-term hired" system (预聘/长聘). Note however that depending on the university, you can see other variants of these names such as 准聘, 校聘, or 特聘, all of which have their own specificities. More information on these terms can be found in [this Chinese website](https://zhuanlan.zhihu.com/p/342032868) (use Google Translate, it should give you a good idea).
Basically speaking, "pre-hired" is the Chinese equivalent to being on the tenure-track system, and "long-term hired" means that one becomes officially tenured faculty.
Even though these recruitments are written in English, their main objective is to attract Chinese mainland students who graduated from top universities overseas, since they are likely to be more supportive of local politics and better-integrated into the country's culture. That said, you can apply and you will be certainly considered for a position.
The actual system of recruitment and its details vary somewhat from university to university, but for most universities there are usually two key periods throughout the year when the university is receiving CVs from Chinese and foreign candidates, and conducts interviews to decide who will be hired. Some universities appear to receive CVs all year long.
Each time, there is a limited quota of positions that can be filled. Universities in China are feverishly expanding their campuses into new locations, and are desperate to hire faculty members who can push the university up on the THE rankings or QS World Rankings. Key points are to publish as many papers as possible in SSCI journals, and attract foreign staff from top international institutions.
Regarding rank names, this "pre-hired/long-term hired" system has two variants: for the sake of convenience, let's call them the "old system" and "new system".
I do not recall the exact Chinese names of all the ranks, so I will just write the English ones.
**Old system, from low to high rank:**
(5) Adjunct Researcher/Post-doc
(4) Assistant Teacher (equivalent to lecturer/instructor in USA, adjunct position)
(3) Lecturer (roughly equivalent to an Assistant Professor in the USA)
(2) Associate Professor
(1) Full Professor
**New system, low to high rank:**
(5) Vice-Researcher (equivalent to postdoc)
(4) Associate Researcher (roughly equivalent to Lecturer, but usually with better salary, this is a tenure-track position)
(3) Assistant Professor (can be both tenure-track or a fully tenured position)
(2) Associate Professor (can be both tenure-track or a fully tenured position)
(1) Full Professor (can be both tenure-track or a fully tenured position)
In order for you to be accepted into a tenure-track position, you are required to have obtained a PhD from one of the top 200 institutions in the World University Rankings, and must have at least 2 years of research experience in a university or research institution. These 2 years can be spent either as a "Post-Doc", or in a position with a similar title such as "Researcher", "Assistant Researcher", "Lecturer", etc. This requirement is pretty strict, although with the right connections, it may be possible that some exceptions can be made.
What normally happens after you sign the contract as a post-doc or as a tenure-track faculty member, is that you are expected to publish as many papers as possible, etc., and at regular intervals, you will have to officially apply for a higher position. If your achievements are considered enough as a postdoc, you will get a tenure-track position, which will give you a maximum of 6 years to re-apply for a fully tenured position. If your performance is deemed insufficient, your contract will not be renewed, and you must head elsewhere.
Note that, after you sign your contract, your job title will only say something like "pre-hired teacher" or "pre-hired post-doc", which is a meaningless title. In many universities, after you sign the contract, you actually need to make a formal paper application in order to obtain an official job title such as "Assistant Professor".
Because the population of China is shrinking and money is dwindling, the whole application process is brutally competitive: in a single application season, I have seen as many as 600-650 applicants for approximately 10-15 tenure-track positions. 99% of these applicants are mainland Chinese, many of whom are quite well-prepared. If you do not have any papers published in Web of Science journals, you will have a very hard time being accepted in a top Chinese university, even as a post-doc.
Now, your mileage may vary, and I don't pretend to claim that my experience is valid for all the hundreds of universities in China, but here are some words of advice nevertheless.
If I were you, I would stay well clear of any up-and-coming university, and only focus on the top 5 universities of China in the cities of Beijing and Shanghai (especially since you mention you are from a R1 institution). There are numerous agents and representatives from up-and-coming universities very eager to hire you, but you will quickly find that many of their promises in terms of job conditions and salary treatment are empty or very misleading. The only exception I would make is if you already have a well-placed connection working in one of those universities, who can look after your back and fight to get you the proper conditions you were supposed to get. Representatives might show you nice photos of a clean university apartment, until you realize that place is in a newly-built campus in cheap land located in the middle of nowhere, and you actually need to take a crowded bus for 90 minutes (one way trip) to get to your actual workplace. It is these kind of things that you must take special care to watch out for.
My advice is for you to aim for a top institution that already has a few tenured foreign faculty in their ranks. Believe me, you do not want to be the first foreign hire of a given department. You will waste days and weeks running around various campuses and through the various administrative offices of the city, facing an army of clueless staff who have no clear idea how to process your paperwork as a foreigner. They will call lots of different people, and you might have to return to the same place 2 or 3 times because what a certain staff thought was the correct way to submit the application was actually incorrect, and you will need to rewrite the whole thing.
Since Chinese universities are leaner on administrative staff compared to the USA, you will likely be assisted in your paperwork by lots of different students who will go with you to the various offices around the city. They are wonderful, hard-working kids who are eager to work and learn from foreign faculty, but you will feel sorry for them taking so much time from their own studies to help get you registered, get a bank account, etc.
If you go to a large international city such as Shanghai, and if you choose a top university that already has foreign faculty, then it is much more likely that they already have some form of pipeline to handle your procedures, because practically none of the official paperwork you will have to do in China has any (decent) English translation. Until you master the Chinese language, you will be severely dependent on the aid of students and other faculty.
Another advice is to actually send an email to the foreign faculty in that department to get a clearer picture of the actual work/life conditions there. If you are lucky, one of these professors might offer some actual support for your candidacy, and look after your back while you are there. If you don't have friendly connections that you can trust, life in China can be very isolating and lonely.
Be sure to save as much money as possible before applying. In my case, because of the ridiculous amounts of bureaucracy and paperwork involved, it happened that I had to make a "work" visa to go to China and sign a temporary contract, and then leave the country to reapply for an actual visa that gives you not just a work permit, but also an official residence permit. Once you return to China, then you will sign the actual final contract. Unless the country's rules have changed recently, the same situation is likely to happen to you (be sure to inform yourself about this in advance). Also, because the paperwork for opening a bank account and actually getting paid is so cumbersome, it can take as much as 3-4 months until you actually start receiving a salary.
Finally, you will have to navigate the politics of your new country. You will likely be expected to sign a declaration of ethics, which along with traditional promises of not engaging in falsified research, you will have to check a box where you state that you agree and comply with the Party Chairman's vision for education in the country. There are lots of unwritten rules about what kind of research is "safe" to do, or whether you need to ask special permission to publish a paper that uses a certain dataset, but for the most part your Chinese/foreign colleagues will help you to understand what is safe or not, and as long as you are cautious and follow their counsel, you will probably find that there is very little (if any) intrusion in your research/teaching activities.
If you use your time wisely in China, and spend it on:
* working your ass off to get papers published in Web of Science journals with high impact factors;
* getting a basic, working understanding of the Chinese language;
* successfully applying for government funding of your research;
* helping your Chinese colleagues and students to get their papers published as well in good venues;
Then you will find that you can rise in your career much faster than in the USA (at least twice the speed, if not more).
However, if you are the kind of person who gets homesick easily, or cannot adapt to the culture or food, or cannot deal with being by yourself for long periods, or tend to spend your time in parties with alcohol (you will probably get lots of invitations, be wise about avoiding them as much as possible), you will probably have a dreadful experience. It's all up to you.
I don't really wish to go any deeper than this, and I hope others can give advice that is specifically suited to your needs. Good luck in your career.
Upvotes: 4
|
2021/07/04
| 1,063
| 4,086
|
<issue_start>username_0: I'm currently trying to start a bachelors in Germany as an international student. However, I have to decide between going to a normal university or a so-called Fachhochschule. According to what I have found, studying at a Fachhochschule is more job-oriented and practical instead of theoretical. So that's what I like more.
After finishing my bachelor, I would like to continue my studies in Canada or the USA. I am unsure, though, if I can continue with a masters or PhD in the USA with a bachelors degree from a Fachhochschule. Does anybody have any information or idea? Is the Fachhochschule degree valid in the USA or other countries like Canada?<issue_comment>username_1: From my personal experience both in the US and in Europe, a Fachhochschule will be seen as technical education and not equivalent to a Bachelors. I believe there is a now a route to convert your Fachhochschule degree to a Bachelors equivalent, but don't remember the details or if its true. Personally, if you are thinking about further education, I would go the Bachelors route, since the Fachhochschule will limit your further options.
Upvotes: 2 <issue_comment>username_2: >
> According to what I have found, studying at a Fachhochschule is more job-oriented and practical instead of theoretical.
>
>
>
The followings are from my direct experiences (I got a B.Eng. Maschinenbau from the FH Aachen, but also spent 3 semesters at the RWTH Aachen):
At a *Fachhochschule (FH)*, most subjects require students to complete a *Praktikum* as a pre-requisite for the exam. Students have to work on hands-on tasks, document results, write a report, and then submit it to the TA or directly to the professor.
At a *Technische Universität (TU)* or a normal *Universität*, chances are you only learn theories and take exams. Some professors might offer an excursion with very limited spots to one of his labs, but you can only stand and watch, which IMO is useless.
At a *Fachhochschule (FH)*, subject contents are not as broad as those at a *TU*, but everything that is taught in class will be asked and tested in the exams. Exam questions require you to apply your knowledge to specific situations. None of them ask you to prove a formula.
At a *Technische Universität (TU)*, contents are, let's say, 150% of those taught at an FH. You have to learn everything, even those that won't show up in the exams. Exam questions are a mix of applying knowledge and proving the theories.
Furthermore, the FH Aachen's curriculum also requires students to complete 2 project-based subjects called *Projekt 1* and *Projekt 2*. In Project 1, the students are divided into groups, and each group receives a task from a guest, usually a local company; within 1 week they have to work in teams to come up with solutions, and each team has to present its solutions in front of the rest, including a group of professors and a jury member from the company. Project 2 is a bit more independent. Students are free to organize their own groups, but this time each group receives a task from the university. These tasks can come from the projects that current Ph.D. students are working on, and the groups have around 3-4 months to solve the problems. When I took part in these projects, almost every fundamental concept and skill that I had previously learned was put to use: mechanics (all three), materials, thermodynamics & heat transfer, electrical engineering and electronics, fluid mechanics, and 3D CAD & simulations. This kind of activity is not offered at the RWTH.
Lastly, to your question about a degree from an FH, it depends. According to Stanford's eligibility for graduate admissions, [Germany's three-year Bologna-compliant bachelor’s degree is accepted.](https://gradadmissions.stanford.edu/apply/eligibility) UCLA, on the other hand, explicitly says that [Holders of the Vordiplom, Zwischenprüfung, Bachelor/Bakkalaureus, or a diploma from a fachhochschule are not considered for graduate admission.](https://grad.ucla.edu/admissions/required-academic-records/)
Upvotes: 2
---
2021/07/04
<issue_start>username_0: I continued with a postdoc with my PhD advisor on a tangential topic as my PhD work.
I spent the first 6 months getting my PhD papers out. I have 4 papers from my 5.5-year PhD and 1 first-authored paper with a collaborator. In total, 3 of my papers came out last year. I have also written 1 more paper that I will be submitting this month.
I am writing another paper which I am hoping to submit by August end.
However, I have not worked hard or gained any significant knowledge from the postdoc. Not because I am working with my PhD advisor, but due to constant self-doubt and negative thoughts. I have worked just 30 or fewer hours a week for the past year.
My friends in academia are getting tenure track positions and jobs in industry, while I have got rejections. I have lost drive and ambition to succeed and survive. I have made a mistake getting a PhD.
I have wasted my precious 1.5 years. Is my career salvageable?<issue_comment>username_1: Yes to salvageable (no to doomed), in fact, in many fields this would seem like a good record. It isn't unusual to follow dead ends as a researcher. If everything were assured then it wouldn't really be research.
But, you need to find a way to get your mental sense more positive. For many this means talking to a professional. For some, a break is all it takes - South of Spain or France, perhaps, or the Norway fjords. But you should deal with that explicitly as a high priority for a while.
I'll predict that your record looks better to others than it does to yourself.
---
Note that many universities have a counseling center that is adept at discussing such things as burnout, self doubt, imposter syndrome, self-defeating behavior, etc. See if yours does.
Upvotes: 3 <issue_comment>username_2: First, just because you haven't gotten any faculty offers yet doesn't mean you never will. I recently attended a panel with 6 new faculty hires for fall 2021, and of those 6, 4 of them were in their 3rd year of a Postdoc (I believe the other 2 had a 2 year postdoc). So it's definitely not unusual to be in your position; actually it seems more like average to me. So it's not realistic to say your career is a 'mistake' just because the job offer hasn't come yet.
Also, working 30 hours a week doesn't make you 'lazy', even if toxic culture suggests otherwise.
Take care of your health, mental and physical. Keep making progress on your research. Be ready for the fall 2022 hiring cycle. I might also offer the idea of joining a different lab, since based on your previous question you've been with the same lab for over 7 years at this point.
Upvotes: 3
---
2021/07/04
<issue_start>username_0: When producing a list of publications for a CV, it is sometimes useful to append the impact factor of the journal in which each publication was published.
The question is: should one use the impact factor of the year in which the paper was published, or the most recent one?
For example: I published one paper in 2018 in the Journal "A"; should I use the 2018 IF, or the most recent?
I know there are good reasons behind both choices, but I was looking for some more "definitive" guidance. Thanks to everyone.<issue_comment>username_1: It is NOT common practice to include impact factors on a CV. Indeed, if anyone sent me a CV with impact factors, I would immediately question their competence.
If you are in some specialized situation where they are required, then you should inquire of the people requiring them.
Upvotes: 3 <issue_comment>username_2: The correct use of impact factors is not to use them. But if you are going to use them anyway, normal practice is to use the most recent available impact factor.
You might object that the impact factor at the time of publication is more relevant. That is correct. But people use the most recent impact factors because they are usually higher and easier to find.
Upvotes: 2 <issue_comment>username_3: While not universal, this is actually a pretty acceptable practice in some fields, especially in the life sciences, where there are many subfields and your hiring committee may not be familiar with the reputable journals in your field. I think it’s okay to include them.
This is actually an interesting and thought-provoking question, as the impact factor of a journal does change over time. Personally, if relevant, I’d use the most recent impact factor (usually for the last 5 years). I have two reasons for this: a) citations are cumulative over time, and b) this is what journals tend to use to reflect their current reputation. The citations from the 5 years before your publication may have informed your decision at the time, but they aren’t that relevant anymore, as a journal's reputation can change and the 5 years preceding your publication didn’t include citations to your work.
It’s also worth noting that the impact factor is not an ideal metric. While it is heavily used for hiring and promotion decisions (and that should be acknowledged), this wasn’t its original purpose: the metric was designed not to measure the quality of the work involved but to let libraries decide which journals are the most popular or important to subscribe to on a limited budget. This makes it inherently biased towards older established journals, controversial or hot topics, larger subfields, and review articles. It is also difficult to generalise, as the quality of research published within the same journal can vary widely, as can citations, and the two are not necessarily the same either. Journals newer than 5 years old cannot have an impact factor by definition. Some open access journals, such as *The Journal of Open Source Software*, have decided [not to have one](https://github.com/openjournals/joss/issues/721) as it goes against the journal ethos.
Upvotes: 3 [selected_answer]
---
2021/07/05
<issue_start>username_0: I got this feedback for my thesis paper. Can anyone explain this feedback with details?<issue_comment>username_1: More context would be helpful. It is possible that your result is simulation-based (such as a bootstrap CI, simulated permutation test, or simulated P-value for a chi-squared test with sparse cell counts). Then the referee may wonder whether results are affected by your particular simulation.
In that case, you might (a) show the seed (and software) for the simulation, (b) estimate simulation error, (c) show results of two or three additional simulations, and/or (d) if feasible, use a larger number of iterations in the simulation to make simulation error smaller.
In any simulation, it is good practice to include at least (a) along with (b) or (c).
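As a rough illustration of points (a)-(c) (not part of the original answer), here is a minimal sketch of a percentile bootstrap confidence interval that records its seed and estimates simulation (Monte Carlo) error by repeating the simulation with different seeds; the sample data, function name, and parameters are invented for the example:

```python
import random
import statistics

def bootstrap_mean_ci(data, n_boot, seed):
    """Percentile bootstrap 95% CI for the mean, with an explicitly recorded seed."""
    rng = random.Random(seed)  # (a) fix and report the seed so the run is reproducible
    boot_means = sorted(
        statistics.fmean(rng.choices(data, k=len(data))) for _ in range(n_boot)
    )
    return boot_means[int(0.025 * n_boot)], boot_means[int(0.975 * n_boot)]

data = [4.1, 5.3, 3.8, 6.0, 5.5, 4.9, 5.1, 4.4, 5.8, 4.7]  # made-up sample

# (c) repeat the whole simulation with a few different seeds
cis = [bootstrap_mean_ci(data, n_boot=10_000, seed=s) for s in (1, 2, 3)]

# (b) the spread of the interval endpoints across seeds estimates the simulation error
lo_sd = statistics.stdev(ci[0] for ci in cis)
hi_sd = statistics.stdev(ci[1] for ci in cis)
```

Reporting the three intervals together with `lo_sd` and `hi_sd` lets a referee see directly how much (or how little) the result depends on the particular simulation run.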
Upvotes: 1 <issue_comment>username_2: This sounds like machine learning, where the results of training a model (often a neural network trained with gradient descent) are known to depend on the random initialization of the network parameters. A quick Google search will help you find many more details.
Just re-run the training multiple times with different random seeds (you should manually set them so you can be sure they are different) and report what happens as suggested.
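A minimal sketch of this workflow (not from the original answer): the `train` function below is an invented toy stand-in for real model training, with a seed-dependent random initialization, a gradient-descent fit of a one-parameter least-squares model, and the final loss reported as mean and standard deviation across seeds:

```python
import random
import statistics

def train(seed, steps=200, lr=0.1):
    """Toy stand-in for model training: fit w in y = w*x by gradient descent,
    starting from a seed-dependent random initialization."""
    rng = random.Random(seed)          # set the seed explicitly
    xs = [0.5, 1.0, 1.5, 2.0, 2.5]
    ys = [2 * x for x in xs]           # the true parameter is w = 2
    w = rng.uniform(-5.0, 5.0)         # random init: the source of run-to-run variation
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)  # final loss

# re-run with several manually chosen seeds and report mean and spread
losses = [train(seed) for seed in (0, 1, 2, 3, 4)]
mean_loss = statistics.fmean(losses)
std_loss = statistics.stdev(losses)
```

Reporting `mean_loss` plus or minus `std_loss` (rather than a single run) is what the referee is asking for; in a real experiment you would do the same with your actual training loop and evaluation metric.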
Upvotes: 3 [selected_answer]
---
2021/07/05
<issue_start>username_0: Going through the computer science PhD applications process for US schools, I've heard two conflicting types of advice/anecdotes:
1. While they have a certain broad area in mind before starting their PhD studies, PhD students gravitate towards faculty members working on specific problems that interest them during their beginning years.
2. During the admissions process, potential PhD students are interviewed by the professors they want to work with. In this case, their admission seems to depend on whether that specific professor is recruiting PhD students for that year or not.
This gave rise to a few questions:
Are there two implicitly different tracks to being admitted as a PhD student? This can also be thought of as the difference between applying straight from undergrad (will take courses during PhD) and with an MS degree (can skip some courses).
In other words, are students with different levels of "research-readiness" evaluated differently?<issue_comment>username_1: In most fields in the US including CS (especially theoretical CS), you are admitted to a "program", essentially to a department. The decision about an advisor comes later. If you enter with a BA/BS then it can be much later (years). The first task is to take whatever courses are necessary to get you through prelim/qualifying examinations, which are all on advanced topics. Even some MS entries may need some courses and need to pass quals. About that time (bit earlier or later) you make arrangements with an advisor for your dissertation work.
In some scientific fields the PI decision may need to come earlier, but, perhaps, only in general - with a group or lab, rather than an individual.
But, yes, they are usually separate decisions. In some places in Europe it is quite different and "admission" to a program is the same thing as being "hired" by a PI. It isn't like that in US. A few situations are more like the European model, with a grant funded PI who is able to "admit" doctoral students into their lab or working group. But this is rather rare in US and most funding and admissions is at the department level. See the answer of username_2 for more on the alternative model. Applied fields might be more like this, including Applied CS, since a PI might have funding to work on a specific problem.
Until you pass quals in US, all agreements with PIs/advisors are tentative and conditional.
---
Funding: On the standard US model, most students are funded as TAs or (less frequently) RAs. This includes forgiveness of tuition fees as well as a modest stipend sufficient to live on. Getting such a position is, again, usually up to the department, not the individual professor. Large departments that teach undergraduate courses (math, CS, ...) need a lot of TAs.
A few students are funded by employers, but this seems to be rare currently. It was more prevalent in some places in the past. And, a few international students are also funded by home governments.
Upvotes: 4 [selected_answer]<issue_comment>username_2: To add to @username_1's answer, a lot depends on where funding comes in your research area.
If a department has funding for students, through assigning teaching positions or training grants that fund students directly, students will be admitted per program. It is still up to professors whether they want to take on particular students as their mentors, which may depend on all sorts of things - how many students they have already, whether they think the student is a good fit, etc.
If individual professors have funding for students, then this can present an additional hurdle or an additional path to admission. Professors may be limited by the funds they have to pay graduate students, so a student can't work with a professor who has no available funds even if the professor wants them (unless the department can pick up the slack). Additionally, there may be students that a professor is willing to fund but who miss the cut for department-level funding. These students may be admitted directly into that professor's lab (typically they still need approval from the department, but it's common for the "number of students we would love to have here" to exceed the "number of students we can afford to have here"; it's students that fall in the first but not the second category that this applies to). In my own area of study (neuroscience) this path seems especially common for international students, because some of the other funding sources for training students are limited to domestic applicants.
Programs you apply to should discuss these possibilities.
Upvotes: 3
---
2021/07/06
<issue_start>username_0: Several years ago I taught an upper-division Mathematics course at my former institution. In the course of preparing to teach the course, I discovered that a professor at another university had not only written a text that would be useful to my students as supplemental material, but had also posted a pre-publication version of the same text to his personal (department-hosted) website. In the syllabus, I told my students:
>
> [Name of text and author redacted] This text may be purchased online,
> e.g. through Amazon.com, but it is somewhat pricey. Fortunately, a
> preliminary (and almost complete) version of the text may be
> downloaded for free from the author’s website at [website redacted].
>
>
>
Jump forward about a decade, and I am preparing to teach a version of the same course again at my current institution. It seems that the free, pre-publication version of the text is no longer available on the author's website.
Would it be unethical of me to distribute the copy of the preliminary version of the text that I downloaded (licitly) ten years ago? On the one hand it seems almost certain that the author would prefer students buy the commercially-published version (but since it would be only a supplemental resource in my course, I would not require it in any case). On the other hand the author himself released the text into the wild, and gave permission to people to download it. How should I think about the ethics (and, I suppose, the legality) of this situation?<issue_comment>username_1: My guess is your professor colleague knew the pros and cons of what he or she was doing, including the downstream implications. The later publisher would have understood the situation too. Dust in the wind.
Upvotes: -1 <issue_comment>username_2: The author took a willful, deliberate action of removing the draft from its publicly available location online. The only thing we can infer from this is that he has revoked any implied approval he may have previously given for anyone to download the draft. By the way, giving the right to people to download something does not automatically imply a right to share it; not legally under copyright law, and certainly not ethically. For example, anyone with a bit of tech savvy can download a YouTube video clip of the latest Taylor Swift hit song, but if they post a copy of the video online, you can be sure that they will get in swift legal trouble.
Conclusion: you should not share the draft. You were within your rights to download it, and having downloaded it, have a right to use it for personal use. But unless you were *explicitly* given the right to share it with others indefinitely into the future, you should not be assuming such a right.
Upvotes: 7 [selected_answer]<issue_comment>username_3: If the author can be contacted, ask them politely whether you may distribute the manuscript that used to be available online, explaining the purpose of your request and the size of your class.
Perhaps you will get a positive reply or even an updated version of the file. Perhaps the author will reject your request. The answer you get is the answer you get.
If your request elicits no response, default to not distributing the file. After all, your question here mentions a few reasons why the author might be unable or unwilling to say yes.
Upvotes: 6 <issue_comment>username_4: To quote Daniel's comment "Did the copy you downloaded ten years ago state its licence terms?" If the pdf says that you can share it, then you can share it, *even* after the link has been taken down. If it says otherwise (or more likely, says nothing), then you legally aren't allowed to share it.
Upvotes: 5 <issue_comment>username_5: You ask "**ethical**", and asked on **academia**.stackexchange.com, rather than **law**.stackexchange.com. This makes this question considerably more complex and interesting than it seems at first sight.
---
Legality
========
As Daniel and username_4 point out, if it's explicitly permitted under licensing terms, then it's almost-definitely **legal**.
If you are confident that you are definitely violating copyright law, perhaps because you are charging money for copies without permission or something, then it's almost-definitely **illegal**.
Between the two is the grey-area **legal crapshoot** called "fair use", which is where you can essentially ignore copyright until the copyright holder comes after you with lawyers, at which point you enter a legal gambling game, where the one with the deepest pockets wins. You either settle out of court and agree to stop, or you go to trial, win or lose, then it gets appealed, you win or lose again, and all the way up to the supreme court or the person with the least money blinks and agrees with the decision. Whether you win or lose is *significantly random*, but the odds can be pushed to one side or the other depending on your situation.
Firstly, free educational use can be protected under the first fair use consideration of copyright law. You aren't charging for access to the resource when sharing it, so it's arguably nonprofit; and you're sharing it for educational purpose. So far so good.
But sharing a whole work is *not* usually considered fair use. Sharing any significant parts of the work that were irrelevant to what's being studied would be a copyright violation. However, if *every part* of the work that you shared was relevant to the course, you *might* be legally OK even if you shared the whole work.
So, like I say, it's a legal crapshoot. Definitely check with your institution's legal dept to see if they're willing to let you take this gamble, as both you and the institution may be on the hook if you're found to violate the law under their auspices.
So that leaves you in one of three states: legal/illegal/crapshoot.
---
Ethicality
==========
Establishing legality doesn't answer the question asked.
What you're asking, instead, is whether such behavior would be a violation of ethics, which I interpret to mean **professional ethics**. That is, the explicit standards of behavior for your profession.
Typically, unless there are powerful conflicts between laws and ethics, professional ethics require one to act within the constraints of the law. Violating copyright to save a few bucks wouldn't typically be considered a powerful conflict: no lives are lost if you do not share that preprint. So, if you're confident it'd be illegal, then it'd be **unethical**, too.
On the other hand, if the author has explicitly stated, in a license, that they are OK with distribution, then you can be confident that you're complying with their wishes, and your actions would be both legal and **ethical**.
As always, it's in the grey area that it gets interesting.
To establish whether your ethics support taking the risk, you need to establish what weight your professional ethics place on:
* saving money for students;
* ensuring students have access to necessary materials;
* preventing outdated and incorrect material from being propagated;
* ensuring publisher profits;
* ensuring author profits;
* respecting author wishes;
* leading your students by your example;
* minimizing potential liability for your institution;
* minimizing potential liability for yourself;
* minimizing potential liability for your students;
* giving access to this specific work rather than similar ones;
* not writing your own replacement work;
* morality of action and inaction;
* doubtless more I'm forgetting.
The weighting you give to these factors, combined with the advice from your institution's legal dept, should point you towards a yea or nay.
---
Morality
========
You didn't mention personal morality in your question, but it is arguably an aspect of ethics. Like with legality, professional ethics typically require one to act within the bounds of personal morality.
But like legality, there are areas of conflict. We might help someone we are not legally permitted to aid, or in a way we are not ethically permitted to, because we feel morally obliged to act.
Consider a case where you know all your students to be too poor to buy the book, are too poor yourself, your institution is not willing to buy copies for its library, and you are unable to find a sponsor to buy copies; and you are unable to change the curriculum; and you know that this is the *only* book suitable for teaching the curriculum; and the book is so large or your skill so poor or time so short that you are unable to create teaching materials of sufficient volume and quality to replace it.
In such a case, you arguably have a moral imperative to ensure that your students can learn, and that there is at least a route for them to pass your course.
Some professors appear to navigate this maze of morality, legality and ethics by giving their students a warning not to obtain the book illicitly with a wink and a nod, as you see in the comments to the OP here suggesting using the Wayback Machine, or in [this widely-spread tweet by a University of Ontario professor](https://twitter.com/PaoloAPalma/status/1311373463383347201) recommending that students avoid a list of free textbook sites.
This places the ball in the students' court, giving them the information they need to pass the course illicitly. Some will miss even such an obvious message, but enough would get the point that the message will spread to most of them.
Some may choose to remain within the rules, and fail, learning that strong morals or ethics are punished. This might not be a lesson you wish to teach.
Others will learn that rules are made to be broken, and so will develop a similar attitude towards things like plagiarism. This too might not be a lesson you wish to teach.
Beware of such unintended consequences.
So, as with ethics, morality is a matter where you have to weigh the pros and cons, direct and indirect, and identify a path for yourself that navigates through your own personal maze of conflicts.
My personal feeling is that the various costs of distributing this preprint would be far too high, and I would not do it, if I were in your place. But that is my morality, and my ethics.
Upvotes: 4 <issue_comment>username_6: The ethical and legal concepts have been covered already. A solution is to request that the book, or a few copies of it, be stocked in your university library. Students can photocopy a few pages at a time from a library book, and even if not, it can simply be marked as a read-only/non-loanable library book, making it accessible to all students to read at any point.
Upvotes: 1
---
2021/07/06
<issue_start>username_0: I got my PhD in [data engineering](https://en.wikipedia.org/wiki/Information_engineering) in 2014 from a reputed university in a developed country. I am now an associate professor in a well-reputed university. Between my PhD and my current position, I took several postdoc positions in very good universities in several countries.
The papers with which I got my PhD (with distinction) are problematic. These papers were published in very good journals and proceedings. However, I didn't conduct the experiments as described in the papers. They aren't fully falsified: I had assistance, and some results are true only for specific parameters. I knew this when I published.
For more than two years now, I have felt terrible about this. I feel that my PhD is fake and that everything I am doing now is based on academic misconduct.
I now excessively try to make everything very correct in my papers and to be extremely honest in presenting the results. I often disagree with my coauthors when they want to publish a paper with weak experimental validation. They always argue that at this rate, we can publish only a few papers and we can never compete with our fellows.
I don't think this will ever be discovered. Even if you execute the experiments, you always find different results; the details of the approach are not given in the papers, so any missing detail can change the results.
What should I do now?<issue_comment>username_1: Separate two things: The guilt over having obtained your PhD in a dishonest way, and the guilt over having falsified papers in the literature.
First, the PhD. While you may not have deserved your PhD at the time you received it, it seems that you have accomplished good work afterwards. You are qualified for your role now, and there isn't really anything you can do to fix the past. If you need to atone, consider making a sizable (compared to your disposable income) donation to charity - maybe something helping disadvantaged students to access university education (in case you feel guilt over having "taken" a spot from someone else)?
Second, the papers. This is a different situation, because the continued presence of falsified results in the literature is an ongoing harm. You could perform the experiments now, and issue a correction with the actual results. If you have omitted any restrictions from the paper, issue a correction stating those. If it is not possible to fix a paper, you need to withdraw it. These actions obviously come with risks to your reputation, but there is no way around them if you want to do the right thing now.
Upvotes: 6 <issue_comment>username_2: Your guilt is rooted in the fact that you completed a PhD without taking the time to properly (according to your standards, standards that are higher than your colleagues and mentors at the time) complete the papers.
So you feel your current professorship is undeserved.
Ask yourself: if at the time you had enough time and resources, would you publish proper (according to your standards) papers?
From what you describe, I would say yes. And this is the most important teaching you have to share and enable with the M.Sc and PhD students you are tutoring, this is what makes you valuable in the academia. You learned it the hard (wrong?) way, by doing it, make this step unnecessary for others.
Give your students resources and time: budget for each PhD a buffer of 6 months to wrap up (in addition to whatever length is average in your country). Read the papers of candidates for the positions you have open; do not simply read their h-index, citations, or other metrics that are there just to be gamed (the way you published during your PhD is a consequence of these metrics ... it would be extremely irrational not to game them).
Regarding doing science, you are evidently on par with the others, since you obtained various postdocs and you landed a professorship, so you did not take a valuable spot.
Final note:
>
> if you execute the experiments, you always find different results and
> the details of the approach are not given in the papers
>
>
>
So your papers are very weak, or even inconclusive. Such papers are still needed in science, because they can trigger discussion. Is this a waste of money? In a world where *results* are the goal of science, yes; if you feel that way, you would be better aligned working in industry. In a world where *science* is the goal of research, one cannot expect high-impact results from all funded research...
Upvotes: 4 <issue_comment>username_3: >
> obtained with some assistance and some of them are true only for some specific parameters
>
>
>
>
> the details of the approach are not given in the papers so any missing details can change the results
>
>
>
If these are the worst parts of the papers, don't worry too much. It is a widespread problem in science that cherry-picking of "statistically significant" results occurs, and that reproducing results is difficult.
And indeed it is a big and important problem, but you are already on a good path by recognizing the shortcomings of your prior work.
**I would suggest that you check if your prior publications get referenced in other work.** If not, don't worry - the results apparently weren't very important, and the main result was your personal learning.
If they do get referenced, read the new publications to see if the shortcomings in your work could affect them. If it does, *then* you should work on making a follow-up that will point out circumstances under which the original research is valid, and also that will make it possible to more easily reproduce the results.
Upvotes: 3 <issue_comment>username_4: Cutting to the chase here, you cheated and you obtained very large benefits from your cheating. The first question, surprisingly unexplained by you, is exactly why you did this. Please say something about this.
Cheating carries unsuspected penalties. Building a career in academia is grueling and exhausting. People who jump through all the hoops successfully end up with increased strength of character, determination, and persistence. In fact they change remarkably during those long years of training.
When you cut corners or cheat to get ahead, you eliminate yourself from the group of people who came about their credentials the hard way and built the skills and character needed for a successful career. The demands don't stop with the PhD; for many people things get harder after that when they're faced with the larger demands of tenure, promotions, and more sophisticated research and publications. The competition continues too, such that those who aren't willing to put in the work or who don't have superior skills (because they didn't do their own academic work earlier) are culled from the pack. When things get really tough, those who learned honest toughness at the lower levels of their training pull ahead of the rest.
As you've found out, the psychological burden of cheating is large. There's small guilt and Big Guilt. How big is the fault in your case? Well, it's enough to get you ejected from your career and to have your PhD revoked. I'm really wondering why these thoughts didn't cross your mind when you were contemplating cheating. Please say more about this.
What to do now is a moral and philosophical question about your basic beliefs about life. For that reason, I suggest that you post your question in a moral and ethics forum or, better, get guidance from a skilled person like a therapist, life coach, minister, etc. I think you should start by asking yourself, "Why did I do this?" What does your conduct say about you? Are you happy with that? Next I would assess whether actual harm was done to others and the extent of the harm. Then, is it possible to make reparations to those who were harmed? Finally, ask how you can make this right within yourself. Many people have committed serious errors. They must find a way to make peace with their actions and move on. You will never forget this error you've made, but perhaps you can come to terms with it by correcting whatever flaw within you allowed it to happen and by accumulating enough positive actions to offset it.
At the extreme end, the hardest question for you is whether you would be willing to set everything back to moral zero by admitting your mistake to your PhD institution and accepting the consequences, which I think would be grave. This is a complex moral question that can only be answered with considerable introspection.
Upvotes: 3 <issue_comment>username_4: Reading your post again, I see that your position is that you've reformed yourself and you're now doing honest work. What concerns me about your reform is that it was done out of fear---"I got a phobia." Though you say that your misconduct (your word) can't be discovered, it appears that your fear is based on the risk that it will be discovered and that you will suffer the consequences. You ask to not be judged. It's hard to withhold judgment when you haven't expressed any specific moral regret about your actions.
Data engineers deal with things that directly impact public safety, like self-driven cars. Now imagine that your surgeon, who's about to repair your heart valve, got his medical degree the same way you got your PhD, by not doing proper research or falsifying his studies. Hence he didn't really get the training he needs to operate on you. This is a problem, no? If he teaches in a medical school that's a bigger problem.
Since you're convinced that your misconduct can't be detected, what exactly is the problem in your eyes? You seem to be looking for exoneration. What exactly would justify exoneration in your case? Is there some special circumstance that would exempt you from the usual penalties in cases of academic misconduct?
Upvotes: 1 <issue_comment>username_5: Support reproducibility
-----------------------
First, I subscribe to @username_1's answer and I have already upvoted it. I am also in data science at a reputable Western university and what he/she advises are the standard practices in this field. I would just add that, if you find that the journals where you have published your papers do not accept corrections, you can post the corrections on your web page, along with links to the original papers.
Moreover, you may want to learn about the **reproducibility movement** and promote it more systematically with your collaborators or in the classes you teach. For instance, I teach a project class about research practices; in it we also discuss publishing reproducible research. Advocating for reproducible research will not fix your previous papers, but will improve your field of research as a whole, and will make it less likely that the misconduct you committed is accepted (see e.g. Donoho's paper below).
One **classic article** I use is
[An invitation to reproducible computational research by
<NAME>](https://doi.org/10.1093/biostatistics/kxq028). In machine learning, from 2019 on, conferences have introduced reproducibility checklists and reproducibility challenges, as shown in [this paper](https://arxiv.org/abs/2003.12206). For a behavioral research themed discussion you can look through the posts of [datacolada.org](http://datacolada.org/). I am citing the last source because psychology was one of the first fields where the reproducibility crisis was openly recognised, and you are more likely to find concrete suggestions.
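To make this concrete, here is a minimal sketch of the kind of practice those reproducibility checklists ask for (the function names and the toy "experiment" are my own illustration, not from any of the cited sources): seed every source of randomness explicitly and record the run environment alongside the result, so anyone can rerun the analysis and get the same numbers.

```python
import json
import platform
import random
import sys


def run_experiment(seed: int) -> float:
    """Stand-in for a real analysis; only the explicit seeding pattern matters."""
    rng = random.Random(seed)  # local RNG, so the seed fully determines the output
    return sum(rng.random() for _ in range(1000)) / 1000


def reproducible_run(seed: int = 42) -> dict:
    """Run the experiment and record everything needed to rerun it later."""
    return {
        "seed": seed,
        "python": sys.version.split()[0],
        "platform": platform.platform(),
        "result": run_experiment(seed),
    }


if __name__ == "__main__":
    # Two runs with the same seed must agree exactly.
    a = reproducible_run()
    b = reproducible_run()
    assert a["result"] == b["result"]
    print(json.dumps(a, indent=2))
```

Publishing a record like this (or a full environment file) next to each result is a small habit, but it is most of what the checklists demand.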
Hope this helps. Science makes mistakes, but it corrects itself. Humans make mistakes, but they can learn to become better. [Please allow for the following less than objective insertion: I
respect you very much for caring enough to do something about research ethics.]
I also suggest that, if you aren't doing it already, you take on peer reviewing as a service to the community. At the serious journals and peer reviewed conferences, your work ethics will be valued.
Upvotes: 4 <issue_comment>username_6: The way you have framed this question, you make the "black and white" answer seem to be: come clean and announce to the world publicly that you committed some kind of misconduct with the papers you wrote and issue a correction.
However there are several factors here that I am reading between the lines that make me doubt that the "black and white" answer really applies here.
1. Guilt -- especially guilt over something that has been festering -- can skew your perspective on how important something is. It's not clear to me, in objective terms, how serious the omission is, but my suspicion is that it is fairly minor. To me, "major" would mean that the main result of your paper is not reproducible even in principle; "minor" would mean that you did not provide enough information to fully reproduce the results but that the main conclusions are correct and reproducible (perhaps with some additional input that can't be reconstructed from the text of the paper).
2. Our own faults are very clear to us, while the faults of others are often less obvious. Are you sure that your assessment of your work is fair, in the sense that other papers in the literature do not contain similar omissions/simplifications? This is particularly the case since according to your description, your standards are higher than many of your peers.
3. Science can be brutal, and we are hard on ourselves. People make mistakes, and deserve the room to learn from those mistakes. Be kind to yourself.
Furthermore, it's important to be accurate in your assessment of this, because "going too far" in the direction of "apparent good" can lead to damage to your reputation -- if you tell people you are guilty of something, people will tend to take you at your word. Admitting to falsifying results in a paper, even approximately falsifying them, is a pretty major offense in the scientific world.
I think you need to evaluate as honestly as possible, without guilt, whether the papers currently in the literature are actively causing harm. Are people citing these papers? Are they building experiments based on the parts you know are wrong? And... realistically, do you believe your omissions are significantly larger than others in the literature in your field? If not, then let it go. I promise you, you are not the only one who has done something like this, and from your description I seriously doubt that what you did is the worst thing that exists in the scientific literature.
If there are really wrong results in those papers and those wrong results are being used as a basis for follow-up studies today, then that does make the situation more complicated. But I would still proceed cautiously. I would publish an erratum or some updated paper addressing the issue, but I would strongly recommend to keep the focus on the science, and not admit fault or falsification of the results. Just state that you became aware of errors or omissions and you are submitting a clarification.
The problem with raising the issue *now*, particularly if you frame it as something serious like "falsification", years and years after the paper was published, is that (a) people will wonder why you are bringing it up (they may question your motives), and (b) you bring attention to the issue. If harm is not actively being caused, and your guilt is causing you to overestimate how significant the actual offense was, then there is a serious risk that the net result will be self-sabotage without helping yourself, anyone in the community, or, I would argue, even "the cause of justice" in an abstract sense.
You seem to be a very serious and dedicated scientist. Focus on doing good science and bringing positive and exciting new results into the world. Don't let this consume you.
Upvotes: 3 <issue_comment>username_7: A lot of answers here are suggesting a middle-of-the-road fix - donate to charity, note that you've done good work since, etc. Quite frankly, this will (probably) not work.
If you had made the mistake of thinking the papers were of sufficient, publication-worthy quality, not knowing of any errors, then you could quite easily fix the issue by issuing corrections, withdrawals, etc. But this is not the case.
The issue here is guilt. You feel guilt for your intentional wrong doing. Giving any amount of your income to charity won't help with the guilt, because it doesn't rectify it. Here's what could:
1. Redo the bad work in full, at your own expense
2. Go back to your issuing body and come clean. Explain if there were any circumstances that lead to your poor judgement, but own up. Tell them you have since fully corrected the work and are ready to issue corrections etc.
3. Follow the advice/suggestions/enforcement they tell you to
This will mean you've not only paid for your wrongdoing, you've also made good on your previous commitments.
Upvotes: 3 <issue_comment>username_8: Let's break down the aspects:
* You (co-)authored a paper which contains scientific misconduct that you committed
* You co-authored other papers which did not meet your standards
* You correctly authored other publications on which your PhD is based
* I assume that, even after removing the tainted contribution, your PhD holds enough merit to justify it
Let's see what you could do:
* Ask the ombudsperson of the institution in question
* Inform the editor of the tainted paper on the nature of the misconduct and ask for retraction if you believe that the core content doesn't hold
* Inform the editor about the fact "that the results are not reproducible" and suggest a correction
* Write a comment (if such a mechanism exists in the publications in question)
However,
* you should try to do this via the corresponding author
* I read between the lines that the ethical standards of your co-authors may not be that high, so be prepared that they will block it or, if you circumvent this by going to the editor directly, try to throw you under the bus, which is more problematic if your PhD supervisor is directly involved (they may try to attack your PhD).
When it comes to your PhD, it depends a little bit on how much of it was tainted and whether it was hard misconduct (falsification, fraud, plagiarism) or soft misconduct (cherry picking, as far as that is usual or expected in the discipline). Imagine throwing away all parts of your PhD thesis affected by potential retractions and see if it still has enough value.
If you are independent enough (e.g., a postdoc in a different field) and most of your PhD is not affected by the really problematic parts, you can ignore the personal feelings of your former workgroup, though that also depends much on the location of the university.
Upvotes: 1 <issue_comment>username_9: I don't know what you can or should do here as regards how you feel about things.
But one thing.
Do not become the type of professor who drives other people nuts for "not doing things right".
1. It won't get to the heart of your own guilt - it will just enlarge it.
2. It will cause other people in your department to wonder about you, why you are the way you are and your background - personal and professional. This could lead them to suspect what you already know about your doctoral work and its shortcomings. In academia, shared suspicion can be as damning as proven guilt.
On a human level, have you discussed this with your husband/wife or any close academic friend?
Upvotes: 2 <issue_comment>username_10: >
> they were obtained with some assistance
>
>
>
That is not bad. Scientific research is usually conducted in groups. If you are feeling guilty about not crediting someone you got help from, consider that if they were relevant enough as contributors, they would have been included as authors (PhD students, in my experience, are not the ones who choose that).
If they were not, it is true that you could have included them in the "acknowledgements" section (if that is used in your field), but that is mostly ornamental when you are thanking people, so you shouldn't worry. You could always thank them, probably even now.
>
> and some of them are true only for some specific parameters
>
>
>
That is not fabrication. If you didn't provide enough information to reproduce your results, it just means that people may refute you some day. If you did provide the necessary conditions, it makes those papers more specific (and less interesting), but they are still valid.
What you seem to be worried about is that you knew about these limitations, decided to "hide" them, and feel that you were dishonest. There's always the option to publish a new paper continuing your research on that topic, explaining how those elements affect your results (be as exhaustive as you need this time!). That would certainly remove the risk of someone extrapolating your results (although again, if they did, they would just find out that what you published is not as general as stated).
Upvotes: 1 <issue_comment>username_11: From the point of view of outside of academy: stop worrying, fix the problem, and learn techniques to work with pangs of guilt.
* Stop worrying: these things happen. I guarantee you that 99% of all adults have some kind of skeleton in their closet that gives them a nice little adrenaline kick every few years when they remember it. Even if it's just some careless comment they said at an inappropriate moment. Shame and guilt are really common feelings that are built into all of us (or at least hopefully so). Life goes on.
* Fix the problem: don't overthink it. Gather a few ways to tackle it, make it objective and not subjective, pick one way to go, and do it. In your particular case, although I am not familiar with the ways to get add-on articles into those publications, I could imagine you writing further papers which expound the issues from the previous one. Do not bother worrying about somehow "setting it right". Science is based on the principle of creating falsifiable theorems, and then by all means falsifying them when possible. If you really feel that you must comment on it, you can write a half sentence in your preface (e.g., "this paper fixes some systematic errors in the previous one"...). Nobody cares or should care how these errors came up or your personal story. Obviously, don't lie or make up an elaborate story about things nobody except yourself knows anyways.
* Techniques: one technique to work on things like guilt and shame is [mindfulness meditation](https://en.wikipedia.org/wiki/Mindfulness). There are plenty of possible ways to tackle that; the usual approaches are dead simple to perform ("watching the breath"). There is [nothing religious or mystical](http://www.wisebrain.org/papers/MindfulnessPsyTx.pdf) about this; the beauty is that it is fundamentally an experimental approach. You immediately see what happens, you do not have to believe anything at all. You do not need to "make believe" (this is not about imagining energy balls or stuff like that). Granted, I know people who hate stuff like this with a passion, but you might as well see if that resonates with you.
Upvotes: 0 <issue_comment>username_12: ### Exploit your unique\* experience to better the world
\* Ok academic misconduct is definitely not unique but go with me here
I've seen people who change careers, or people who have done regrettable things, do things using their past experience that nobody else would have thought of. Whether it's a dentist that becomes a software developer and solves some major problems in his industry, or a murderer that becomes a lawyer or journalist and fights for a better system, these people bring a unique insight that allows them to do things most people can't.
You would be better placed than most people to answer questions like:
* how widespread is the problem in your field
* what impact does it have
* what factors contribute to the psychology of academic misconduct
* what can be done to change the culture
* what can be done to minimise the harm/detect occurrences
* what is an acceptable level of rigour (i.e. what is the sweet spot between invalid results and slowing down valid scientific progress in the field)
Trying to impose standards on your collaborators seems a bit of a waste of potential. If the system is generating strong pressure to fabricate results, people within the system should be having open discussions about how to improve the system. You could be one of the people driving these conversations across your field as a whole.
### To go public or not?
Of course, going public with what you have done puts your career at risk. It is not something to do lightly. But remember, how you frame things can make a big difference to how they are received. If you tell people you fraudulently obtained your PhD, you will likely be in a lot more trouble than if you tell people you cut some corners you shouldn't have, you know you're not the only one and you want to do something about it.
A useful question to ask yourself is, what would you want someone else in your field to do if they were in a similar situation? Then, perhaps you should lead by example and do it.
Another useful question is, how would you want the academic community to respond to someone else in your field voluntarily admitting to a similar level of misconduct? A slap on the wrist? An investigation? Revoking the PhD? A dialogue? Although there are no guarantees that something *worse* won't happen, at least you'll be able to make an argument for why you think a certain thing is appropriate.
The final question is: what are the consequences of staying silent? Are people who read your PhD papers in turn going to produce invalid results or significantly wrong opinions? Or is it niche/obsolete work that won't affect anyone much?
(Actually there is one more question: would you jeopardise anyone else's career if you admit to the misconduct?)
Instead of going public you may want to write up your story and publish it anonymously, in a place where people from your field gather, and perhaps participate in an ongoing conversation about how to improve the problem.
Upvotes: 2
| 2021/07/06 | 690 | 2,853 |
<issue_start>username_0: I've been checking the websites of different UK universities for teaching assistantship benefits. None of the websites I've visited so far disclose the specific benefits that TAs receive. So, I wonder if it is unusual for UK universities to waive the tuition fees of their TAs.<issue_comment>username_1: I have studied/worked at three different UK universities. I have never heard of tuition fees being waived for TAs. Usually TAs are paid an hourly rate in addition to any funding package/scholarship they may have. There are almost always restrictions on how many hours a graduate student can work at the university; the maximum amount of money you could make with these restrictions would be much lower than the tuition fees. It is also not guaranteed that you get a TA position.
Upvotes: 2 <issue_comment>username_2: It's not *unusual* because it doesn't happen at all. I think you have a fundamental misconception of how funding and tuition for PhD students work in the UK compared to the USA.
Firstly, if you are offered a PhD place in the UK, this should come with the costs of tuition covered and an annual tax free stipend (~£15,000). You never receive any money for tuition and you never directly *pay* tuition. That transaction goes on behind the scenes between your funding body (be that research council, department or PI) and your university. The stipend gets paid into your bank account every month like a salary. That's what you live on. It is completely unconnected to any teaching you may do.
Secondly, the concept of a "TA" does not really exist in the UK. It would be very unusual for a PhD student to teach a whole lecture or lecture course, as I understand happens in the US. The most I had to do as a PhD student was mark undergraduate coursework and exams, which took up a few hours of a single week, twice a year. Once I sat in the corner of a computer lab for a few hours helping the students with Python. That was it. The vast majority of my time was spent on research.
Doing this type of undergraduate marking and lab demonstrating may not even be compulsory for you as a student (it was for me because there were a lot of undergraduates so we all had to pitch in). Regardless of whether you have to do it or not, if you are doing it you will be paid to do it. The mechanism by which this occurs will likely differ between universities, but we filled in a timesheet and got the money along with our stipend each month. The hourly rate was excellent too, something like £17 an hour. But you'd only do a few hours so this kind of marking/demonstrating would not give you anything like enough money to live on. That's what your stipend is for.
I think this is the main difference to the USA, where you are paid primarily as a TA and those duties come first, above and beyond research.
Upvotes: 2
| 2021/07/06 | 2,230 | 9,485 |
<issue_start>username_0: The paper in question is an analysis of government statistics to answer a question in sociology. The analysis was applied to a single government survey of a large population.
The *sociologist* has limited knowledge of statistics. He wrote the introduction, literature review, part of the discussion, and the conclusion. He also listed the variables to be analyzed by a hired statistician.
The *statistician* wrote the entire results section and a summary for the discussion in his own words, accounting for about 50% of the paper. He designed the analysis and ran all of the calculations.
Is it ethical for the sociologist to list himself as the sole author of the published study? What is customary and ethical in this case?<issue_comment>username_1: It would not be ethical to claim sole authorship in this case and the statistician should be listed as an author. It's always a good idea to get authorship sorted out as early in a project as possible to avoid problems later.
In the fields I'm familiar with (STEM) anyone who made a substantial intellectual contribution to the paper should be an author. Their contractual/payment status shouldn't be part of this decision. The [Guidelines for Authorship](https://www.research-integrity.admin.cam.ac.uk/research-integrity/guidance/guidelines-authorship) from the University of Cambridge, and the references cited there, elaborate on this.
From a personal point of view, if questioned on the details of the statistical part, answering that you would need to check with the person who did the work would feel pretty awkward to me if they weren't an author. At least if they are an author they share responsibility for the results, otherwise it's all on you.
Upvotes: 5 <issue_comment>username_2: **The statistician has a very good case for being included, but not absolutely clear-cut.**
The [British Sociological Association (BSA)](https://www.britsoc.co.uk/publications/guidelines-reports/authorship-guidelines.aspx) lists a number of criteria for deserving authorship. The BSA criteria are quoted below (emphasis mine). They are similar to the [Vancouver Protocol](http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html), which relates to medical research but is often referred to beyond medicine.
>
> 1. Everyone who is listed as an author should have made a substantial
> direct academic contribution (i.e. intellectual responsibility and
> substantive work) to **at least two** of the four main components of a
> typical scientific project or paper:
>
>
> a) Conception or design.
>
>
> b) Data collection and processing.
>
>
> c) Analysis and interpretation of the data.
>
>
> d) Writing substantial sections of the paper (e.g. synthesising findings in the literature review or the findings/results section).
>
>
> 2. Everyone who is listed as an author should have critically reviewed
> successive drafts of the paper and should approve the final version.
> 3. Everyone who is listed as author should be able to defend the paper
> as a whole (although not necessarily all the technical details).
>
>
>
If a contributor fulfills each main criterion, they **must** be included as author.
If a contributor does not fulfill each main criterion, they **must not** be included as author.
If a contributor fulfills some criteria but not others, they should be **acknowledged**. (This is how I interpret the [clause](https://www.britsoc.co.uk/publications/guidelines-reports/authorship-guidelines.aspx) "all those who make a substantial contribution to a paper without fulfilling the criteria for authorship should be acknowledged.")
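This three-way rule (all criteria → author; some → acknowledgement; none → no credit) can be sketched as a simple decision function. The booleans below are my own simplification for illustration; they are not part of the BSA text, which requires more nuanced judgment:

```python
def authorship_status(made_substantial_contribution: bool,
                      reviewed_and_approved: bool,
                      can_defend_paper: bool) -> str:
    """Map the three BSA-style criteria to a credit decision (illustrative only)."""
    criteria = [made_substantial_contribution, reviewed_and_approved, can_defend_paper]
    if all(criteria):
        return "must be listed as author"
    if any(criteria):
        return "should be acknowledged"
    return "must not be listed as author"


# The statistician in the question: clear substantial contribution,
# likely approval of the final version, arguably able to defend the paper.
print(authorship_status(True, True, True))  # → must be listed as author
```

Of course, the real judgment is in deciding whether each criterion is met, which the rest of this answer works through.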
Paraphrasing from the question and comments:
>
> * The statistician wrote half of the paper
> * The statistician designed the analysis and ran all of the calculations
>
>
>
The statistician clearly fulfills criterion 1, as they contributed to design, data processing, analysis, and writing.
Since they contributed a crucial component of the paper, they arguably would be able to defend the remainder too, as in criterion 3.
It's not clear whether the statistician approved the final version, as in criterion 2. But it seems likely.
Obviously, the statistician should at least be acknowledged, since they fulfill some criteria. But all things considered, the statistician has a pretty good case for being included as co-author as well, in particular when comparing their contribution to that of the sociologists (which I have not done here).
Upvotes: 4 <issue_comment>username_3: I do not know what the standards are in sociology, but the contribution of the statistician looks quite substantial to me. Without even looking at standard guidelines as other responses do, from my gut impression that the contribution you describe is well above the threshold for co-authorship.
Leaving them out feels quite wrong; even mentioning them only in an acknowledgement feels like a downgrade. It is perfectly fine for not everyone to be an expert in everything and for different people to provide complementary contributions, but leaving them out is just plain wrong.
There is a reason why interdisciplinary research usually has a hard time being justified - one often either drastically over- or underestimates the expertise contribution of the neighbouring fields, leading to a distorted view of the intellectual value of external contributions (in both directions). [yes, I know, statistics is well established in sociology, but the present question precisely reflects the classical
interdisciplinary dilemma]
Upvotes: 4 <issue_comment>username_4: At this point I will offer what I think is an authoritative answer to the question. I consulted with a senior sociologist who conducts quantitative studies. Here's what he told me: (1) It's common for sociologists to hire statisticians because they don't have and aren't expected to have advanced skills in statistics. (2) Normally the statistician only prepares tables and charts. The statistician does not write a long, publication-ready report. (3) If the latter occurs, the statistician would be a co-author. (4) What is more expectable is that the sociologist has the skill to read and interpret the statistician's work, and then the sociologist writes the narrative results and discussion sections. In that case the sociologist takes sole authorship and might credit the statistician in an acknowledgment. (5) Claiming sole authorship in the present scenario would be plagiarism and it would invite other serious risks, like being unable to discuss the study intelligently in a job interview. (6) In order to claim sole authorship in the present case, the sociologist would need to substantially rewrite the statistician's report, not publish it verbatim.
The senior sociologist questioned why this study was undertaken in the first place, since the junior sociologist has so little knowledge of statistics. There's a case for that, but I also think a junior sociologist could begin working with statisticians to increase skills and learn the ethics of authorship. Not having statistics skills is a handicap for a sociologist, since the answers to many pressing questions in sociology are contained in government statistics.
I'm satisfied with the senior sociologist's answer. It doesn't refer to any formal authorship codes, but it's a good answer from the field. I appreciate everyone's input on this. Thank you!
Upvotes: 3 <issue_comment>username_5: Just to add a search term to the discussion:
Not including someone who did make a substantial intellectual contribution to the study/paper is called [**ghost authorship**](https://en.wikipedia.org/wiki/Ghostwriter#Academic) and is as such unacceptable.
Academic ghostwriting is not just a matter of the rights of the ghostwriter (who in many jurisdictions can *legally* agree to their name being excluded from the author list), but
* it leaves the remaining authors misrepresenting their contributions as including those of the ghost author. This is plagiarism (even though the text is novel and original).
* it can hide possible conflicts of interest.
Upvotes: 1 <issue_comment>username_6: The BSA criteria for authorship seems like a solid starting point. Indeed if the statistician processed the data, wrote a substantial part of the work and presumably devised the analysis method themselves,it would be a case for acknowledgement or even authorship. But it depends:please take a look at @username_2 's criterion no. 2 : the statistician should have seen/ approved of the final form of the paper.It seems trivial but what if someone hires you, a specialist in your field, to do a certain analysis- and you do that , and presumably receive everything, including authorship and some form of remuneration but never get to see the final work? Maybe they used the information you provided in a context you feel it is not correct. Maybe they "load" you with some opinions and interpretations you disagree with, maybe they would misuse your work, or draw the wrong conclusions from a good analysis. And you could potentially be called out for something in "your" paper that you did not effectively sanction -because you never had the chance. So I make a case for "acknowledgement" not "authorship" if the (major) contributor did not have any..editing options, as in rejecting or requiring modifications to be made so as to reflect their own opinion and analysis.
Upvotes: 0
| 2021/07/06 | 3,111 | 13,264 |
<issue_start>username_0: My impression is that some US universities at one time required producing a translation of a non-English language article in order to receive a PhD. As far as I am aware, this practice started before World War II when English was not as dominant a language in science as it is today. The requirement seems to have been removed at some point after World War II, and at some universities not until after the Cold War ended.
Does anyone know about the history of this requirement? Why was it added? Why was it removed?
Not much can be found online about these sorts of requirements. [UC Berkeley](https://grad.berkeley.edu/program-proposals/state_supported_masters_phd_lang/) has a page on what sounds like a similar requirement, but only for some programs. And it sounds like the concern is more knowing a foreign language than necessarily producing a translation. Perhaps producing a translation was only one way to satisfy a (now removed) foreign language requirement.
I recall speaking with someone who got a PhD in math from NYU in the 90s. They were required to write a translation, though as I recall they described the process as more of a formality and did not believe it continued long beyond their time at NYU.
Also, I spoke with a European academic interested in translations who was not aware of similar requirements in Europe. If anyone knows of similar requirements in Europe and their history, I'm also interested in this.<issue_comment>username_1: Some personal evidence.
In the 1960s I had to show that I could read mathematics in at least one of French, German or (I think) Russian. That was a reasonable requirement at the time - much historical literature that was still relevant called for that kind of literacy.
The exam did not call for a formal translation. I just had to make reasonable sense of a page or two of mathematics - not highly specialized. Presumably if I could do that I could master the details of any paper I really needed to understand.
Upvotes: 6 <issue_comment>username_2: To comment on the "why": at a certain point a lot of work in many fields, including mathematics, was untranslated from the original into other languages. There was lots of critically important mathematics written in French and German, among other languages.
Russia did important work also, but a lot of it was hidden from much of the rest of the world due to the Cold War. Some things actually had to be independently replicated in Russia because of the Cold War. St. Petersburg (Petrograd, Leningrad) was one of the centers of deep math thinking. IIRC, the Simplex Method was discovered independently there.
The same was true of other fields with a bias toward those same languages - philosophy, for example. But Spanish or Portuguese, maybe Italian, might have been more critical for someone studying some aspects of Literature, though translations of non-technical work came a bit earlier than scientific work.
But translation is more ubiquitous now and there is a bias toward English publication that didn't exist before. If you are a scholar in Germany at the moment, you are likely fluent in English. Maybe more so than lots of native speakers.
Upvotes: 5 <issue_comment>username_3: Echoing other answers, with a few more details:
Yes, in my own experience and direct observation, it was typical at U.S. R1 universities, in mathematics, to require "reading knowledge" in two of French, German, or Russian. As others have said, this was because there simply were no English-language primary sources (nor translations) for lots of important things. I think it's fair to say that German and French were more important languages for mathematics (not counting Latin!) from 1770 until 1945 or so.
For example, in the mid-1970s, most of J.-P. Serre's high-level expositions from the Collège de France and elsewhere were only available in French. Some of them started being translated into English in the late 1970s.
Much of <NAME>'s work was written in German, and was not translated. Indeed, in those days, so many math people could read German that there seemed to be no reason to translate it.
(In contrast, relatively few people in the U.S. and western Europe could read Russian, which surely helped motivate the Amer. Math. Soc.'s translation program for various Russian sources.)
In 1977-79, P. Deligne's contribution to the Corvallis Conference (held in Corvallis Oregon) (AMS Proc Symp Pure Math 33), was written up in French. At the time, this seemed a little idiosyncratic for an English-language conference, all whose other contributions were written up in English. But no one got excited about it, because essentially everyone in the audience (at the conference, and for the conference proceedings) could read/understand some French, at that time.
In recent years/decades, the utility of being able to read more than in-English mathematics has decreased, not only because more things are written in English, but, also, because translation software is often more efficient than a weak grasp of the languages (moderated by some sanity checking about technical things).
So, at my R1 U.S. univ, in math, we had reduced the language requirement from two to one some years ago, and will reduce it to $0$ in the immediate future.
Yes, this does cause some bottlenecks for my PhD students, because it's still not the case that everything is available in English. I am indeed quite glad that I can read math in French fairly easily, and in German with some effort. I regret not learning a little Russian when I was younger, too, but it was rarely available in the U.S. in those times.
Upvotes: 4 <issue_comment>username_4: A few years ago I pushed rather hard, with ultimate success, to end one of these language requirements at an R1 university math dept as a grad student representative. More than anecdotes would make for a better answer, but I've never found an authoritative synthesis on the history of the requirement or even a careful summary of its current status. Perhaps someone will find what I managed to gather useful.
My recollections are from 2018, exclusively for Math PhD programs. They're nowhere near authoritative, but I gave soliciting information from a wide range of people the old college try at the time.
**Why was it added?**
I've never seen a historical source for the original motives, but talking around both locally and more broadly at conferences, I never heard any deviation from what other answers report: Translation from other languages proved academically useful in the past because a significant proportion of relevant material was not written in English.
**What is the status of this requirement?**
The status was all over the place in 2018. For some hard names, here's a list of schools I compiled at the time for political purposes, though perhaps they've changed now.
Without:
* University of Illinois Urbana-Champaign
* Georgia Tech
* UC Santa Barbara
* UC Berkeley
* Florida State University
* Duke University
With:
* UNC Chapel Hill
* University of Miami
* University of Virginia
* Harvard
* Princeton
What I gathered from local observations was that the requirement has been on a slow rather than abrupt decline. Older academics occasionally reported specific examinations were more extensive than modern ones, e.g. requiring two languages rather than one. The history of the specifics differs on a per-department basis, of course.
The requirement has certainly not been entirely phased out everywhere at the time of writing. For instance, despite my success at ending the requirement in my department in 2018, that change applied only to future PhD classes; my peers and I still had to take language exams. Students with language requirements are still on the books.
**Why was it removed?**
Writ large, the argument that worked was the obvious: The proportion of relevant material not written in English has declined to the present day.
Some secondary points that arose:
* Requirements incur a cost, namely paperwork and occasional graduation delays.
* The specific languages (usually French, German, and Russian in math) were vestigial remnants of the original intent. While there was some suggestion that we could debate a change to the list, ultimately this would have induced more work.
**Why has it lasted so long?**
On the one hand, what I gathered from my investigations wasn't revelatory in this respect. Bureaucracy moves slowly. Democratic mechanisms like faculty votes demand energy to change the status quo, whatever it is.
One interesting secondary point I gathered: whether it generalizes to other universities, I can't know for certain, but I have the impression that most operate in a structurally similar way. In departments where the work to maintain requirements mainly gets placed with short-term faculty directors (of graduate studies, etc.), almost nobody has an immediate incentive to remove outdated degree requirements. The most burdened voter, the director in question, will probably have to deal with the removed req for another 5+ years. That's longer than most graduate directors' terms.
Upvotes: 5 <issue_comment>username_5: You asked for a contemporary example in Europe: Russia still requires PhD theses to be translated into Russian in order to get foreign PhDs officially recognized in Russia.
Apart from that, there is a list of foreign universities whose PhDs get automatically recognized in Russia without translation. There is also a list of Russian universities that can recognize foreign PhDs on their own, without approval of the central authorities (VAK). However, neither list is particularly long, and not all universities that have the right to recognize foreign PhDs on their own also make use of this right.
Upvotes: 2 <issue_comment>username_6: I did my Ph.D. in ~2000 at a math department (in the US) that required demonstrating reading proficiency in another language, as evidenced by *being able* to translate a mathematical article in your field or a closely related one, to one of the professors in the department who was professionally comfortable in that language.
There were those who preferred to do so by writing out a translation of their chosen article. Others preferred to show up with the article and *viva voce* translate at the random spot chosen by the faculty member, say 1/2 page worth.
Our general departmental culture was very collegial, so if you did at all a creditable attempt, you passed. Just like it was hard to actually fail our comprehensives. There were ongoing friendly arguments whether the oral or written approach was "better", which depending on the disputant could mean "easier" or "more creditably demonstrating sufficient mastery".
The arguments in favour of "written", which seems to be the context of the question, were from both angles:
1. An acceptable written translation better demonstrates you sufficiently understand nuance, and provides documentary proof thereof (can be filed away).
2. (From the other side) Written can actually be easier, since you can look up in a dictionary, take your time, redo part, as opposed to "being on the spot" orally.
Upvotes: 2 <issue_comment>username_7: An anecdote from a relatively recent experience:
I completed my PhD (pure mathematics) in 2017 at a Big 10 university in the Midwestern US. At that time I was required to complete a language exam in either French, German, or Russian, none of which I knew anything about. I decided to go with French since I do know some very basic Spanish, and the syntax is fairly similar.
The graduate chair told me I was to find a 10-page excerpt of an appropriate mathematical paper or textbook and take it to a specified French-speaking mathematician in our department for approval. I selected a chapter from a text on the differential geometry of surfaces, and it was approved.
The French-speaking mathematician then told me to take a few days and do my best to create an English translation of the excerpt. I was permitted to use whatever resources I wanted, even translation software. I did it mostly with Google translate and the assistance of a standard English-to-French dictionary.
I took the original excerpt and my translation back to him, and we reviewed them together. He asked me a handful of questions about my translation, and also asked me to read specific passages from the French original and describe what they said without referencing my translation. The whole meeting was very informal and took less than 10 minutes.
The point was basically to demonstrate that you have the ability to understand mathematical work not written in your own language, and I did that, so I passed.
This requirement was mildly annoying but not very burdensome, so students in our department have generally not pushed back much against it. Also, I believe (but I'm not completely sure) that students who speak English as a second language are exempt.
A final comment: I know of a pair of PhD students at another Midwest Big 10 university who petitioned their department to allow them to take their language tests in Italian. They were studying PDEs, and apparently much of the relevant literature to their work was written in Italian, rather than French, German, or Russian.
Upvotes: 2
<issue_start>username_0: I am a European master student and recently accepted a PhD position (Computer Science/Machine Learning) in a German speaking country. I did multiple research internships in Europe during my master's, and I would argue, they define me more than my actual degree. To "onboard" with my university, the HR wants to know if I have relevant work experience (to determine my salary) and to back them up with e.g. what in German is called an "Arbeitszeugnis" and translates to a recommendation letter or reference by the employer. It was then that I realized that except for my signed contracts, I do not have any certificates or "open" reference letters from my research internships and also that in more non-academic or not so highly competitive settings, reference letters could be asked to be sent with the application by the applicant. Therefore, if for any reason I would drop out of my PhD, I think having an unaddressed reference letter by hand could come in favourable.
All my internship supervisors agreed to act as references for me and I have listed them in my reference sheet. I have no problem asking them to quickly write an official document certifying that I worked as an intern within their group, which will be enough for my university. Asking for an unaddressed recommendation letter, however, conflicts with my strong perception that in the academic world showing / giving recommendation letters to the applicant themselves is regarded as highly non-professional (the reference may not be honest!). At least, this is what I gathered from the attitudes of the professors in the research institutions I worked in and hence, did not at all think of the possibility and necessity of ever asking for such a letter.
Therefore, my question is, do you think it will hurt my reputation if I would approach my supervising professors and ask if they could send me an unaddressed letter of recommendation for later usage? Or, do you think it won't be necessary to have such letters of recommendation if I acquire a certificate of my employment, because I can always refer to the contact details of the professors in my reference sheet. As I am the first in my family to obtain university level education, I am unsure if I maybe misinterpret the "business world" (particularly in Europe, but also the US) and employers at the master or phd level will never ask for direct letters of recommendation?
One could argue that it is premature to ask for such "open" reference letters and I may only ask if I really need them. However, I am talking about research internships from one and three years ago and hence think I should ask now rather than later, before all my work is "forgotten".
I would very much welcome any kind of feedback / opinions from any knowledgeable person :)<issue_comment>username_1: I would have no problem providing a generic open letter of recommendation. Other academics might. If you ask respectfully a "no" answer should not have any negative impact on future requests for particular letters.
I regularly share letters of recommendation for students with the students. They have usually waived their right to see them, but I have not waived mine to show them (unasked).
Upvotes: 4 [selected_answer]<issue_comment>username_2: An alternative to an open letter, that has its own advantages is as follows.
First, tell people that the time will come in the future that you will want/need their support in the form of letters. It is best if you do this when actually requesting a letter so that they have an opportunity to archive it for future reference.
More important, however, is to keep in contact with such people over time, even to the extent of collaborating if the opportunity presents itself. A circle of contacts (especially collaborators) is one of the most valuable assets of a researcher. It is a place to share ideas and get feedback. It is a way to get invited in to valuable discussions. And then, you don't need to worry about finding people to support you.
An unaddressed, but dated letter is only valid up to the point it was written. If time has passed then is says little or nothing about your current abilities.
Upvotes: 2 <issue_comment>username_3: I think you are complicating the situation by conflating two things.
1. You would like to be able to provide evidence to support the entries in your CV.
2. You want references attesting to your capability.
For the first, I think it would be reasonable to ask your boss/supervisor to provide a 'To whom it may concern' letter on headed notepaper recording the facts ("TheOtherStudent was engaged as an intern within the ABC team of XYZ corporation from ... to ... "). I am not sure whether anyone will ever care much about seeing this, but I don't see any problem with requesting or providing it.
For the second: in reality, references need to be recent and verifiable. As such, I don't think employers will be that interested in the sort of letter that you envisage. If you need a reference in the next couple of years, there will almost certainly be *someone* at the company that remembers you, and can help you contact your original supervisor. If so much time has passed that everyone has moved on, you probably need a more recent reference anyway.
One thing you could do is 'connect' with your intership colleagues on a professional social networking site such as LinkedIn. This might be useful if you need to track them down in future.
Upvotes: 2
<issue_start>username_0: *Note: This question is intended to be a generic version of something that gets asked occasionally on this site. I apologize for making it overly-contrived, but it's intended to be a generic template of this style of question. A previous version of this question was locked due to Personally Identifiable Information ([PII](https://en.wikipedia.org/wiki/Personal_data)), so I'm re-posting the question anew.*
I was recently reading about [topic] and I kept finding articles by [crackpot], such as [link] [link], and [link]. How the heck can anyone trust this person? They've been disproven as a crank by [notable person] and [notable news source], as well as people on our own website: [link to some other stackexchange site]. People tend to say really nasty things about them:
>
> [crank] is an idiot who only spouts [expletive]. I wouldn't ride the bus with them, much less be a collaborator on academic work.
>
>
>
How on Earth are trusted academic sources such as [high h-index journal] publishing work from this [amusing yet insulting word]?<issue_comment>username_1: Among other things, based on observation, sometimes an element of the crank-ery is an exorbitant self-confidence and enthusiasm for self-promotion... even, or perhaps *especially*, in the face of negative professional feedback.
I can easily visualize personally timid expert people getting worn down by the importunings of a crank they misguidedly tried to help.
I have a little more difficulty visualizing serious journals getting similarly worn down, but it's conceivable to me that it could happen through various event-sequences. "Being pushy" does have its rewards...?!?
I think the relevant dynamic here is that "the crank" "has nothing to lose", and subliminally realizes this, and thus has a much different context in which they operate. The established professionals and journals do "have something to lose", but apart from inhibiting endorsing/publishing doubtful things, there is also a strong social/moral disincentive to be toooooo negative to enthusiastic amateurs, etc. This dynamic is used by con artists more generally, I gather. Playing on the tendency of people to "be nice".
Upvotes: 1 <issue_comment>username_2: I'll note that a "crank" in one field might well be an expert in another. Some incredible racists who spout crap are well respected researchers in, say, math or electronics.
I'll also note that someone who has done important work early on can become a crank later in life. The opposite is also true. Road to Damascus, and all that.
Sometimes a perfectly well respected and published researcher will run in to some idée fixe that they fail to shake off with all of their attempts to follow through rejected by their peers.
I'll also note that the incompetence of reviewers at good journals is probably the least likely explanation. And pressure on reviewers, likewise, is hard to manage with blind reviewing.
But, was [Poincaré](https://en.wikipedia.org/wiki/Henri_Poincar%C3%A9#Poincar%C3%A9_and_Einstein) a crank for not recognizing Einstein's work? Or Einstein for turning aether theory on its head?
---
Note that neither the question nor this answer address the issue of how *crackpot articles* get published. That requires a completely different analysis.
Upvotes: 6 <issue_comment>username_3: Why would it be impossible? Remember, peer review is only supposed to judge the work that is presented. It's not supposed to judge the reputation of the authors, or the credibility of the author's work that wasn't submitted.
Take any of the papers you linked, then cover up the names of the authors and show it to people in those fields. Will they disapprove of the work? If they only start disapproving when they know who the author is, then we have exactly something which double blind peer review is supposed to stop.
The idea that [crank] is a crackpot and therefore *everything* they've ever written is untrue falls afoul of *ad hominem*. Beware.
Upvotes: 6 <issue_comment>username_4: One possibility is that the crackpot is a close friend of an editor in chief of a peer-reviewed journal. The editor in chief unethically changes the peer-review process so that the paper is accepted even if it is not worthy of publication.
Upvotes: 3 <issue_comment>username_5: Just as an added note: if not all research of said crackpot is bad (and as others have said, it most often isn't), if the review process is double blind, then the only person who could reject the paper simply because they knew the person was a crackpot is the initial editor who accepted the paper for the peer review process. And I really don't know if editors make it a habit to look up every single author of received papers (I guess they rather don't).
Upvotes: 4 <issue_comment>username_6: The original posting had links to articles with multiple authors. Where there are multiple authors, it is possible that the articles are fine and that most of the authors are fine, but a person somehow insinuated his way into being listed as a co-author but in fact did little or no work on the article. This could happen in a variety of ways from just being friends with the authors, being persistent or doing some favor. I assume that journals accept the list of co-authors submitted, and do not investigate each to see if they actually made -- or could have made -- any substantive contribution.
Upvotes: 3 <issue_comment>username_7: First, I think it is appropriate to describe peer review by paraphrasing Churchill's famous quote on democracy: It is the worst form of publishing scientific results, except for all the alternatives. Peer review has a high failure rate. I've seen results that can be easily demonstrated to be fundamentally flawed, containing elementary errors in math or reasoning, appear in top-rated journals. I fought and sometimes lost battles with referees whose criticism was obviously nonsensical, but they stuck to it, and eventually I just took the coward's way out and submitted the paper to another journal. Still, when you look at what gets published when there is no effective peer review, clearly peer review helps a great deal, its flaws notwithstanding.
Second, there is a continuous spectrum between mainstream science and utter crackpottery. Revolutionary ideas sometimes appear at first in a form that makes you wonder what the author was smoking. Consider this quote from <NAME>, for instance: "Three quarks for Muster Mark! / Sure he has not got much of a bark / And sure any he has it's all beside the mark." It inspired <NAME> to use the word "quark" in his proposed mechanism behind the "eightfold way" involving a new substructure for baryons and mesons, with elementary particles carrying fractional charges. A miracle this paper even got published. Yet it now forms the foundations of the SU(3) part of the Standard Model of particle physics, one of the crowning achievements of modern physics overall.
Finally, it is important to note that no crank or crackpot thinks he is a crank or a crackpot. I am regularly approached by strangers offering their, ahem, unconventional ideas on physics. Many have respectable scientific credentials. They have strong faith in the validity of their concepts and often go to great lengths to explain them in detail, "prove" them, provide background, even propose experimental verification. Most of them are genuinely good, well-meaning people, devoted to what they do, who spent a huge amount of time developing their ideas. This, of course, makes it even harder for them to accept the possibility that they were wrong all along, that what they saw as profound insight was just a symptom of their profound ignorance.
Which sometimes makes me wonder: How do I know that I am not one of them? Indeed, how do you know that you are a crackpot? The answer is, you don't. It's like that German Autobahn joke I heard eons ago, about a driver who listens to his car radio warning travelers that a demented driver is going against traffic on the wrong side of the expressway. "One driver?" he asks rhetorically, pointing through his windshield, "There are hundreds of them!"
Upvotes: 4 <issue_comment>username_8: [<NAME>](https://en.wikipedia.org/wiki/Jacques_Benveniste) published in Nature an article about (broadly speaking) the "memory of water" in *Nature*. *Nature* added a warning about *"the incredulity of the many referees"*, but still went for the publication.
Why? Because Benaventiste was a Famous Scientist That Can Be Trusted (TM).
For anyone else, the article would have been rejected with a roar of laughter, but here rationality went on vacation and Nature wanted to be the first one to publish breakthroughs.
**So to answer your question: this happens sometimes because of money and connections.**
A special mention to <NAME>, whom I admired and whose Nobel Prize I was proud of, until he went nuts about homeopathy and vaccines (including his defense of Benveniste's biased results)
Upvotes: 3 <issue_comment>username_9: There is of course the alternative possibility that the person being pilloried by
>
> [notable person] and [notable news source], as well as people on our own website:
>
>
>
may simply be correct. It's not *a priori* reasonable that those authority sources deserve more faith than the published researcher on the topic in question.
>
> People tend to say really nasty things about them
>
>
>
And this is sadly often true about those going against the majority view, the existing power structures, groupthink or vested interests (classic examples: <NAME>, Galileo, Darwin).
One can receive opprobrium and still be correct. If the anonymous work has been peer reviewed and stands up on its own merits, then that may in fact be evidence that the criticism received by the
>
> [amusing yet insulting word]
>
>
>
may not be entirely justified. Suppression of dissenting ideas has often slowed new discoveries and scientific progress, so we should be wary of contributing to it in our own modern way.
Upvotes: 3 <issue_comment>username_10: Others have made valid points here, but there is one that seems to be missing. Or, at the least, talked around. And that is the logical fallacy you are engaging in here.
One should never, ever, ever judge a factual or logical argument based on the person making it. Judging it that way is known as the Ad Hominem logical fallacy. Merit alone is all that matters in the world of knowledge. All other reasons above are valid. But to those I'd like to add that even an idiot can be correct. Hell, you've heard the saying that a broken watch is correct twice a day.
To put it another way, if a mentally handicapped person (once regarded as retarded) were to tell you the sky was blue, would you suddenly doubt a lifetime's experience? If a known idiot told you that 2 + 2 equaled four, would you assume it equaled 6?
No, of course you wouldn't. Because all that matters are the facts.
And this goes the other way as well. Just as it is a logical fallacy to assume anything a crackpot says is wrong, so too is it a fallacy to assume anything a respected researcher says is correct.
We see this in politics all the time. And it goes beyond simply ignoring anything a Democrat or Republican says simply because they are a Democrat or Republican. It goes beyond ignoring Fox news or MSNBC. Both sides love to hurl names at each other, negatively charged descriptors, very much along the lines of the one you employed: crackpot. But to that they add Nazi, racist, rapist, heartless, supporter of big business etc. etc. etc. All that should matter are the details, ironically the very thing many politicians would die before giving you.
Nor can we assume that, just because someone has a high degree they will be correct. The tobacco companies proved that one conclusively. Sadly, I see this form of the Ad Hominem fallacy the most. The entire world seems to suffer from [Wizard of Oz syndrome](https://www.youtube.com/watch?v=2pWSwfVDiq8&ab_channel=MihaiTufa), acting as if it is the attaining of a piece of paper that somehow imparts knowledge. In truth, the paper is meaningless to any discussion. It is the points presented, and only the points presented that should matter here.
We want the facts, and nothing but the facts, so help us god.
Upvotes: -1 <issue_comment>username_11: >
> They've been disproven as a crank by [notable person] and [notable
> news source].
>
>
>
Because some basic logical fallacies are assumed in such a subjective question, most any answer would have to first address that. They are:
* Argument from authority
* Fallacy of incomplete evidence
* Ad hominem
Addressing evidence for or against the **specific claim** rather than spending time on reputation of the claimant is the only way to determine the claim's veracity.
Upvotes: 3 <issue_comment>username_12: There was this one guy called <NAME> who was considered a crackpot, and a dangerous one, by all the experts at the time he was alive.
He was later proven to be correct, but there are still people believing the erstwhile experts to this day.
He eventually got published, despite being considered a crackpot...
Upvotes: -1 <issue_comment>username_13: One of the reasons why "crackpots" can get papers published is because a lot of journals allow authors to suggest potential reviewers. Journals should *never* do this - the editors should be sufficiently familiar with the topic to be able to select appropriate reviewers for themselves (authors should be able to suggest people that *shouldn't* be selected as reviewers). I feel extremely uncomfortable every time I am asked to suggest reviewers for my papers, but happily this is fairly uncommon in my field (machine learning).
I (or at least my fictional alter ego, <NAME>) was involved in a case where this led to a [comment paper](https://retractionwatch.com/2018/04/23/flawed-climate-science-paper-exposed-potential-weaknesses-in-peer-review-process/) by the journal editors explaining the failure of peer review and the change in the journal's policy (it wasn't the first time this had happened).
Another reason is the lack for academic reward for publishing comments papers, which should be an important element of post-publication peer review or quality control. If people knew their "crackpot" paper would be likely to attract a critical comment, there would be more of an incentive not to do it. I've written a few comments papers and they are a *lot* of work.
Journals don't seem to do a great deal of checking to see if arguments have been made and refuted before. One of the comments papers was on a study that argued a statistic technique used in a variety of areas in biology was wrong. But of course, it wasn't, the authors just didn't understand it correctly. So I wrote a comment paper, but I have found the authors have published a large number of papers making similar claims in a variety of applications of this technique. Nobody has the energy to refute all of them.
The last reason is that there are a *lot* of journals these days, so if you get a paper rejected, they can easily be sent somewhere else, and if you test the lottery of peer-review often enough, it will eventually fail. Good authors adapt their papers according to the reviewers suggestions, bad authors just send it off somewhere else with minimal changes.
So there are a couple of practical reasons why it happens.
Some have mentioned Galileo and Darwin and Einstein. It is important to consider though that Galileos, Darwins and Einsteins are vanishingly rare, but crackpots are near ubiquitous. So if you think you are a Galileo the odds really are not in your favour and it is self-skepticism that stand between you and "going emeritus".
Upvotes: 2 <issue_comment>username_14: Something that doesn't seem to have been mentioned here is that the paper could just have been poorly written, and have enough buzz words from complicated enough fields to intimidate the reviewer. And possibly the reviewers might be junior enough that they aren't confident in their reviewing skills yet or senior enough that they only skim through the papers and fill in gaps with something reasonable rather than what is in the paper.
The first paper that I happened to review was one of these papers, on a complicated field and poorly written so it was hard to assess what the authors had done or were saying. Reading through the appendix made me notice they had defined variable A based on variable B and variable B on variable A in their algorithm. I checked a previously published paper of theirs that they referenced and found the same thing so rejected the paper on writing and this inconsistency (or I think I gave the benefit of the doubt and said resubmit with major corrections). I suspect the first paper was published even with this issue due to it being difficult and intimidating to read the original paper just as it was the one I was reviewing.
But I could easily see the temptation to just let such a paper through if you feel you couldn't understand it and had bitten off a bit more than you could chew. Alternatively, the good reviewers decide it isn't worth the time and decline after accepting (because they know it isn't worth the time), and the paper then goes to reviewers who are more likely to rubber-stamp it because they are intimidated and inexperienced, so these kinds of papers slip through. This is part of the reason I consider it completely valid to criticize the clarity of the work when reviewing, and why I try to make my own work clear and will make edits if a reviewer misunderstands work I'm a part of.
Upvotes: 2
|
2021/07/06
| 694
| 3,065
|
<issue_start>username_0: A student working on a master's thesis in mathematics will have to present it at the end of the semester in front of the professor.
The professor will ask several questions during the presentation. What is the intent of those questions?
Basically, what is the professor trying to achieve through those questions?
Does he want to see whether you are the writer of this thesis and that you didn't have someone else do it for you?
Does he want to know whether you understand the concepts you are presenting? (I am guessing yes, since this is the point of a master's thesis after all)
Does he want to know whether you can use the information in your thesis to come up with other conclusions, not mentioned in the paper?
Perhaps he will give you some short mathematics exercises he expects you to solve during the presentation.
Can you explain what is the point of those questions?<issue_comment>username_1: Often there will be a committee, and not just one professor asking questions. In some institutions, any faculty member can attend and ask questions.
The principal reason is to be sure you do understand the concepts you have presented, which has the added effect of reassuring them that you actually did write the thesis. They may also probe for broader, more general understanding of the subject matter of your degree.
Your thesis advisor or committee chair can help you prepare for the oral examination part of your presentation by giving you advice on how to prepare. It may be possible for you to attend the thesis defenses of other students to see how they are conducted. If so, I urge you to do so.
Upvotes: 2 <issue_comment>username_2: I can't know what the practice is at your particular university, but that kind of public presentation is common. Other faculty and students may be invited, and may come. The forum gives the student a chance to explain to the audience what they wrote about, what they learned, and why it's important and interesting. Unless there is some doubt about the correctness of the thesis or some doubt about the references the questions are likely to be open ended and straightforward.
Upvotes: 1 <issue_comment>username_3: After some decades of naively thinking that the point of an exam was the exam itself ... I've finally realized that the largest point is to give students a motivation to study/review/think.
So the point is that the student knows, in advance, that questions will be asked. And that not being able to respond will be, at least, an embarrassment. So the student makes sure to improve their ability to answer relevant questions. *That* is the real goal. :)
And, for that matter, one's advisor should not allow the exam to take place until they are confident that the student is adequately prepared. There's scant purpose in carrying out an exam wherein a bad outcome is predictable (except, conceivably, in extreme cases where the student is really not working hard, is uninterested and uncooperative, yet somehow doesn't realize they should leave the program voluntarily...)
Upvotes: 3
|
2021/07/06
| 709
| 2,994
|
<issue_start>username_0: I'm from Brazil and I really, really want to be able to pursue my Ph.D. in Immunology at Yale University.
I have a Bachelor's degree in Medicine here in Brazil, and I'm currently taking my Master's degree in the field of Mucosal Immunology.
However, while I do think my curriculum is competitive, I am not sure it would be enough to get me into Yale, to be honest. Besides, there's this Japanese Government program where Brazilian students are offered opportunities to do a Master's degree in Japan. It's a really great opportunity and I think I can easily be accepted into this program.
So, as Japan is a developed country with a lot more resources for scientific research, I was thinking that maybe taking another master's degree through this program could actually help me strengthen my curriculum for my Ph.D. application at Yale. Another Master's could help me to get more publications and learn more techniques.
However, at the same time, I'm really worried that this could be a red flag during my Ph.D. application because it's really weird to see someone with two Masters in the exact same field. So, would another master's degree in the same field in this particular case of mine be considered a red flag?<issue_comment>username_1: If you want to get a doctorate (especially in the US) then apply to doctoral programs. Apply to several, not just a few, and make sure that they aren't so similar that being rejected by one is highly correlated with being rejected by all. Cast a wide net.
If you are "well enough" prepared to enter a program then you are fine. You will have an opportunity to fill in a few things you might have missed.
But getting another non-doctoral degree is probably a waste of your time and effort.
I suspect that your chances of getting into Yale, specifically, are near zero. Not because of you or your background, but because the competition is extreme for a relatively small number of available positions. Perhaps lightning will strike, but a wider net gives you more options. And the "name" of your university is less important to your career overall than what you do yourself. And, believe it or not, there are excellent professors at places other than Yale.
Upvotes: 3 <issue_comment>username_2: ### Maybe, if one is practical and the other is research-based.
So, my understanding is that in medicine and law, the postgraduate degrees that lead into the practice of the field (e.g. Medical Doctorate) are roughly equivalent to an academic Master's Degree that focuses more on research than on the practical application. As a result, getting an M.D. and a Master's Degree may not be completely redundant, and if your first degree was the M.D., then getting the more research-focused Master's Degree *might* make you more suitable for a PhD.
Of course, I don't work for Yale, and I don't have any insight into their admissions process in particular- if you want an answer about that, you'd need to ask them.
Upvotes: 0
|
2021/07/06
| 1,081
| 4,672
|
<issue_start>username_0: I am a complete beginner and I want to publish my first research in theoretical physics. I do not yet have the terminology of experts in this field. Is it possible for research in physics to be accepted without precise scientific terminology and experience?<issue_comment>username_1: It is possible, but very unlikely, that such a paper would be published. You would be likely to confuse experts if you use terms that aren't standard, though good explanations would help.
More serious, though, is that your lack of knowledge is likely to lead you to write things that experts consider trivial, and you might wind up making erroneous statements.
But one way to proceed is to find someone local who is knowledgeable, and ask their advice on what you write.
Your amateur standing is much less important than the accuracy, novelty, and importance of what you write. Occasionally an amateur will have some fresh insight into an important problem, but it is very rare.
Upvotes: 4 <issue_comment>username_2: It's very improbable. If you don't know the terminology, odds are you don't know what you're talking about and cannot contribute meaningfully to the field, especially if the field is theoretical physics (this field is complex enough that many physics graduates cannot contribute meaningfully to it).
If I received a paper discussing, e.g., dark matter and it's not clear the author knows what dark matter is, I would probably desk reject.
Upvotes: 4 <issue_comment>username_3: Publishing a solo paper is a big ask for someone at the level of "very beginner." Papers take a long time (often at least a month) for experienced people to write well because there are so many layers of details -- ranging from making sure every statement is technically accurate, to making sure there are no ambiguities or ways a sentence can be read in an unintended way, to conventions about hyphenation -- that all have to be checked and double checked.
If you are not sure you have a grasp of the language, my friendly suggestion would be that you are not ready to undertake this. There are other activities you can do. If you have some peer group of people who are at a similar level (if not, you should try to form one), you can
* start a journal club where you read papers and explain them to each other
* share your draft with some critical and smart readers and address their comments
* rope one in to collaborate on your paper, have them check calculations and suggest their own ideas
* give a presentation about your work and see if you can explain each step
Even if not, there are many opportunities for self-study:
* over the past year there has been an explosion of online conferences with talks available online, find and watch talks relevant for your area
* watch online videos of courses in your field
* take a course in technical writing
* read famous papers in your field carefully
* keep tabs on the arXiv for your field
* prepare slides *as if* you were going to present your work and give them out loud (I have found talking through every step out loud, even to yourself, to be a surprisingly effective way of finding gaps)
* [be skeptical of your own work](https://terrytao.wordpress.com/career-advice/be-sceptical-of-your-own-work/)
Upvotes: 3 <issue_comment>username_4: No, it’s not possible. No one wants to read a research paper with vague and imprecise explanations that don’t make use of the correct terminology of the topic you’re writing about. We have a place to read such things — it’s called “everywhere that’s not the scientific literature”.
Research papers are exactly where researchers go to read precise, carefully thought out ideas that advance their understanding and the state of knowledge of their field. In the research literature, especially in math and theoretical physics, precision is the name of the game. If your work does not have the quality of being precise at the standards of the area you’re working on, it’s not valid work, and you shouldn’t be trying to publish it. Instead, you should work to bring the precision up to the expected standard. Once you get it to that level, that’s when it’s appropriate to publish.
Upvotes: 2 <issue_comment>username_5: I agree with all other answers. Peer-reviewed scientific journals are not the place for imprecise terminology.
If you really think your work contains good research, perhaps you could look at depositing it in a preprint server such as arXiv. These are not formally peer-reviewed, but they are part of the scientific record. It might help you get some valuable feedback from the scientific community, which you can use to improve a future submission.
Upvotes: 0
|
2021/07/07
| 611
| 2,634
|
<issue_start>username_0: I am in a master's program (mathematics) and I have written a paper with my supervisor. My supervisor taught me the whole **general** procedure of finding papers to start with, methods of research, writing papers, etc. I have learned so much from him, but what I have learned has all been about how to do research. In contrast, the mathematics content of the paper is not within his area of expertise, and he has done almost nothing! Now he wants me to put his name as the first author on our joint paper.
1- If I reject his demand, I will get a low grade for my thesis because he is a jerk, and thus my chance of being accepted at a good university will be reduced dramatically. I live in a country where no one can object to their teacher's grading!
2- If I comply with his demand I will be a second author, and again, my chances of being accepted at a good university will be reduced dramatically.
Added after @Allure's comment: There are similar questions asked on this site, but they discuss such things as whether the advisor's demands are right or not, and what actions should be taken. As I stated, I live in a country where there is no effective way to object to a bullying professor, so my question "What should I do?" still remains because of my special situation.<issue_comment>username_1: What kind of graduate program are you trying to get into? In pure mathematics and many parts of applied mathematics and theoretical computer science, no one cares about author order, and the convention is that authors are listed alphabetically. So, I’m not entirely sure, but I think you may have some misconceptions about the reasons for your advisor’s request to be listed first, and/or the effects that your being listed second would have on your academic career.
Upvotes: 4 <issue_comment>username_2: There could be many reasons why your advisor wants to be listed as the first author, e.g. for reasons of funding. There may also be certain conventions to consider here, e.g. that a professor is expected to be listed as the main author, or that alphabetical order is used generally. The bottom line of what I am trying to say is that there may be very good reasons for him to want to be listed as the first author that you don't know about or haven't even considered yet.
If it really is such a big problem for you, you should in any case talk openly about it with your supervisor so that he can give you his reasoning. If it would be common in your field, you could also try to negotiate a shared authorship with him. If you tell him your reasons, he might be open to that as well.
Upvotes: 1
|
2021/07/07
| 527
| 2,277
|
<issue_start>username_0: I am looking for the best place to publish a paper that I am preparing. It will be my first, and there aren't many clear and concise sources of information on where the research I have been doing should be submitted.
My question is: what are the factors to consider when choosing a journal for a first publication?
Thank you in advance!<issue_comment>username_1: It is really important to submit your research paper to a good-quality journal. My suggestion is to submit your paper to a DOAJ-, WOS-, or Scopus-indexed journal. Your paper may or may not get accepted on the first submission, but I am sure you will get a revision request or feedback that will help you develop your way of doing research.
Just make sure to submit your paper with an understanding of the journal's scope. If you submit to a journal that is out of scope, the paper will get rejected without any feedback.
There are numerous pay-and-publish, fast-track journals; stay away from those.
Upvotes: 1 <issue_comment>username_2: Which journals do you read? Where do the references you cite come from? Where do your colleagues publish? This is very very subject-specific knowledge, but in any field there are a few key journals, varying in their specialised focus and in their reputation, and the experts know which they are because they use them.
Upvotes: 3 [selected_answer]<issue_comment>username_3: The answer does not differ whether you are publishing for the first time or the hundredth time. You should publish in the journal most suited to your research. That is, don’t choose a “worse” journal simply because it is your first time.
If you are having trouble thinking of which journal is most suitable because it is your first time, here are a few suggestions:
* Which journals do you read from?
* Which journals do you cite?
* Which journals do faculty in your field tend to publish in?
* Read any guidelines for publishing on the journal’s website. Think about factors like how long your paper is compared to others published in that journal, for example.
* Ask your advisor (presumably in your field) for advice.
* Have a few choices in mind in case your paper is not suitable/gets rejected at your top choice, **but only submit to one journal at a time.**
Upvotes: 1
|
2021/07/07
| 1,290
| 5,462
|
<issue_start>username_0: Background: I’m a rising (undergraduate) junior transferring to a different school starting with this upcoming semester. At my previous school I was an English/Philosophy double major, but I took way more philosophy courses than English, so that at my new school if I want to complete my double major I’ll have to take about 37 credits of English courses, most of which are major requirements + 5 or 4 gen eds + three or so elective philosophy courses.
My preference would be to take more than three philosophy courses, especially as I’m still deciding whether I want to pursue graduate school in English or Philosophy, and if I go with the latter I’ll need letters of recommendation (I didn’t form many close relationships with professors at my previous school, so this is basically my only option). I’ve found that if I switch to a Philosophy major and English minor, I’ll have way, way more freedom with regard to credits. I'd essentially get to take whatever I want in the next two years, so I could split my attention evenly between Philosophy and English.
Here’s my problem: I really would like to leave myself the option of pursuing grad school in English, and I don’t know if I can do that with just an English minor. As far as I can tell, it varies by program. I know that some allow it, but I don’t know how common it is. Right now, what I'm most concerned about is whether I could still make a strong applicant to the top English MA/PhD programs.
Would my best course of action be to simply contact each individual program and ask about their requirements for applicants?
P.S. Any outside resources on this subject would be appreciated. I can't locate much information about this online.<issue_comment>username_1: >
> I really would like to leave myself the option of pursuing grad school in English, and I don’t know if I can do that with just an English minor. As far as I can tell, it varies by program.
>
>
>
I would recommend choosing potential graduate schools that you want to apply to in advance so that you can make the optimal decisions right now. If schools that you want to attend do not have a hard requirement, then no problem. If schools that you want to attend do have a hard requirement, then you need to make certain decisions while you are still in your undergrad.
>
> Would my best course of action be to simply contact each individual program and ask about their requirements for applicants?
>
>
>
If schools that you most want to attend have a hard requirement (listed on their website, for instance), then it might be good to ask about your situation. But for schools that do allow it, this does not seem like you would learn any new information.
>
> Right now, what I'm most concerned about is whether I could still make a strong applicant to the top English MA/PhD programs.
>
>
>
There are two major things to consider here:
* Do **you** believe that you are a strong candidate? If not, consider what steps you need to take in your undergraduate education to prepare yourself for grad school.
* Assuming that your answer to the first question is affirmative, do you think your **application** would convey that you are a strong candidate?
Many graduate school applications in the humanities require some form of the following:
* GRE test scores
* Letters of recommendation
* Past transcripts
* Statement of purpose
* Writing sample
Check what different programs require for admission, and tailor your application accordingly. If your philosophy training has given you adequate skills to write well, your writing sample and GRE scores should demonstrate that. Past transcripts should show a high number of English courses taken. Letters of recommendation from faculty even outside your field could be very valuable. And finally, your statement of purpose should indicate why you believe you are a strong candidate, which carries significant weight when evaluating a potential new student.
Good luck!
Upvotes: 3 [selected_answer]<issue_comment>username_2: Since you haven't indicated a country, this answer is only valid for the US (and, perhaps, similar places).
It is fairly common for people with a BS here to enter doctoral studies in a different field. This is due to the very general nature of the US undergraduate education, with only "a bit" of specialization in the major. US undergraduates get a "taste" of many different fields, some quite different from their main interest. US doctoral programs are usually defined with this in mind and the early year(s) have advanced coursework leading to some comprehensive exams demonstrating a broad knowledge of your field. If you have any "lacks" you can use this period to get up to speed.
So, having "only" a minor in English wouldn't necessarily be any handicap at all, and, there are surely instances of philosophy or history majors entering English for doctoral studies as well as the opposite.
The [answer of username_1](https://academia.stackexchange.com/a/170834/75368) lays out the usual application requirements (though GRE is fading a bit in importance). Note that letters of recommendation are relatively important in US (compared to some other places) and you need to make your case about your future in the SoP.
I'd suggest that if you keep your hand in both fields, even without a double major and make enough faculty contacts so that you get good letters in either field, then you should fare OK.
Upvotes: 1
|
2021/07/07
| 561
| 2,425
|
<issue_start>username_0: I find that many professors are to some extent supervising employees in the industry. They are helping the industry guys publish papers or solve difficult problems.
I don't know what motivates them to spend their time on that. I can think of three possible reasons: 1) they can introduce their students to the company; 2) they keep up with industry trends through that method; 3) they are paid. I once heard that some professors have side hustles and earn consulting fees, but I am not sure.
How are academia and industry bridged?<issue_comment>username_1: "Supervising" may be overstating the case. Collaborations, of course, can be valuable if a faculty person is interested in applications of theory to real world problems. The same collaborations can be valuable to industry if theoretical expertise is needed and not present in the company. Papers arising out of such collaborations can be as valuable as any other.
Some industry folk work in the other direction, coming to the university to collaborate on things of mutual interest.
The collaborations I've done haven't been paid, but resulted in fairly informal grants that I could use for travel and other research support. Some (not all) of the (industry) people I worked with were doctoral students at the university and their research was peripherally related to the project, though not directly. It was a learning experience all around. One of the projects I was involved with resulted in a major change of direction in an important segment of the company. They were, in fact, seeking solutions to important problems from academics. A couple of us were able to provide that assistance.
Side hustles, such as a separate business (an accounting professor working for an accounting firm, say), are possible and do happen, but most universities will control them in some way, often by forbidding them for full time faculty. They may be more important in those places where faculty salary is very low.
Upvotes: 1 <issue_comment>username_2: Just as ballast to other ways of thinking about this: I myself will spend effort+time on questions that interest me, and where I think I can be helpful. :)
Yes, having a tenured faculty position makes this possible... but/and I do attempt to act in the interest of the people who pay my salary, namely, as <NAME> once put it, "the farmers and workers of Minnesota".
Upvotes: 0
|