Dataset columns: date (string, length 10) | nb_tokens (int64, 60–629k) | text_size (int64, 234–1.02M) | content (string, 234–1.02M)
2021/07/27 | nb_tokens: 812 | text_size: 3,545
<issue_start>username_0: I was originally an undergraduate student from pure mathematics and am planning to transition into biostatistics/bioinformatics/statistical genetics. To accumulate experience, I planned to apply for some **internships** but lots of them require 2 recommendation letters. I have, for sure one of them from my master thesis supervisor. For another one, I am choosing from a Ph.D. student or an assistant professor.
For the assistant professor, I took a 1-month summer project in applied mathematics (ecology) with him. It has nothing to do with biostatistics/bioinformatics/statistical genetics except some programming experiences. I met him once per week face to face.
For the Ph.D. student, I am currently working with him on a topic related to biostatistics/bioinformatics/statistical genetics (a collaboration), which will possibly lead to a publication. Still, the topic is not directly related to the internship project. I have never met him face to face because we are in different geographical locations.
Who should I ask for a recommendation letter? Any suggestions?<issue_comment>username_1: As a graduate student, I wrote multiple recommendation letters for students who took my courses and whose research projects I helped shape by providing above-and-beyond feedback. I emphasized to these students that recommendation letters from professors would carry more weight (in the context of graduate school applications), and they made sure to have strong references from professors in that regard.
I think the deciding point here should be ***who*** can write you the strongest letter. It might be the PhD student in this scenario based on what you've described here, but only you can determine who will best be able to speak to your strengths and qualifications for the internship. In the case of an internship, I'm unsure how much the writer's rank will influence how the recommendation letters are viewed in review. In my experience, the best references are as I've described above: strong and showcasing your strengths/qualifications for a given position. A possible strategy could be to ask the PhD student first, and if they decline, ask the assistant professor.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Is there a professor in charge of the project you're working on with the PhD student? This professor is a natural person to write the letter. They probably don't know you, but they could ask the PhD student who's supervising you to give them two or three paragraphs (or more, if the PhD student is willing) to include in the professor's letter.
Upvotes: 3 <issue_comment>username_3: Since you're going at least a little bit out of your original discipline, slightly odd references are not a problem. Undergrads rarely have rock-solid, gold-plated credentials in the first place. If the prospective position required a real hot shot, they wouldn't be hiring an undergrad intern.
And not enough reputation to reply to a comment, but yes, there are "PhD students". At a few schools those folks are just called "grad students" and it's implied that they're trying to finish a PhD (and if Google poaches them, they get an MS if they made it through 36 hours). At most schools in the US, a student has to be explicitly admitted as a PhD student, and this may or may not require already having a Master's degree - I've seen a couple of engineers with a BS and a PhD. Once they complete the PhD qualifier exam(s) they become a PhD Candidate. There are outliers like the (rare!) MFA in Computer Science.
Upvotes: 0
---
2021/07/27 | nb_tokens: 1,497 | text_size: 6,284
<issue_start>username_0: [arXiv](https://en.wikipedia.org/wiki/ArXiv), by itself, and not as a preprint server, is already moderated by top computer science researchers at Cornell.
I understand it's not a top journal, but isn't at least getting into arXiv actually more reputable than submitting directly to a lower-tier journal? Especially if 90% of such journals will just publish it anyway.
There is also [evidence](https://arxiv.org/pdf/cs/0603056.pdf) that arXiv papers are cited more, although this isn't conclusive.<issue_comment>username_1: The standard for the arXiv is "looks like an actual paper on the first glance". So the fact that a preprint appeared on the arXiv isn't really adding any respectability to it that a researcher in the field couldn't spot themselves. The point of the filtering the arXiv does is instead about avoiding spam.
Upvotes: 5 <issue_comment>username_2: I **strongly** encourage you not to trust *anything* on arXiv (unless it has a gazillion citations).
I had the chance, many times, to (officially) review papers that I "noticed" beforehand on arXiv.
As someone else mentioned "at first glance they looked OK". Then, after reading into them, I started to notice a lot of issues. And I'm not talking about small inconsistencies; I am talking about significant problems that completely invalidated the papers' main claims.
Just to give you two examples.
* A paper once stated in the abstract "Our method is superior to the state-of-the-art method X by y%". Then, on reaching the experimental section, I discovered that the authors had used a completely different setup than what was used to evaluate the original method X.
* Another time the abstract stated "our attack breaks the state-of-the-art technique X". Then, later, the authors wrote "we did not implement X as it should be done".
Any reviewer would spot such issues right away and reject these papers even in low-tier journals because they are flawed.
Note that I'm not saying that *any* arXiv publication is trash. There may very well be (actually, there *is*) good source material on arXiv. However, while you can take any peer-reviewed publication (almost) at face value, when it comes to arXiv you need to thoroughly question its validity first - and use not just a grain, but an entire bag of salt.
My bottom line: "publishing" on arXiv is not devoid of meaning, but the *scientific value* of arXiv papers is either negligible, null, or negative. I would take a low-tier journal paper over an arXiv publication any day, if given the choice.
Upvotes: 4 <issue_comment>username_3: First, arXiv is *hosted* at Cornell but this does not mean it’s moderated by Cornell people (that may be true in CS but it’s not true in physics, where the pool of moderators is quite wide).
Next, different disciplines have different traditions, so beware of self-selection. In my field of physics, there is now a growing trend where authors upload to arXiv only *after* they have received comments from a journal referee. This avoids major updates that may result from the refereeing process, minimizes the number of arXiv versions, and also prevents the material from going stale before the next submission if it has to be resubmitted (nothing worse than a paper that has been "sitting" on arXiv for 18 months). Thus, some fraction of authors only submit to arXiv after third-party validation through the usual refereeing channels.
ArXiv is fantastic to get pre-publication feedback - someone will provide additional references, will point to possible improvement in a proof, will ask for additional clarifications. However, be reminded that moderators keep out the obviously bad and crackpotty stuff, but that’s about it. Since arXiv is easily searchable and open access, it is no surprise that submissions there will have greater visibility than if only published in a journal behind a paywall.
Very few active researchers will referee more than one paper per month because refereeing is quite time-consuming. *A fortiori*, no moderator has time to do refereeing-level checks on their share of daily submissions: one would spend 24 hours of every day checking papers.
Upvotes: 6 [selected_answer]<issue_comment>username_4: Moderation on arXiv is extremely quick - papers typically go out within 24 hrs of submission, except for weekend submissions. The moderation process is described in this [arXiv blog post](https://blog.arxiv.org/2019/08/29/our-moderation-process/):
>
> The team performs checks with software support to confirm that the submission is classified to the appropriate subject category, that it is in compliance with our technical requirements, including formatting and metadata information, and that the content is appropriate, which, generally, describes content that would be refereeable by a conventional journal and in the format of a research article.
>
>
>
In particular they are mainly checking that the author has submitted it correctly, including proper classification, as well as filtering out anything obviously inappropriate, e.g. if it isn't even an attempt at writing a research paper.
The moderation process is not about whether the paper is correct or not, it's whether it's a well-formed paper. Saying "this paper got on arXiv so it must be decent" would be like saying "Microsoft Word didn't find any errors in this paragraph, so it must be true".
Upvotes: 4 <issue_comment>username_5: arXiv itself isn't intended to be treated as a publisher. Submitting a paper to a low-tier journal helps its credibility because, at the very least, it receives some peer review. Posting a paper on arXiv just means you satisfy the requirements of their site and you have some credential (e.g., a university email address) for posting papers. A good rule of thumb is to verify that a paper has undergone peer review and publication in some trustworthy journal, and then read its preprint on arXiv.
Regarding your point on citations: many documents are cited from arXiv because authors post their preprints there before formal publication in a journal or conference proceedings, and at citation time the formal publication is not yet available. Or it's simply because people don't want to pay to read a document on a site like IEEEXplore and just cite the preprint on arXiv.
Upvotes: 2
---
2021/07/28 | nb_tokens: 1,053 | text_size: 4,646
<issue_start>username_0: >
> But <NAME>, a mathematician and Fields medalist at the University of Cambridge, wants to go even further: He envisions a future in which theorem provers replace human referees at major journals. “I can see it becoming standard practice that if you want your paper to be accepted, you have to get it past an automatic checker,” he said.
>
>
>
<NAME>, [How close are computers to automating mathematical reasoning?](https://www.quantamagazine.org/how-close-are-computers-to-automating-mathematical-reasoning-20200827), Quanta Magazine, August 27, 2020.
---
Is anything like this being done? I'm inclined to say it is, if they already have the tools, even if they don't publicly admit it.<issue_comment>username_1: ### Probably not.
Scientific papers, even papers in mathematics, are written in natural language - sometimes fairly idiosyncratic natural language, at that. For an automated theorem prover to verify a theorem, the theorem must first be rendered into structured data that the computer can process. While I'm not too familiar with these tools, I imagine this involves the mathematician manually encoding all the relevant definitions, variables, and parameters into the prover.
Natural language comprehension by computers is currently fairly primitive, and is largely limited to the computer determining things like which words are more likely to occur in combination with other words, rather than demonstrating any true understanding of their meanings. As a result, getting a computer to parse a scientific paper written in natural language would be a very difficult task!
Upvotes: 3 <issue_comment>username_2: There's literally zero chance any journal is doing anything like this. There's been exactly one instance of serious computer verification of a substantial portion of cutting-edge research within the timeframe of the refereeing process, and it's the [Liquid Tensor Experiment](https://xenaproject.wordpress.com/2021/06/05/half-a-year-of-the-liquid-tensor-experiment-amazing-developments/) where a group headed by <NAME>lin verified a key portion of new results of Clausen-Scholze. It was a big surprise to a lot of people that this was possible, and it's certainly not being done routinely or secretly.
Upvotes: 5 [selected_answer]<issue_comment>username_3: What <NAME> suggests can't be done in secret or as a simple add-on to an otherwise unchanged review process at the current stage of technology.
The suggestion, as I understand it, is that at some point in time, journals may require authors to submit along with a traditional article a machine-checkable proof of the correctness of the claimed results. The author would, hence, work with a proof assistant to produce a formal proof of a formalized version of their main results. Successful verification of the formal proof would then allow reviewers to focus on assessing the novelty and importance of the results, without having to worry about correctness too much. They would still have to be convinced, though, that the formalizations of any results given are really formalizations of the theorems shown in the paper under review.
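As a concrete (if toy) illustration of what such a machine-checkable proof looks like, here is a trivial theorem in Lean 4, one of the proof assistants used in efforts like the Liquid Tensor Experiment. The statement is elementary, but the point is that the kernel verifies the proof mechanically rather than a referee checking it by hand:

```lean
-- A toy machine-checkable theorem in Lean 4, using only the core library.
-- The kernel accepts this file only if the proof term type-checks
-- against the stated proposition.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Real formalizations of research-level results are, of course, orders of magnitude longer, which is exactly the work-factor problem discussed below.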
I think for this to happen the work factor of formalizing a typical mathematical proof would either have to fall to a level not much larger than writing up the paper (assuming that at least one of the authors is skilled in working with a proof assistant), or the production of a formalized proof would have to become a routine part of doing research. In the latter case, submitting a formal proof of one's results would be similar to submitting source code as supplementary material to a computer science paper.
For either of these to happen, proof assistants would have to become much more powerful and intuitive to use than they are today. That does not necessarily imply, though, that sufficiently useful proof assistants will only be available in the far future. Fully automated machine translation, for instance, has gone from being regarded as a fairly intractable problem to being a problem with good, widely available solutions in just a few years.
The problem of automatically verifying claims in a paper written in natural mathematical language is substantially harder still. I would be tempted to say that achieving this probably requires human-level AI, but on the other hand one has to admit that previous claims of AI-completeness have failed in various domains.
The romantic notion that mathematical reasoning is based on some miracle that can only happen in human brains is, however, nonsense of course.
Upvotes: 3
---
2021/07/28 | nb_tokens: 933 | text_size: 4,069
<issue_start>username_0: **Background:** I'm an undergrad math major, starting my third year of college soon. This summer, I worked on a reading project (some graduate-level math topics) under the guidance of a professor from a foreign institution (all interaction was online). Besides reading, I also prepared a *solutions manual* for a significant portion of the textbook I was reading (not readily available on the internet, so definitely a good deal of original effort was involved here).
**The issue:**
1. I wish to put this on my *Curriculum Vitae*, but I'm not sure how to.
2. The professor hasn't been very responsive with regard to any recognition for the project, though they have showered me with verbal praise (trust me - everything went great). Of course, I'm grateful for whatever I learned this summer, but it also seems important to have some sort of **concrete recognition** of it, for obvious reasons.
3. Did I make a mistake by not discussing this with the professor in advance? In any case, if reading projects of the kind that I described above are generally recognized, what should be my course of action at this stage (or can nothing be done)?
Thank you!<issue_comment>username_1: My view is that, yes, you should put it on your CV or resume or whatever you want to call it. That is more important for an undergraduate than for someone with more experience, however. And yes, as comments suggest, ask the prof for a letter as well if you think it would be supportive.
Note, however, that my experience is with the US, in which graduate admissions is based on a broad view of the candidate with many things contributing to acceptance. Things that are unusual, if positive, should be mentioned. If your readings and work were for a non-typical undergraduate subject, all the better. It shows interest, hard work, and dedication.
It won't be a major thing, but, at the margins, can make some difference. You might make it a separate section on a CV: "Additional Scholarly Activities".
Whether you made a mistake or not is irrelevant as the past is the past. But it is worthwhile keeping up the contact if possible.
Upvotes: 2 <issue_comment>username_2: Answering your questions in order:
1. Yes, it would definitely be effective to list on your CV.
2. From my experience with professors, supervising students, and interacting with faculty around supervising students, faculty don't always have in mind how to give concrete recognition for work. One reason for this is, to some extent, changing conventions between academic generations for how work is recognized (i.e., when they were in grad school the conventions might have been different).
3. I think it probably would have been helpful for you to discuss with the professor how you would describe this project on a CV. And, given that the project went really well by your own description, and since it's your own CV, you have some flexibility in how to describe it. See my suggestions below.
I would agree that you could put it under "Additional Scholarly Activities" or a "Projects" section, but it would likely help you to call out, in the section title, the fact that it was *graduate-level math topics* you were dealing with. Because you're still an undergrad, this has special significance to an admissions committee, in that you've demonstrated competency engaging with graduate-level coursework before even being in grad school. In other words, it speaks to your potential ability to succeed in a grad program.
My suggestion would be to:
1. First, have a separate section on your CV labeled "Graduate-Level Coursework" and describe the reading project from a topics perspective like you would describe any other kind of coursework.
2. Second, create a section that is called "Unpublished Solutions Manuals," or a similar type of phrase, and list the solution manual you created in the citation format used in your field.
This way, you get credit for both the coursework and the work product (the manual) you produced.
Upvotes: 2 [selected_answer]
---
2021/07/28 | nb_tokens: 1,373 | text_size: 5,528
<issue_start>username_0: First time here - I hope I do well asking this question. I didn't find any existing answer.
This is my situation: I presented my research at a colloquium last June. Now I want to publish it as an article. Is it OK if I send an abstract to several journals at the same time, simply asking if they would be interested in the full article?
I mean, I guess it would greatly accelerate the process of getting my paper published if I didn't have to wait weeks or months only to be refused because my paper is not interesting to them...
I'm new in the academic world, so I don't know if it's unethical.
What do you think?<issue_comment>username_1: **I recommend not to do this.** If you think that a journal might be a good fit for your paper send the whole paper and if you think that it is not, do not send them the paper at all. Of course the editors might think otherwise but I think that such a decision can hardly be reached by looking at the abstract only. Whether the paper fits does not only depend on the topic but also whether it is written with the intended audience in mind.
Another aspect is that editors tend to be rather busy and what you propose does not really fit into the [usual workflow of a journal](https://academia.stackexchange.com/a/55666/45543).
**Edit** based on comments by [<NAME>](https://academia.stackexchange.com/users/33210/richard-erickson) and [Matt](https://academia.stackexchange.com/users/15762/matt): the above does not necessarily apply to all journals, since some encourage presubmission inquiries. There is also an [editorial by <NAME>](https://link.springer.com/article/10.1007/s10964-019-01008-z#article-info) which deems these inquiries potentially problematic and unnecessary.
Upvotes: 5 <issue_comment>username_2: You are not stating your discipline, which may make a difference here. In applied computer science, this would be utterly pointless - if you get any answer at all to such a request, it would be *"we need to review the full article before we can say if it can be published"*.
The only corner case where it may be useful to ask the Editor-in-Chief prior to submission is if you are unsure whether your submission is in the scope of the journal (most likely because you are doing interdisciplinary work), and even then the answer is often a lukewarm *"depends on how strong the contribution of your work in < area of the journal > is"*.
Upvotes: 4 <issue_comment>username_3: What you want to do is usually called a *pre-submission inquiry.*
It is somewhat common for high-level journals in some fields, where a major hurdle is convincing the editor that your paper is relevant enough or on-topic for the journal, before it enters peer review.
The outcome of pre-submission inquiries is not binding, but it may give you a better idea of your chances of getting into review.
Some journals even require pre-submission inquiries for certain paper types (such as review or method papers).
However, in other fields it is unheard of.
Some journals explicitly state whether they accept pre-submission inquiries ([example](https://www.nature.com/nature/for-authors/presub)).
If you are in a field where pre-submission inquiries are a thing and time is critical for you for some reason, you might consider this.
The major advantage of pre-submission inquiries is that you can send them to multiple journals at once, whereas you can only properly submit your paper to one journal.
(I list some [further advantages here](https://academia.stackexchange.com/a/159293/7734), but I don’t think they apply to you.)
If on the other hand pre-submission inquiries are not normal in your field, you’ll probably just waste everybody’s time and annoy the editors, who are unlikely to give you a constructive response.
Upvotes: 4 <issue_comment>username_4: Follow the instructions each journal provides to authors about how and what to submit.
Being a newcomer to academia, you should assume that your ideas for how to improve the peer review system, however logical and efficient they might seem to you, are not going to be an improvement over the current system.
Upvotes: 3 <issue_comment>username_5: The process, on your part, involves doing your homework toward figuring out if a manuscript would be a viable candidate for any journal you send it to before submission.
Read the list of articles in the last few years, and if any seem to be in the area of your work, go read the papers and see if they're along the lines of what you're doing. Look at where the papers you cite in your work are published. Look at where the leaders in your field publish their papers. If you need the help of a colleague to do this, get that help.
I appreciate your wish to speed up your process, but your idea is off base. You should *never* send any portion of a document to more than one journal. You're asking for confusion and problems, and you might even find your abstract published in isolation if all goes wrong. Also, you're passing along a job that you need to do to others.
Upvotes: 3 <issue_comment>username_6: You are asking the wrong question. Get someone from your university involved to help you getting your work published. Either some professor you know or a person from the colloquium audience.
They might help you to improve your paper in the first step, like a internal review. Then they might name you a suitable journal.
Overall this will help the quality of your paper and increase the success rate.
Good luck!
Upvotes: 2
---
2021/07/28 | nb_tokens: 1,041 | text_size: 4,545
<issue_start>username_0: I have been working as a postdoc with my PhD advisor since June 2020. Since March 2021, my funding has come from an industrial project X that is very new to me. I had never worked with the simulation technique before. I spent March and April installing the software, trying out basic examples, and understanding the theoretical foundation.
Since May, I have not worked on project X. I have been working on my projects from last year and on other collaborative work, which has resulted in the submission of one first-author paper and two co-authored (2nd-author) papers. I am also finishing the first draft of another first-author paper.
In addition, I have been feeling homesick and unmotivated for the past 6 months. I have barely worked 20 hours/week for the past 3 months.
I have had meetings with my advisor and the industrial partners every month since March, and I have been presenting the theory or just some basic material. I am yet to do actual work on the project. I was planning to complete the majority of the work by the end of August. Now, I am totally at a loss.
I am afraid I am going to be fired. How do I get out of this? My contract ends in June next year and I have nowhere else to go this year if I get fired!
What can I say to my advisor and the industrial partners? Should I tell them the truth during the next meeting (to be held next Wednesday) that I have not worked on the project for the past 3 months?<issue_comment>username_1: ### Talk with your supervisor first.
Sure, you're behind schedule because of working an inadequate number of hours over an extended period of time, but you know what won't help? Covering it up and pretending everything's fine until the deadline comes and there's nothing to show for it.
Come clean, talk to your academic supervisor, and work out a plan for the two of you to meet the requirements of your industry partner in time to deliver something to them. I imagine that one of the steps of this process would be to increase your hours up to the equivalent of full-time work!
Upvotes: 3 <issue_comment>username_2: I would just start working, and at the next meeting honestly present what you have done (installing the software, trying out basic examples, and understanding the theoretical foundation counts toward that, though your partners would probably expect a bit more) and what the next stage is (I hope you haven't made the mistake of claiming that something you haven't touched yet is ready and functional; that would really screw things up). You don't need to state explicitly that you didn't touch the project for 3 months, or to explain all the reasons *why* things are going slowly unless somebody asks directly, but you certainly need to give your partners a very clear idea of where you stand now, what has already been accomplished, and what difficulties you are currently facing in the process - and that is something that should normally be done during every meeting from now on.
If you can get something working by the time of the next meeting and present it, it will surely allow you to avoid many awkward questions. It doesn't matter whether it is big or small, just make sure that it works and goes beyond "general theory and basic stuff" you talked about all the time before.
How much to disclose to your advisor beyond this on your own accord depends on your relationship with him/her. Just play it by ear and be reasonably honest (meaning "always say the truth, but, perhaps, not the whole truth"). If you are new to simulations, your advisor should be aware of that, so you'll get some "discount" based on this fact. In general, normal people understand that hiccups and delays may occur now and then for whatever reasons and have learned to tolerate them. What they usually don't tolerate is either complete stalls, or false progress reports.
Setting up a work plan with your advisor, as username_1 suggested, is a great idea regardless of whether you are falling behind or not, so you can just approach him/her with "Here is where I stand now; the things go slower than I expected, so I got a feeling that I need a clear plan and some of your help to meet the deadlines; what can we do?". I believe that your advisor is more interested in having the project completed than in finding someone to blame for its failure, so he or she would spend most time discussing with you how to get things done rather than why they haven't been done 3 months ago in that case.
Good luck with straightening things out :-)
Upvotes: 2
---
2021/07/28 | nb_tokens: 305 | text_size: 1,332
<issue_start>username_0: I am preparing a grant application, for a project in physics (experimental optics).
So, my question is: how much would it be reasonable to plan for "publication expenses" in the budget, in one year?
I plan to publish one article in one year.<issue_comment>username_1: Well, the easy answer is to look at the possible journals you would publish in and see what they typically charge.
But you have to be careful: I've found some funding agencies won't fund publication charges (either directly, or because that funding comes from a separate agreement with a university). The funder may also cap how much can be charged to each line item (like publication charges).
As with all funding questions, talk to your university, as they have people who specialize in this and know how to deal with each funding body (every funder seems to want to do things slightly differently). Also talk to other people in your department (they might let you see their past grant applications) to get an idea of what you can and cannot charge.
Upvotes: 3 <issue_comment>username_2: In the year 2021, you should budget about 2000 EUR or USD per publication in your grant application.
For certain funding agencies, you should expect to not actually get the money if you receive the grant.
Upvotes: 3 [selected_answer]
---
2021/07/28 | nb_tokens: 772 | text_size: 3,088
<issue_start>username_0: What I understand is,
* an adjunct professor is a part-time teacher
* a professor emeritus is a part-time teacher who has gone past his full-time (tenured) period, but either the university didn't let him go, or he didn't want to retire
Is my understanding correct?
If YES, I am confused, because I found some adjunct teachers serving as the dean of a faculty. How can an adjunct teacher become a faculty dean?<issue_comment>username_1: An adjunct is someone without a continuing contract who teaches courses with only a contract for that course (or a few courses). They are "temp" employees and often get only minimal benefits, such as contributions to retirement (US). Pay is normally abysmal.
A professor emeritus may teach or not. It is an honorific often/normally given to someone who retires from the profession in good standing. In some places I think it is automatic, but in some it is specifically granted.
A person might do both, teaching on an adjunct contract after having retired (and given up tenure). Some of these may be paid more than the normal adjunct.
In the case of the dean, it was probably just that the dean position was achieved in the normal way (usually a tenured professor advances) but had no teaching duties. If they wanted to also teach a course they might be given an adjunct contract for it (probably paid, but possibly not). It wasn't that the adjunct got to be dean, it was the other way round.
I leave out the very exceptional case where a superstar is an adjunct and is later appointed dean. I know of no instance of that.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Both of your definitions are too restrictive.
Even in the US, the term "adjunct professor" is used in at least two ways. The older meaning was for an extra appointment to a department for someone employed in another department or another institution. This meaning is still in use at some institutions, with "Adjunct Associate Professor" reserved for those with a regular position elsewhere. Such a position might involve no classroom teaching and no pay, for example, and just be there to allow this person to advise graduate students.
The newer meaning is a euphemism for part-time instructor. It is similar to calling a worker at a big box store an associate instead of a sales clerk. Words shift over time.
As to emeritus professors, there is no requirement that an emeritus professor teach, although many do. The rank of emeritus professor is somewhat formal, given to some retired professors, but it can be associated with certain rights. For example, the right to apply for a research grant, get an office, and if very lucky, a good parking sticker.
You need to know how these ranks are defined at a specific institution to make sense of what is happening. The dean might well have been a full professor at another institution and an unpaid adjunct full professor at the current one, then been hired as dean. Or not. It is hard to say. I would look at this person's CV to get a sense of what they have done over the years.
Upvotes: 4
---
2021/07/28
<issue_start>username_0: This may be a weird question, but I often wonder why the default way of writing about a given methodology is describing the method as if it "fell out of the blue". Admittedly, not all research work is written like this, and perhaps this is not an issue for very experienced researchers/practitioners, but I believe it would be much more insightful if authors wrote out the steps that led to the specific decisions when designing a new method. So, instead of **describing** a new method, the author would write the problem and motivation (this is usually done), and then elaborate on the **thought process** that led to the final design choices. I don't mean hand-holding on basic stuff, but describing things like "we wondered if there was a way to solve X. Y is a popular method to achieve Z, a property required when solving X for the reason K. However, Y needs to be tweaked in the following manner so that it also asymptotically satisfies W" or "we tried this way of solving it, but it did not work due to the following reasons [insert reasons], so we decided to try this instead". Was a certain property coincidentally satisfied by your design choice, or did you reverse-engineer a method that satisfies that property?
This may seem like a childish idea and most people probably don't have the time or interest in such a writing style (or may even feel insulted by overly detailed descriptions). However, I believe it would be a much more valuable contribution to science because:
1. Researchers would learn from each other different ways of thinking about a research problem and the strategies to solve it
2. Researchers interested in building on the presented work would know which ideas did not work (preventing them from pursuing dead ends), and if they know better ways of solving a particular sub-problem they could easily improve the method.
What are your thoughts on this?<issue_comment>username_1: Thought processes are messy.
If you wrote out the thought process behind the typical papers I contribute to, you'd have to distill hours of weekly meetings, circular avenues where the same idea comes up 2 or 3 or 2 dozen different times before it gets incorporated definitively, endless iterations of experimental design, dead ends and failed experiments where everything starts over, side projects that unexpectedly inform a central one, shower thoughts, and pub inspiration.
In a good writer's hands maybe it could turn into a novel, but it wouldn't be the most direct way to explain the main findings of a work and contextualize those results within the literature. As <NAME> comments, history of science books often do get into the "process" behind discovery and they can be a great read (Ruse's The Darwinian Revolution is a personal favorite), but it's not reasonable to expect every scientist to go into this level of detail for every paper, nor reasonable to expect their audience to invest the time to read it.
I do see "thought process" content in many papers, but at that point it is usually made into a much simpler, linear story to avoid all the meandering.
Upvotes: 6 <issue_comment>username_2: In papers from my field (chemistry/materials science), it is actually quite common to do exactly what you suggest. However, you will not find anything of that in the typical methods/experimental sections because those are reserved for a description of **what** was done. The **why** is part of the discussion part of papers where authors explain what the underlying reasons are for their results, but also should justify their methodology if it is not obvious. Sometimes, this involves giving short mentions of attempts that failed to give the desired outcome.
Upvotes: 4 <issue_comment>username_3: My answer is from a **mathematics** perspective.
I like the analogy of exploring an alien landscape for doing research in mathematics. In this analogy, the ideal mathematical paper reports on having found some astonishing landmark together with useful instructions of how to get there.
Good instructions for how to find a place will look very, very different than the journal of the first explorer to get there. In the latter, you'd have stuff like
*I was trying to reach the summit of that mountain, but at some point I found myself separated from the summit by a deep gorge. But when I looked around, I spotted that serene lake between the tree tops. So I climbed back down and tried to make my way to the lake. Then I caught malaria and walked circles for a while hallucinating. After recovering, I made a lucky guess and stumbled upon the lake again, and it really was very beautiful.*
But when we are actually dealing with math research, it probably makes much less sense. Reaching the point where you understand an idea well enough to even tell a fellow researcher about it can be hard work. There is "For a few months I thought about whether X could be a good way to attack my Y problem. I don't remember why I ever believed that this might have worked out". There is "one morning I woke up, and under the shower it suddenly all made sense". None of these are particularly helpful for a reader.
Upvotes: 5 <issue_comment>username_4: Also from a **mathematics** perspective.
I have been working on a problem for two years now, and recently, in the process, discovered another (in my opinion) nice connection/result. I wrote a paper about it (under review). I also felt it might be helpful for a potential reader, in order to better evaluate and understand the results, to give away some of the history and thought processes involved.
However, from a logical point of view, it seemed more natural to me to present it the other way around: first present the discovered result and after that the application to the original problem that brought me there. Nevertheless, I gave a "personal story" in the introduction, I wrote something like this:
>
> Now, maybe it is best if I give a little bit of a personal story about
> the results presented here, and that they are, in some sense, the
> result of two strands of thought.
>
>
> [... following approximately one
> page of text, describing a collaboration, where the original problem
> came from and how it led to the present results...]
>
>
> As is most often
> the case, and to have a clean separation between a more
> group-theoretical part and a more automata-theoretical part, the
> presentation does not follow the order of discovery in this sense.
>
>
>
After that, I continued to present the results. I hope the reviewers like my "historical account" ;)
Upvotes: 1 <issue_comment>username_5: When you read a fiction book, you never get the author's thought process on how he designed the book. You may get the narrator's thought process in some books, but not the author's. You only get the final output.
When you watch a movie, you don't have the thought process of the director included in the movie. You only get the final output. That would be dumb in many cases, as it would distract from the movie.
A research paper is the same: the final output of a research process. A finely crafted piece of science whose goal is to convey information to fellow researchers in the most efficient and concise way.
If you want an example of reading about the thought process of some researcher, you can read *Birth of a Theorem* by the mathematician and Fields Medalist <NAME>.
Upvotes: 3 <issue_comment>username_6: I am not sure that we really have access to our actual thought processes behind our research, just the post-hoc internal explanation that we remember. It would require us to actively analyse and record our thought processes as we go along, and who has the time for that? This may just be me, but if I am working on something, my ideas and assumptions change and evolve as I perform experiments (partly because the next experiments are shaped by the results of the previous ones). I don't think I am able to roll back to the version of me at the start of the project and actually evaluate what I thought at the time. I think it is a mild case of the incommensurability of Thomas Kuhn's paradigms (but on a smaller scale).
Generally when I write a paper, I try to create a logical progression of ideas to explain what I found out during the course of the study. This is very rarely the same as the historical progression of ideas, with all of its blind alleys and "my brain hurts" moments (again that may be just me).
We are all prone to post-hoc rationalisation of events, it is human nature. I strongly recommend the film [Rashomon](https://en.wikipedia.org/wiki/Rashomon) to anybody that hasn't already seen it. I doubt the [Rashomon effect](https://en.wikipedia.org/wiki/Rashomon_effect) named after it only applies to recollection of external events, or to situations involving personal gain/loss. I am not convinced that we are really objectively that aware of our thought processes and motivations.
Upvotes: 3 <issue_comment>username_7: I can speak to why this is the case in mathematics; I suspect this bled over into the physical sciences as well but that's just a hunch.
When I was an undergrad I tried to read Rudin's *Principles of Mathematical Analysis* and at first I was totally perplexed -- seemingly out of the blue, he'd pull out the value some constant had to be in order to make the theorem go.
I felt very stupid and asked my dad about this.
He explained to me that this style of exposition is deeply ingrained in the culture of mathematics, going at least back to <NAME>, who famously said, **"A good building should not show its scaffolding when completed."**
He then gave me some strategies for reading, understanding, and writing proofs, and part of that involves reading and writing in an order other than the order in which it will be presented.
None of my professors gave me, or anyone else, this advice; we were just expected to figure it out or fail.
Looking back, I think this was basically a form of gatekeeping.
There have been mathematical works that are presented in a different style, with much more background and exposition.
I think one of the most notable examples is <NAME>'s work on random number generation.
The [PCG RNG paper](https://www.pcg-random.org/pdf/hmc-cs-2014-0905.pdf) did not appear in a peer-reviewed journal for some time because the ones she sent it to objected to the style of her writing, which presents much more of her thought process than would be customary for a math paper.
You can read more about the history of the PCG paper from O'Neill herself [here](https://www.pcg-random.org/posts/history-of-the-pcg-paper.html).
The algorithm has since been included in several major software packages including numpy.
Personally, I find the PCG paper to be a great read, and I think the fact that she had such a hard time publishing it is a damning indictment of the culture of mathematics.
So, to summarize, the reason more papers aren't written in this style is because (1) it's an ingrained habit that comes down to the present day through Gauss and other early 19th century mathematicians, and now (2) the consequences of trying to change the culture around this include doing a lot of unrewarding work and getting rejected repeatedly.
Upvotes: 4 <issue_comment>username_8: Different people follow extremely different thought processes. Thus, it is not, in general, extremely productive to force all people to arrive at a certain conclusion in the same manner. A closely related concept is described quite well by physicist <NAME> in [this video on youtube](https://youtu.be/P1ww1IXRfTA)(section beginning at 55:01) (from a BBC special "Fun to Imagine").
Consequently, presenting with a focus upon the important conclusion is a strong strategy. If it is necessary, one can subsequently build "walls" around the core idea being presented by describing other attempts to solve the problem that were eventually unsuccessful.
Upvotes: 2 <issue_comment>username_9: (I’m coming mainly from mathematics, with some crossover experience in theoretical CS.)
**Most good academic writing does include a bit of “thought-process” explanation.** The style you describe as “the default way” — writing as though everything “fell out of the blue” — does exist, but it’s not the default in fields I know (though it was more common a generation ago, in some areas of pure maths). Papers with no motivating explanation at all are seen as unpleasantly dry; it’s not uncommon for referees to ask for better motivating exposition.
But it’s usually just a little: **Too much “thought-process” explanation is not as helpful to readers as you seem to expect.** I’ve read papers that tried hard to explain a deep and subtle thought process, but they came across as just impenetrable waffling and left me wishing the authors had stuck to the facts. The main problem with such attempts is described well by <NAME>: [Abstraction, intuition, and the “monad tutorial fallacy”](https://byorgey.wordpress.com/2009/01/12/abstraction-intuition-and-the-monad-tutorial-fallacy/)
So in sum, **good authors are certainly conscious of this aspect**, and usually choose to include some thought-process explanation, but not much. If you think more would be better, then be the change you wish to see in your field, and use a bit more in your own writing. But be aware of the reasons why most writers use only a little — remember the parable of [Chesterton’s fence](https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence)!
Upvotes: 2 <issue_comment>username_10: For many research-level results I have obtained, the thought processes involved hundreds of wrong turns, including useless definitions and wrong proofs, and it would be really silly to include all of those. Unfortunately, it is rare to be able to cogently deduce the results without significant trial and error; otherwise, it would not be worthy of research, would it?
However, it is true that in many cases one can come up with a fictitious thought process obtained by splicing together the actually useful ideas that led to the final solution. And that ought to be included in any good exposition of the results. Sadly, there are still page limits for many peer-reviewed publication venues, putting a strain on the desire to give a verbose story of how to come up with the results.
Upvotes: 2 <issue_comment>username_11: Let me give a different perspective, coming from mathematics. Other answers have already explained the reasons why papers generally don't have so much thought process, so I won't rehash them here. Instead, let me contend that giving the full thought process has its own merits, and that more examples of this would be a positive thing.
The most prominent example I know of where the full thought process is given is in Grothendieck's *Pursuing Stacks*. It is hundreds and hundreds of pages long with endless digressions. Nevertheless, it has been influential, and even personally, it has taught me quite a lot per page, though I don't work in this field. In fact, it is by far the most enjoyable mathematical text I have ever tried to read.
Why do I think this is? Briefly, because I think it models discussing math with others much better than textbooks/papers do. Generally speaking, discussing math with others is important for learning because it allows for the freedom of exploring ideas with another mind. Textbooks/papers are important because they give you the shortest path to various goals and prove a lot of important results, which you can study on your own. So I see two needs which papers giving the whole thought process may fill:
1. For people who don't have people they regularly discuss a certain area of math with.
2. For people looking to just enjoy math, not concerned about publishing papers in a certain field.
Finally, let me add that even from a "practical" standpoint, texts like these may have the potential (apart from the actual ideas they may contain) to expose students to the unconstrained nature of doing research, which at least for certain students may be important to building their own tastes.
Upvotes: 1
---
2021/07/28
<issue_start>username_0: Job candidate has seven years of **adjunct** teaching experience, no full-time experience. Proposed statement in a cover letter for a tenure-track position requiring teaching experience: "I have **seven years of experience teaching** XXX to undergraduate and graduate students." Notice that the word "adjunct" doesn't appear in the sentence.
The issue here is wanting to conceal the fact that the candidate has been searching unsuccessfully for a tenure-track position for seven years, which is a common challenge for most new PhDs in today's job market. The question of research vs. teaching isn't at issue here, only how to phrase one's adjunct teaching experience where teaching experience is required.
Which of the following is true, and what evidence is there to support your choice?
(1) Adjuncts **should** state the number of years of their teaching experience because a significant number of years makes them more competitive. Someone who has taught for seven years is far more valuable to the university than a recent PhD graduate with little teaching experience. The former has a better understanding of pedagogy; a portfolio of teaching activities; skills in classroom management, online and in-person teaching, grading, conflict resolution, mentoring, etc.; and broad cross-disciplinary teaching experience. These experiences are of the same quality as those gained from tenure-track teaching.
(2) Adjuncts **should not** state the number of years of their teaching experience in the cover letter because adjunct teaching is not highly valued and drawing attention to it and to the number of years spent as an adjunct without finding a tenure-track position will make the candidate less competitive. The candidate should only state, "I have taught XXX to undergraduate and graduate students" without stating the number of years.<issue_comment>username_1: I don't see a possible way to "hide" how long one has been an adjunct. Surely that time is going to appear on the CV in some fashion; tenure-track hiring committees aren't going to stop at the cover letter.
I think it's best to be honest, but no need to be verbose or apologetic.
The emphasis on teaching versus research is very much going to depend on the institution. At an R1 university in the US, faculty may be hired to teach a bit (maybe 20%-30% time to start, though field-dependent), but the primary hiring criterion for tenure-track positions is going to be research output and potential. If they were only trying to fill a teaching spot they'd be hiring an adjunct instead. It probably matters less how long you've been on the job market as an adjunct and more how you've been able to keep up research over that time. Unfortunately, it may not be possible to keep up a good research program on that schedule, and I think that's going to be a bigger contributor to difficulty finding a position than an implied "oh, this person has been looking for a job too long, they must not be good".
At a primarily undergraduate institution where research is mostly a side-project for faculty and teaching responsibilities are primary (more like 80% teaching), then I think there would be a lot more emphasis placed on teaching experience, especially at a small institution where they are hiring you to be *the* instructor in some area and they're counting on you to stand on your own.
Upvotes: 3 <issue_comment>username_2: I don't think this is really an answer, but only some observations that might help.
First, some adjuncts are highly valued and do it for the love of teaching, not because they need the money and are otherwise career blocked. Some of the best ones I know are highly valued industrial researchers who just want a connection to the university. The money is chump change for some of them, I think. If they wanted a full time faculty position, they would be welcome.
Second, times are hard and the pandemic ain't helping. It was already bad before 2020. People understand that. We go through cycles in this. I finished my degree in very bad times. I was lucky to stay in academia at all, but the position I first took was far below my expectations and the expectations of my professors and peers. But, I gave myself no options and was able to build a career. But it would have been impossible if I'd had to put together a bunch of adjunct gigs to support myself. People generally know how bad it has been and how few opportunities there are compared to the number of graduates.
Third, if you have been doing other things such as industrial research along with adjunct teaching, don't neglect that as a positive aspect of any application.
Fourth, see if you can't exploit some of the contacts you already have at the places you've been teaching. Make it known that you'd like a permanent position.
Fifth, there are a number of very highly respected universities (CMU, Duke, Stanford, ...) that offer a position (this is in CS) called, perhaps, "Professor of the Practice". They are non-tenured, but full time and with long term contracts. They don't require the same research that tenured faculty are expected to do, but respect it when done. Many don't require a doctorate, but most of the folks I know have one.
Last, stress your dedication to teaching (and academia generally). Whether you say "seven years" or "several years", don't lose track of the fact that you can do a quality job.
Upvotes: 2
---
2021/07/28
<issue_start>username_0: I'm preparing a graduate level module for the next semester. With a more senior professor I have co-authored a book chapter that covers much of the material we'll be covering in class, so I'd like to include it. However, the book itself is not yet published. I expect to get editorial comments back before the semester starts, and to have responded to them. So:
Have you ever included draft or forthcoming materials in your course readings? If so, how did students respond to this?
If you haven't, would you? Why/why not?<issue_comment>username_1: I haven't done this myself, but certainly would, even if the material was a bit less "mature" than having been submitted. However, I would also inform the students that this was happening, though not that I had doubts about it. If I had doubts, I'd want to resolve those first.
If I were giving them written drafts or notes, I might even be inclined to give some bonus points for errors found or places where explanations confused the students. Sometimes our best efforts at explanation miss the mark. But students can catch that, perhaps better than expert reviewers who might have the same bias as the author.
I think that for a graduate course this would be extremely natural to do. A bit less for undergraduate, but I wouldn't arbitrarily exclude that.
For a text book in preparation, it is very useful for students to suggest exercises based on the material. It is a different way of thinking that can be as valuable as actually doing exercises. But, again, probably more relevant at the graduate level.
I experienced this a bit on the student end during my own studies. In fact, one of the best math courses I ever had was in my second year of undergraduate study, where the professor gave us material from a book he was working on. It is one of the things that solidified me as a mathematician, actually. But he was clear that this was his "work in progress".
Upvotes: 2 <issue_comment>username_2: I have done so. In my case the main reason was to illustrate how research happens by showing them work in various stages.
Upvotes: 1
---
2021/07/28
<issue_start>username_0: I’m a fourth year undergrad majoring in math at a university in the US. I’ve been working independently on an elementary graph theory problem for a while and I’ve found some nice results. It’s not revolutionary, but based on some research I don’t think anyone has looked at this problem before and it seems interesting. I intend to apply for PhD programs in Theoretical Computer Science. While I do have some research experience with a couple of professors (that should hopefully lead to decent recommendation letters) and a strong academic record in math and computer science, I don’t have any publications. Would I benefit from publishing my graph theory work in an undergraduate research journal, or do only regular journals count? I appreciate any comments, thank you!<issue_comment>username_1: In the US, everything counts. So, yes, it would be a positive thing to do this and list it on your CV if accepted.
Graduate admissions (doctoral) is very broad-based and takes into account many things. What a committee is looking for is clear evidence of the high likelihood of success of admitted students. Doing undergraduate research is a definite plus, and having a publication to show for it is even better.
Doctoral admissions can be very competitive, and few undergraduates have much, if any, research experience. Fewer still can show a publication.
Also, get a letter of recommendation from whoever advises/guides you on the project. Letters are also very important.
As Azor Ahai suggests (I think), a regular journal would be better if you can get accepted. Talk to your advisor about tradeoffs here, including likely time to publication.
Upvotes: 2 <issue_comment>username_2: As someone who has sat on multiple admissions committees at the PhD level, any sort of peer-reviewed publication is viewed highly positively in the decision-making process. That said, undergraduate-focused journals are viewed as having a lower barrier to entry relative to regular (non-UG focused) journals, which is a proxy for the strength of the findings presented in the publication. Depending on your timeline, you could try a regular journal first and if it gets rejected, then try a UG-focused one. If your timeline is tighter for the applications, I would go straight to the UG-focused one.
Upvotes: 3
---
2021/07/28
<issue_start>username_0: After making a presubmission inquiry to a journal with a high impact factor (NEJM), I received a reply that a Research Letter could possibly be considered for publication.
I am not familiar with Letters; my question is: should I try to publish a Letter in a prestigious journal, or should I aim for an ordinary research article in a normally prestigious journal, such as BMJ or similar?
I am a postdoc and I want the most bang for my efforts, but I am not sure whether a Letter in a prestigious journal counts for more than an ordinary research article in a normally prestigious journal.
---
2021/07/29
<issue_start>username_0: A photonics industry (field is physics/electrical engineering) conference has [the following submission guideline](https://spie.org/conferences-and-exhibitions/photonics-west/presenters/abstract-submission-guidelines):
>
> Note: Only original material should be submitted. Commercial papers, papers with no new research/development content, and papers with proprietary restrictions will not be accepted for presentation.
>
>
>
The only other conference I have been to, there was no distinction between presenting unpublished or (recently) published material. I have not yet published results on my current project, but does it break this rule if I publish between abstract submission (due August) and the conference (in January)? If I don't publish before the conference, is it redundant if I try to submit these results to a proper journal after the conference?
Thank you for the help.<issue_comment>username_1: Among the conferences I'm familiar with (in Electrical Engineering), most will require some sort of original material to be presented. The conference you have already presented a paper at seems to be an anomaly rather than the norm. In the current case, and in general, what you are thinking of seems to be very similar to a double submission.
For your first point on publishing between abstract submission and the conference, if both submissions are very similar, this would throw up red flags. Would the organizers find out? I'm not sure, but if both papers are accepted, and they are very similar, then someone will find out in the future and that won't help you.
For your second option on publishing *after* the conference, there are journals that will allow you to submit material previously presented at a conference, with the caveat that this must include original material building on your previous work as well.
Simply put, submitting the same paper (or largely similar papers) to two different places is a no-no, but if the second paper has promising results building on the first, that's good enough. Personally, your second option sounds better to me if this is the case.
Upvotes: 2 [selected_answer]<issue_comment>username_2: If the conference is to publish proceedings or extended abstracts, then there could be copyright issues if substantial portions of the material have been published elsewhere. In such cases, you cannot submit material that the conference would like to publish.
In the other direction, resubmitting substantial portions of material from a conference contribution for publication in a journal might create problems for the journal, as the publisher of the conference proceedings will hold the copyright on that content.
The solution is simply not to contribute to the proceedings, although this is not always possible (or even desirable if the conference is prestigious) when submitting material to some conferences.
Upvotes: -1
---
2021/07/29
<issue_start>username_0: I will graduate from my Master's programme in a week. I did exceptionally well and landed a TA position within the same department. I am truly humbled by the faith that my professors have put in me but I am a little anxious now.
I am 24 years old and I already know that most of the students I will be teaching are older than me. Moreover, I have already met a few of them earlier this year (while I was still a student). That is, they know me from the social and informal context of our student-mixer evening spent together. I am wondering how these two points put together would affect my authority in the classroom. What can I do to make sure that all the students are aware that I am no longer just a student? I am definitely okay with an informal relationship with the students because that is the norm in the department but I also do want to be taken seriously. What should I do?
I am just afraid that the students will feel like I am just one of them. And that is also perhaps okay as long it doesn't undermine my authority.
Thank you in advance for any advice.<issue_comment>username_1: You are 24 years old right now. So, you are not too young to do “everything you want”. You might feel anxiety that is normal when we do something we haven’t done before.
Upvotes: 2 <issue_comment>username_2: I was a graduate student TA when I was 21 and looked 18. I would say I was not too young, and neither are you, but there can be issues. I learned the hard way.
You might want to do something like carrying a briefcase to make it clear you are in charge, and in a different role than the students. This sort of signaling can be gender and location specific so use your imagination: pants with sharp creases; shoulder pads; power ties; briefcases; drop the pitch of your voice like Thatcher did; carry only red pens. By about the second week of classes I swapped out a grungy backpack for a formal looking briefcase. I thudded it on the desk as I came in the room.
You are not obligated to do these things. You can demand to be treated professionally just because you walk in the room and say "I'm your TA" but sometimes it is just easier to use visual clues. Then you have more mental bandwidth to think about the course content and listen to questions.
I used this trick decades later when called to a state finance committee meeting. My bosses told me to talk as little as possible, so I went with a power suit and a bold tie. I probably should have carried a red pen and pretended I had just been grading.
Don't overdo it and do all these things at once. I do not wish to be another Dr. Frankenstein.
Upvotes: 0
|
2021/07/29
| 405
| 1,603
|
<issue_start>username_0: My institute requires us to publish our articles by ISI journals as the first priority, and Scopus as the second option. My question is: Is Web of Science the same as ISI?<issue_comment>username_1: Web of Science is a product. ISI was an organisation that used to produce that product.
The Institute of Scientific Information (ISI) was bought by <NAME> in 1992.
Web of Science is now produced by Clarivate Analytics, which bought the ISI intellectual property off Thomson in 2016.
Thus, in answer to your question, ISI as an organisation no longer exists, but when it did, being indexed by ISI probably meant being in the ISI Master Journal List (or in the Journal Citation Report - JCR), which is a service that is part of the Web of Science website, and is now produced by Clarivate rather than ISI.
Upvotes: 5 [selected_answer]<issue_comment>username_2: *Is Web of Science the same as ISI?*
It depends on what you mean by ISI. If you mean "Institute for Scientific Information," then see the other answers here.
But ISI can also mean "[International Scientific Indexing](https://isindexing.com/)," which might look the same but isn't. A nice discussion can be found [here](https://predatory-publishing.com/when-is-isi-not-isi/), which says "the International Scientific Indexing should be treated with extreme caution and it is probably best just to ignore it as its impact factor has no meaning." (Many predatory journals claim to have an "ISI impact factor" but these are for the International Scientific Indexing, and not for the Web of Science.)
Upvotes: 3
|
2021/07/29
| 603
| 2,417
|
<issue_start>username_0: We are preparing an e-poster and a printed poster for poster presentation at a conference. The organizer defines the dimensions of the poster as follows (everything is prepared in .pptx format and then exported to .pdf):
* poster size in pixel: 1536px x 1080px
* poster size in cm: 54.2cm x 38.1cm
How is this not a contradiction? Both "sizes" have the same aspect ratio but the actual sizes differ. The px-size translates to roughly 40.64cm x 28.56cm which is significantly smaller. A quick Google search revealed that indeed a lot of congress organizers ([like here](https://2020.epa-congress.org/instructions-for-e-poster-presentations-and-e-poster-viewing/) [or here](https://2019.eso-conference.org/2019/scientific-programme/e-poster-guidelines.html)) request the exact same dimensions as stated above and I really wonder why? Isn't this something which will definitely lead to different printed poster sizes or am I missing something?<issue_comment>username_1: Converting from pixels to real-world dimensions needs one more number: the printing resolution.
Seems the conversion between the dimensions you read off is equivalent to "72 dpi", dots per inch. Your software seems to be set to 96 dpi instead. If you change the dpi setting of your graphics software to 72 dpi then the pixels/dimensions in cm will match.
I will say that a poster actually printed at this resolution will possibly look quite ugly. I'd build your poster at a higher resolution and downsample the digital version to comply. That's also a really really tiny poster, at least by standards in my field. Maybe these tiny posters are normal elsewhere though?
Upvotes: 4 [selected_answer]<issue_comment>username_2: To add to @BryanKrause’s excellent answer, the conference organizers are being somewhat sloppy in using the unit size “pixel”, which has no standard physical size, when they probably mean “[point](https://en.wikipedia.org/wiki/Point_(typography))” instead. A point in the printing world, when used as a unit, always measures exactly 1/72th of an inch as far as I’m aware.
As far as the discussion about printing resolution is concerned, you can print at whatever resolution suits your convenience (to the extent enabled by your hardware and software), as long as the physical poster dimensions are the required ones. So the issue of the printing quality doesn’t sound like a real concern here.
Upvotes: 3
|
2021/07/29
| 867
| 3,502
|
<issue_start>username_0: I have written a math paper myself, and wonder whether I have errors or not
(I have not found any, but maybe there are).
Now I wish to send it to a math journal, and simultaneously send it to several experts in the field / previous mentors or colleagues,
since the process of refereeing may take a few months and I am curious now if my paper is correct or not (hopefully, the experts / mentors could inform me about an error in a few hours or days, at most).
Is it okay to send it both to a journal and experts / mentors?
**Edit:** Thank you very much for your answers/comments! I marked each as useful, but could not decide which I like best.<issue_comment>username_1: Pick the most appropriate of your mentors, write to ask if they are willing to look it over for you. Don't send it to "experts" you don't know, and don't send it to lots of people at once.
Wait until you hear back to submit to a journal.
Upvotes: 5 <issue_comment>username_2: You can do either, but don't do both simultaneously, and follow the [advice of username_1](https://academia.stackexchange.com/a/172757/75368).
If you need to send it to people you don't know, get a mentor/advisor/professor to ask on your behalf. A cold email will be ignored, and if you include a paper it might be deleted without being opened. But a mail from a colleague is harder to ignore and likely to be trusted.
Note that sending it to a journal is, in effect, sending it to a panel of experts - the reviewers.
Upvotes: 3 <issue_comment>username_3: This is probably OK. Journals generally have policies against dual submission ([example policy from Wiley](https://authorservices.wiley.com/ethics-guidelines/index.html#16)), but are also generally OK - even happy to encourage - the author sharing preprints.
The only caveat is if the mentors/experts find a fatal flaw in your manuscript, you will probably have to withdraw it from the journal.
Upvotes: 2 <issue_comment>username_4: Reading a paper and looking for subtle errors takes considerable time.
It is hard to convince people to read your paper, even if they are your coworkers. Convincing a stranger is even harder.
You might consider implementing parts of the paper and checking statements by computer (how difficult this is depends on your field). This might yield more value for the time spent.
Upvotes: 1 <issue_comment>username_5: If you are confident in the work, writing, and meaning of the results, but just looking for someone to double check math, I would submit to the journal at the same time as asking colleagues (but perhaps not people you don't know).
The reasoning is, I consider these situations:
* it's possible/likely there are no problems, and you just wait for a month (or more) to have a colleague say this.
* you do have a mistake but it's not substantial and reviewers end up missing it. While they were reviewing, you had someone take a look who found it, and you can fix it before publication.
* you do have a big mistake and reviewers end up missing it. You have enough time to withdraw.
* reviewers end up finding the same mistake, and you had been working on correcting it, so when you have to revise you have already tackled a lot of the work.
* if there are 3 reviewers, it's possible/probable that they don't give the same feedback or requested changes, so adding a "4th reviewer", as in a colleague you know, is not much different from getting different feedback and having to balance what to fix/correct.
Upvotes: 1
|
2021/07/29
| 1,210
| 4,830
|
<issue_start>username_0: I am applying for a job in a public university in the United Kingdom. The job advertisement is for a Lecturer position. I already received an interview appointment.
However, I am considering negotiating for a Senior Lecturer position, as I am currently employed by a university in another country at the highest end of the Lecturer grade, and the overall annual salary of academics in that country is higher in comparison with how academics are paid in the United Kingdom.
I am not sure how to go about this. Should I mention this during an interview? Or should I mention this once I receive an offer? Or is there no way to negotiate at all? I am planning to leave my current employment merely because the job I am applying for seems to suit me better.<issue_comment>username_1: It is unlikely that you can negotiate for a senior position; however, negotiating for grade is different from negotiating for salary. It is possible to be appointed as a lecturer at salary points higher than the advertised ones, and these can be used in the negotiations, particularly when hiring someone from overseas or from industry, where salary levels are not consistent with UK ones.
A lecturer position is in a "Band" (actually band 8), and those bands have salary points; there is overlap between the upper "points" of the lecturer band and the lower "points" of the senior band (band 9).
Being senior is not just about money, it is also about duties and responsibilities and they have a vacancy for the duties and responsibilities for a lecturer, and that is why it has been advertised as such.
Are you negotiating for different duties or more money? You need to be clear on that.
---
Some useful references are:
* <https://www.discoverphds.com/advice/after/lecturer-and-professor-salaries>
* [In which countries are academic salaries published?](https://academia.stackexchange.com/questions/26487/in-which-countries-are-academic-salaries-published)
* [Within the UK, how are roles corresponding to academic grades defined?](https://academia.stackexchange.com/questions/58495/within-the-uk-how-are-roles-corresponding-to-academic-grades-defined)
Upvotes: 1 <issue_comment>username_2: It can't do much harm to try to negotiate - deciding to appoint someone to a lectureship is a difficult decision, and if they have picked you, it is unlikely they will change their mind if you try to negotiate better conditions.
In particular, the Lecturer/Senior Lecturer split is generally competency based, rather than need based - that is, people are promoted to senior lecturer when they meet a certain set of performance criteria, rather than when someone is needed to do an SL job.
So, at my institution, to be promoted you must demonstrate you are doing the job of a Senior Lecturer, and that is defined by meeting 6 of the following 10 criteria:
Research:
1. **Outputs**: At least 1 output regarded as "internationally leading" (think a paper in a top journal in your field, or a generalist glam-journal) in every 2-year rolling period.
2. **Income**: Average income over a multi-year period that at least matches that of the average senior lecturer in a Russell Group university for your field.
3. **Impact**: Deliver research that forms a viable Research Excellence Framework impact case study, or potential future impact case study - patents, changing government policy, making an economic or social difference.
Teaching:
1. **High Quality Teaching Practice**: Evidence that your teaching is good: outcome data, teaching evaluations, etc.
2. **Curriculum enhancement**: Review and redesign an existing complete program of study, or design a new one (or a significant number of individual modules).
3. **Improving teaching practice**: Evidence of pedagogical scholarship, research and publication.
Leadership:
1. **Academic Citizenship**: SL-level admin duties like admissions tutor, director of Equality, Diversity and Inclusion, chair of the academic misconduct committee, etc.
2. **High Quality Management**: Not just undertaking admin for the department, but managing others to achieve the aims of the department, faculty and university.
3. **Change and Innovation**: Make admin structures or functions better.
**Professional Standing and wider engagement**: Basically show that your field (or the public) knows who you are - international conference invites, journal editorships, external examining, etc.
I suspect that if you can show that you meet 6 of these (and are prepared to continue with such duties), or whatever the equivalent criteria are at the university that is hiring you, then you could argue that you should go in at SL level. The worst that can happen is they can say no.
Interestingly, we pretty much never advertise for lecturer level only; we generally advertise Lecturer or Senior Lecturer.
Upvotes: 0
|
2021/07/29
| 1,083
| 4,413
|
<issue_start>username_0: In mathematical and other theoretical fields, there is not much need for expensive equipment, so grant money is mostly used to fund students and research assistants. But in some fields, good students and assistants are very hard to find (for example, in computer science, the salary in industry is so high that it is nearly impossible to beat with grant money). In such cases, apparently the most efficient way to use the money is to reduce the teaching load of the principal investigators (e.g. by paying external teachers to teach the basic courses), so that they can spend more time on their research. Is there any funding scheme that allows the money to be used in such a way?<issue_comment>username_1: I'm sure that this varies from country to country, but I think a lot of grants (and universities) will let a researcher "buy out" of some sorts of teaching using grant funds. Fewer will let you buy out of all teaching, since that would defeat one of the main missions of a university. But, buying out of an undergraduate class might be possible.
I'd guess that long term buy outs from teaching would be very rare, only for superstars, and not applicable to doctoral advising, or maybe even advanced specialty courses.
And, in some places, doctoral advising counts against the teaching load also, but not one student per class.
Also note that it isn't just up to the grant provider. The university also has to judge that this is a worthwhile tradeoff.
---
Here is a policy at one university:
<https://www.umass.edu/sbs/faculty/professional-resources/faculty-resources/course-reduction-policy>
And here is a program that doesn't rule out course reduction necessarily:
<https://www.nsf.gov/funding/pgm_summ.jsp?pims_id=505892>
---
Note, also, that there are other reasons a university might grant a reduced load than buyouts.
Upvotes: 4 [selected_answer]<issue_comment>username_2: It is entirely possible but (as you can imagine) such grants are not so easy to get. Basically, it depends on the rules of the grant.
There are some “non-traditional” grants that are designed for this: basically the amount of the grant pays for buyouts and the scholar must produce some report at the end of the grant period. Some think tanks contract academics in this way.
It is not that uncommon for institutes to have core members with no defined teaching duties: they can teach courses (usually graduate courses) when they want on the topic of their choices. In physics the Perimeter Institute in Waterloo operates this way: its core members are affiliated with the university (and thus can supervise students) but do not operate as regular faculty members (although this situation is not strictly a grant, but a perk of appointment there).
There are also exceptional circumstances: endowed chairs for instance might come with full or partial teaching buyouts.
It goes without saying that such grants or positions are highly competitive.
Upvotes: 2 <issue_comment>username_3: Indeed this depends on the country. The answer is mostly that Europe has grants like this moreso than North America in my experience. I don't know about other continents so I will focus on what I know below:
In the USA, the way grants from the NSF often work is through providing summer salary. I haven't seen much there.
On the other hand, UK Research and Innovation grants usually pay for a certain percentage of the academic staff member's salary in order to pay for their time to execute the grant. Typically this number is somewhere around 20% for a standard grant for a permanent faculty member (so a day a week reserved for the grant). One prestigious style of UK grant is the fellowship. In the UK, fellowships will fund somewhere between 75-100% of the researcher's stipend, with some having explicit maximum requirements of time used off the grant (e.g., only x hours a week can be used on academic activities such as teaching and admin off the grant).
European Research Council Starting / Consolidator grants often fund sizeable portions of the PI's time as well. Advanced Grants less so as they expect it is harder for the PI to get out of certain obligations as a senior member of a department.
Upvotes: 2 <issue_comment>username_4: ERC grants allow to apply for funding for the PI him-/herself, and thus to buy out of teaching (of course, this requires the university to agree beforehand).
Upvotes: 2
|
2021/07/29
| 1,151
| 4,860
|
<issue_start>username_0: Basically the title, I reported cheating and he said, to summarize, "I don't have the time, and cheaters only harm themselves anyway, and they'll find out later down the road."
Anyway, I agree with him, I'm pretty busy and I don't feel like wasting my time reporting cheaters and compiling proof and documents etc., so I'm inclined not to care as well.
I'm just curious though if you were me would you do something or just turn a blind eye?<issue_comment>username_1: This, in a nutshell, comes down to who you actually want to be as a person and in life. Do you want to make your life easy, even if that means cutting corners? Or do you want to be a person who at the end of the day can say "I did the right thing"?
In other words, you're unlikely going to find answers on this forum for your specific question. Of course we all agree that cheating is wrong, as is driving substantially over the speed limit, or letting the cashier give you a $20 bill when it should have been a $10 bill. In the end, how you deal with these questions is who you are, and it will also be how others see you.
Upvotes: 3 <issue_comment>username_2: You reported it to the appropriate authority. Unless you have some authority or interest of your own, you've acted ethically even if you do no more.
Doing more has risks. You will use time and energy. Your facts and motives may be questioned. The professor won't be pleased. You may be ostracized. Retaliation isn't unthinkable.
Remember this incident and think about it if you end up in a position where you may receive this kind of information.
Upvotes: 4 [selected_answer]<issue_comment>username_3: ### Follow university policy.
Your university almost definitely has a policy on how cheaters are to be handled. It might be that cheaters are simply awarded a grade of 0 for that assignment, that they automatically fail that class, or that they're expelled from the degree entirely. There might be some degree of leniency for first-time offenders, or there might not be. It might be left to the discretion of the professor, or there might be an academic dishonesty tribunal that is responsible for ensuring that all academic dishonesty cases receive due process.
In any case, you should follow the rules and procedures given by your university to the letter; if they contradict what your professor has told you, then you should inform them so that they won't be surprised, but then follow the procedures anyway.
Upvotes: 0 <issue_comment>username_4: I would disagree with your professor. Cheaters don't just harm themselves. They harm other students. Consider that there are a finite number of seats for university admission, and a finite number of scholarships and jobs which are decided upon by merit by way of grades.
Cheaters essentially jump ahead in rank above others that worked hard for their grades. This pushes the honest students further down the rank. Arguably, the worst-hit students are those honest students at the bottom of the rank who are completely displaced from their opportunities. Those students (even though we and they don't know which) should be reasonably protected from losing opportunities they rightfully earned.
There are many other issues with cheating including changing the moral norm. Consider that cheating by faking a foul in basketball is just part of the game. Does that mean it's ok? Or consider that some couples cheat on each other thinking that it's ok as long as they don't get caught. I've met many people who think this way nowadays. Personally, I'd like children to grow up in a society where this does not become the norm.
An increasing minority of students feel that cheating in academia is ok if you don't get caught. They view academia as a game with referees (instructors) that determine fairness. If the referee doesn't do anything, the game will not be fair. I fully understand that we can't catch all cheaters but we should try to do what we can to minimize the problem.
I would say that it makes the job of an instructor extra burdensome and in many cases untenable when the administration is not on board as well. Maybe your professor is in this situation but doesn't explicitly state it. It's been rather difficult to deal with at my school. I do what I can and I know other instructors that do as well, however, I don't see much support from administration. I see the problem getting much worse.
Access to information is continuing to get easier as resources and technology evolve. I don't see how universities can solve the cheating problem or slow its increasing use without instituting some standard of academic integrity through proctoring of some kind so that we can verify identities and work. I'm not sure how this can be done or if it's even possible.
Just my opinion. I hope it worked out.
Upvotes: 1
|
2021/07/30
| 769
| 3,339
|
<issue_start>username_0: I met a Chinese colleague in a conference, where we identified a research topic of common interest. So, we agreed on collaborating on a project and started exchanging some emails about the project, where we came up with a clearly defined project and preliminary results (including R codes and statistical models). This happened in the course of around 3 months after he went back to his country.
Then, he started taking a long time to reply to my emails, claiming he had internet access issues, until one day (at around 6 months after we met) he stopped replying. So, I just thought he was not interested in the paper anymore. However, around 2 years later, I saw the paper published in a top biostatistical journal. At first, I was dumbfounded to see it was exactly the same idea, same formulation, just with the remaining bits and pieces completed. Two more authors were included in that paper. I emailed my colleague to "check if he wanted to continue our collaboration" but got no reply anymore.
I went on a rollercoaster of emotions, thinking of emailing the Editors (I have emails and drafts and code), but then I decided to leave it to Higher Powers in Life.
**My question** is, in general, what is the Ethical thing to do when you find your colleague publishes a paper on something you contributed, not only to formulate, but also to develop.<issue_comment>username_1: You are facing two different possibilities.
One option is where you put the incident aside and move on.
The other option is relevant when you have evidence that your work was used to generate the publication without an appropriate acknowledgement. The three legal levels of proof of misconduct are presented [at this link](https://www.hg.org/legal-articles/different-standards-of-proof-6363). Start at the lowest level (preponderance of evidence). You might for example be able to determine, based on your previous notes, that preliminary results or source code from your email exchanges appear in the publication. If so, you can decide to prepare to contact the journal editor and make your case to be included in an errata, listed for example as co-author. If you take this choice, you may want to present your case first fully to a trusted colleague at a higher (more senior) level to assure that you remain objective in your statements and that your arguments are logically sound, doing so before you write to the editor of the journal.
Whichever path you choose, you may take an underlying lesson: When you put forward an idea and, for whatever reason, do not lead it diligently to its end, you may lose it. Your final response has to be divorced from the emotions of the loss, even though emotions are appropriate from this incident. I too would feel rather betrayed either by my own inherent trust in the good-will of others and/or by the behind-the-back stab of a clever thief.
In summary, by taking the second approach, you will pursue a somewhat legal path when you can find evidence (and self-confidence) to support it. By taking the first approach, the words of a friend of mine might strike home ... "Oh well. It seems I lost that one. Next!"
Upvotes: 2 <issue_comment>username_2: You can report this problem to the editor of the journal, and send the relevant email with the corresponding information.
Upvotes: 0
|
2021/07/30
| 1,512
| 6,560
|
<issue_start>username_0: I've stumbled upon various professors (R1 universities in CS) having almost no first-author publications for multiple years while directing (or co-directing) their research groups/labs.
From the perspective of a PhD applicant, this might mean two things:
* They spend so much time and effort advising their PhD students that they do not have time for their own first-author research. *This is a good thing*. (Their students' research is also the professor's research in a way though.)
* They are either unproductive, or spending most of their time doing things outside academia (consulting, starting companies etc.).
I was wondering if this was normal, and whether there were other possible explanations for this type of a publishing record.
In other words, should this be a "red flag" when looking for potential PhD advisors?
Edit: This is not a theory-focused field. So the author names are ordered based on their contributions.<issue_comment>username_1: I suggest not putting too much emphasis on this one factor. The reason is that there are just about as many reasons for this as there are associate professors. You've named a couple of them. Let me add a few more.
Some recently tenured faculty need a break, especially if their path to tenure was especially arduous.
Some recently tenured faculty want to change their research focus now that they have the freedom to do so. This takes a while and may seem unproductive from the outside.
Some people contribute collaboratively to the work of others and are happy to let the others take the major part of the credit, especially since their own position is now secure.
In some collaborations it is difficult to say who was the "major" contributor. As you note, some professors are happy to let their students take the lead.
Some professors just aren't enamored of the "paper chase" and are happy to do research for its own rewards.
But if they are leading a lab, and the lab as a whole is productive, they might be ideal advisors. They aren't as likely to step in front of you at the last moment claiming first authorship. I'll guess a lot of "first authorship" by professors is demanded rather than earned. I'm relying for that on questions asked on this site, rather than personal experience, however.
I contributed quite a lot to a lot of the projects of others and was happy to do so. I was secure in my position and didn't need any citation count heat index confirmation of that. The work was good and made contributions. I doubt that in most of it anyone really thought about authorship order (one exception where it was very clear - to all of us who had the lead). In particular, I don't appear on the dissertations/publications of my doctoral students (other than an ack), nor did my advisor ever consider joining me in authorship on my work, though he did contribute to it.
In choosing an advisor, think more about who can be helpful in the short and long term. Your reputation is your own, it doesn't derive from that of your advisor. Ask around (other students) to see what sort of support you are likely to get.
---
I had a friend on the faculty (tenured) while I was a grad student. He predicted at the time that he would never make full professor. It turned out to be too pessimistic.
Upvotes: 4 <issue_comment>username_2: The authorship standard in my field (neuroscience/biomedical) is that the first author is the person who primarily did the work, often a graduate student or post doc; the *last* author is the person who supervised the work, often a professor. Middle authors are seen as having more modest contributions, but their actual impact varies quite a bit and it's impossible to judge just from the author list. In this scheme, you would be better off judging a professor's productivity by considering the number and significance of *last author* publications they have.
CS has varying authorship conventions (in theoretical areas closer to math they tend to follow the math convention of alphabetical authorship, for example), but my understanding is that in at least some applied areas of CS they follow this same convention. It would actually be pretty weird for a professor to have a large percentage of first author publications in my field, it would suggest they are not letting their students get proper credit for the work or are not advising students.
I would not, however, characterize this as "they do not have time for their own first-author research"; rather, their role in their own research has changed and now their own research includes working with students. You don't need to be the first author for it to be *your research*.
Upvotes: 6 [selected_answer]<issue_comment>username_3: Maybe they are sufficiently confident in themselves and their work not to need the recognition that comes from being 1st author. I will certainly give authorship priority to my students whenever it's reasonable: my contributions tend to be more at the conceptual than the calculational level, and a 1st-author paper will have more impact on their CV than mine.
Upvotes: 2 <issue_comment>username_4: **I think it is also a matter of attitude and personal choice.**
To give you an example, both my PhD and postdoc advisors are well established, with about the same number of publications per year, but they have very different publication profiles:
1. My postdoc advisor has no first-author papers from the last 5 years or so: they have lots of students and prefer to supervise lots of projects. Most of their papers are as senior author.
2. My PhD advisor has fewer students and prefers to also pursue projects that are interesting to them, resulting in first-author papers (about one per year) and some last-author papers.
From a student's perspective, it is a very different experience:
1. My postdoc advisor's group is large, hierarchically structured and research lines and projects are very well-defined. Lots of hands-on supervision and help from the supervisor. The PhD students don't have to stress about what to do, it is quite clear to them.
2. My PhD advisor's group was smaller with much more interaction and collaboration. Students were expected to be more independent and responsible and propose projects. Students could always ask for help or advice but one had to ask.
Which kind of supervisor is better, depends on the student. So, to answer your question, the lack of first-author papers is not an indication of a better/worse supervisor but it ***could be*** an indication of a different type of supervisor.
Upvotes: 2
|
2021/07/30
| 774
| 3,333
|
<issue_start>username_0: In the ML world, it's a common practice to upload a paper to arXiv while it is still under review. But is it okay to share it with other people?
Let's say someone writes a paper for a conference. And while the paper is under review, he decides to share it with other researchers in the area.
Is it ethically fair to do that?<issue_comment>username_1: Until you give up copyright you can share with anyone you like. Even after, informal sharing is unlikely to raise the hackles of the new copyright holder. They will often give you a license for such things, in fact.
But if it is publicly available on arXiv, just point them to it there.
Upvotes: 2 <issue_comment>username_2: It's common and ethically just fine to share work with *specific* others at any stage of research, even prior to the advent of preprint archives/in fields where preprints remain uncommon. It's your work. You can share it to get feedback, to inform others of work you've already done so that they don't waste their efforts, etc.
I would not recommend sharing work unsolicited broadly to people you don't know. That's less about ethics and more about politeness around unsolicited communication. There may be cases where you want a specific individual, preferably someone you have already met, to give feedback on your work; then it's fine to share with them (you may want to ask first something like: "I am working on XYZ, would you be willing to have a look at it and offer your thoughts?"). If you want to share work broadly, however, the way to do it is *with a preprint*. It's fine to point people to that preprint through other means, like on a social media account, but don't spam them; that's not unethical, but it is *rude*.
Exceptions: patentable work is legally complicated. Don't share such work without understanding the legal implications. Additionally, if you are under any additional contract terms for the work you're doing make sure you are aware of those terms, like if you are being funded by a private company.
Upvotes: 4 [selected_answer]<issue_comment>username_3: I won't comment on the ethics, but I want to point out that by submitting your paper to a conference, you agreed to some terms, and it would be a good idea to check those terms. Those terms may have constraints on what else you can do with the paper, either for copyright reasons, or more interestingly to ensure some anonymity in double-blind reviewing.
Having a copy of your paper on arXiv makes it quite easy for a reviewer to uncover your identity. Some conferences with double-blind reviewing thus forbid anything that can help identify you (including uploading to arXiv) until the end of the review period (at least). Some allow anything and trust reviewers not to go looking for the information. And others have some intermediate policy, for instance allowing a preprint on arXiv only if it was uploaded long enough in advance (so it doesn't show up on news feeds during the review period?), or forbidding to give a public seminar about the paper (this could also be to ensure that the conference gets some kind of exclusivity, is the first place where the paper is presented, but I have seen it argued as a way to preserve anonymity). If you personally send your paper to some people and one of them happens to be a reviewer...
Upvotes: 1
|
2021/07/30
| 1,458
| 6,853
|
<issue_start>username_0: The best way to prove that a file F existed before time t and has not been altered since then would be to post the cryptographic hash h=H(F) of the file F on the Bitcoin blockchain. When the cryptographic hash h=H(F) of the file F has been posted on a blockchain, we say that the file F has been *timestamped*. Since the cryptographic hash h cannot be changed without altering the file F, and since it will be exceedingly difficult to change the block containing h on the Bitcoin blockchain after time t, one should consider the hash h posted on the Bitcoin blockchain as a proof that the file F existed before time t and has not been altered after time t.
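To make the fingerprint h = H(F) concrete, here is a minimal Python sketch using SHA-256 (the hash family Bitcoin itself builds on); the file contents below are made up for illustration:

```python
import hashlib

def timestamp_fingerprint(file_bytes: bytes) -> str:
    """Return the SHA-256 hex digest h = H(F) of a file's bytes.

    Only this digest would be posted publicly: it reveals nothing about F,
    but any later change to F produces a completely different digest.
    """
    return hashlib.sha256(file_bytes).hexdigest()

original = b"Assignment 3: answers by student 12345"
tampered = b"Assignment 3: answers by student 12345 (edited later)"

h = timestamp_fingerprint(original)
assert len(h) == 64                          # 256 bits rendered as hex
assert h != timestamp_fingerprint(tampered)  # tampering is detectable
```

Anyone holding the original file can later recompute the digest and compare it against the one recorded on-chain; matching digests show the file is byte-for-byte unchanged since the timestamp.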
Should academic institutions require faculty to post timestamps of all their students' assignments on blockchains? My idea is for the academic institution to require faculty to post a timestamp of each assignment as soon as the assignment is turned in and again after the assignment has been graded. Timestamps are useful for promoting academic integrity. If posting timestamps for every assignment is too rigorous, then should academic institutions require faculty to post timestamps in a more limited context, such as for finals, theses, or exit exams only?
I personally think that posting timestamps is a good idea, but there may be some unforeseen consequences to this proposal, so I am looking for reasons why academic institution should not require timestamps of assignments to be posted on blockchains.
**Advantages of secure timestamps:**
1. Timestamps are private. The timestamp h by itself gives no information about the file F being timestamped.
2. Timestamps are inexpensive. The transaction fee for posting something on the Bitcoin blockchain is currently between 2 and 3 dollars. However, it is just as effective to post the timestamps on some other blockchain like the Litecoin blockchain with lower transaction fees since the hash of the Litecoin blockchain will be periodically posted on the Bitcoin blockchain. One can reduce the fees much further by bundling many timestamps together into one transaction using Merkle roots. By using Merkle roots this way, one can post an arbitrary amount of timestamps on the blockchain using only one transaction without compromising the privacy of the data being timestamped. The cost of posting timestamps on blockchains should therefore be negligible.
3. Timestamps are censorship resistant. Since people will not be able to know the information being timestamped in the first place, it is not possible for people to censor the information being timestamped in any way.
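The Merkle-root bundling mentioned in point 2 can be sketched in a few lines of Python. This is an illustrative pairwise construction (duplicating the last node on odd levels, as Bitcoin's transaction tree does), not the exact format of any particular timestamping service:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaf_hashes: list[bytes]) -> bytes:
    """Fold a list of per-assignment hashes into one 32-byte root.

    Posting only the root commits to every leaf at once; each student's
    hash stays private, yet membership of any leaf can later be proven
    with a short Merkle path.
    """
    assert leaf_hashes, "need at least one leaf"
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# One on-chain transaction can commit to hundreds of assignments at once.
leaves = [sha256(f"assignment-{i}".encode()) for i in range(300)]
root = merkle_root(leaves)
assert len(root) == 32
```

Because the root is a single 32-byte value regardless of how many leaves went into it, the per-assignment cost of timestamping this way is essentially zero.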
**Why secure timestamps are needed for assignments:**
Timestamping assignments will create an immutable paper trail that will make it easier for instructors to maintain academic integrity without giving in to the students' demands. Consider the following scenarios:
1. Suppose that students are given an examination, but at the end of class certain students beg for extra time and are quite adamant about needing it. If the assignments were not timestamped after being turned in, then the instructor would be more likely to give in and grant the few students extra time. If the assignments were timestamped on a blockchain, then the instructor would not be able to give certain students extra time without creating a verifiable paper trail. Similarly, if students ask for an extension for an assignment that is already past due, then such an extension could not be given without creating a timestamped paper trail.
2. Suppose that a student asks to have a grade for an assignment changed. If the assignment was graded incorrectly, then the grader can change the grade and submit a new timestamp that has a memo stating that the grade was changed as well as the reason why. The same protocol applies if the task was poorly worded or tested something that the students did not cover in class. On the other hand, if the assignment was graded correctly but the student simply wants points for an incorrect answer (or more partial credit), then it would be hard to write a timestamped memo that justifies the grade change.
3. Suppose that some students obtain a copy of the examination before the examination was given. Then students could themselves post a timestamp of this examination before it is given, and this timestamp will prove that students have obtained a copy of the examination early.
There are other scenarios in which secure timestamps would be helpful for ensuring honesty and fairness for everyone. It therefore seems as if timestamping assignments will be a simple solution (though not a perfect solution) for several of these issues.
**Possible pitfalls:**
Here are a few reasons why it may be a bad idea to post timestamps of assignments on blockchains, but these reasons are unconvincing to me.
1. Hacks could compromise students' privacy. This is a problem, but I am not convinced that this problem is bad enough for institutions to abandon the practice of posting timestamps on blockchains. A better solution would be to simply have better cybersecurity.
2. Posting the timestamps on blockchains will take a little bit of work. I think it is worth the small amount of work in order to help maintain academic integrity.
3. Institutions may think that they already have enough academic integrity and that this proposal is unnecessary. I am not convinced that this is the case.<issue_comment>username_1: There is existing software (e.g. moodle, gradescope, etc) that can handle timestamps without making people setup a bitcoin wallet, buy bitcoin, and make non-standard transactions ('posting').
Upvotes: 3 <issue_comment>username_2: I suspect that the main misunderstanding in this question is the assumption that what the student handed in has to change when the grade changes. Changing the material after it has been handed in would be clear fraud, and there is no reason for the lecturer to go along with that unless there is real bribery. This is pretty rare, mainly because students are way too poor to offer a bribe big enough to compensate for the risk the lecturer is running.
However, it is quite common for grades to change. There is always a bit of room for interpretation, especially when it comes to partial points. Moreover, the key can be adjusted. I do that when I find out that a substantial portion of students misunderstood the question, so the answers to that question no longer measure what I want to test.
So changing the content after it is handed in is not an issue. We don't need some machinery to prevent us from doing that, because it is too rare and thus a non-problem. It also won't prevent fraud, because there are plenty of other ways to change the grade. If I were planning fraud (and I am not), I would use those other methods anyhow because they are just easier.
Upvotes: 2
|
2021/07/30
| 936
| 3,785
|
<issue_start>username_0: In my home country, it is common for people to work at a research lab/group with titles such as "Research Assistant" after their Masters before their PhD.
Do such positions exist in the UK, and do they pose any visa troubles for international students?
My field is computer science and statistics.<issue_comment>username_1: There exist research positions in the UK for people with an MSc but no PhD, which are not necessarily meant for acquiring a PhD while employed. Such positions are rare, however, and definitely not a natural stepping stone on the way to a PhD.
I suspect that the pay of these positions will be insufficient by itself to make you count as "highly skilled worker". Moreover, most of these positions I have seen in CS are very short term (eg 6 months), and it would surprise me if universities would be overly keen to go through the visa process for this.
While PhD funding in the UK is not particularly accessible for foreigners either, I'd recommend applying for a PhD in the UK, or doing a PhD elsewhere and then coming to the UK as postdoc/lecturer if your goal is an academic position in the UK.
Upvotes: 1 <issue_comment>username_2: In my field - broadly molecular biology - such positions do exist, but they are nowhere near as common as postdoc (officially known as Postdoctoral Research Assistant) positions. Part of the reason for this is that grant holders here are very constrained in what they are allowed to spend grant money on. Most grants will have a postdoc listed, and so a pre-doctoral assistant or technician is seen as a bonus position. But our department will hire one or two of these each year, probably on 1- or 2-year contracts.
I would broaden your scope to take in research technician positions as well. At least in my field, technicians and pre-doctoral research assistants are treated more or less interchangeably - the only difference being that research assistants are planning to do this for a couple of years before doing a PhD, while technicians are regarded as more long-term (although contracts are generally the same in both cases - the length of the grant).
~~As has been pointed out, the salary for the such positions will not be high enough to qualify you for the "highly skilled worker" visa, so you'd have to find a different visa route. One thing to do is to check if your field is listed on the shortage occupation list.~~
EDIT: I was wrong, the rules have recently changed. The criteria for the skilled worker visa can be different if:
>
> You can be paid between 70% and 90% of the usual going rate for your job if your salary is at least £20,480 per year and you meet one of the following criteria:
>
>
> * your job is in a shortage occupation
> * you’re under 26, studying or a recent graduate, or in professional training
> * you have a science, technology, engineering or maths (STEM) PhD level qualification that’s relevant to your job (if you have a relevant PhD level qualification in any other subject your salary must be at least £23,040)
> * you have a postdoctoral position in science or higher education
>
>
>
If you just finished your master's degree in the UK, you are eligible to work here for 2 years anyway. Or you'd be eligible if you accompanied a partner or other family member who had a visa.
It's also no longer true that PhD funding is not accessible to international students in the UK - the rules changed this year so that each government-funded PhD program can take up to 20% of its students from overseas. However, the government program will only pay the home fee rate. So either the university must agree to waive the increased overseas tuition fee (many are doing this), or the student has to find a way to cover the difference.
Upvotes: 0
|
2021/07/30
| 930
| 3,970
|
<issue_start>username_0: Often I've read here about people applying to PhD "positions" in the UK.
In my search for PhD positions (CS & Statistics), I've seen:
* Generic PhD programs at research groups
* Generic PhD programs at departments, with a note recommending the applicant to first contact a supervisor
* Generic Centre for Doctoral Training (CDT) programs
* Studentships with a specified project, attached funding that then gets pooled to a number of supervisors
* Studentships with a specified project, attached funding and supervisor
The term "PhD position" seems to mean the last one, but those are a very small portion of the number of programs I've found, so I'm confused why those are often discussed as the main thing. Am I missing a significant number of opportunities in my search?<issue_comment>username_1: All of those are PhD positions.
The concept of a "PhD program" (or programme) does not really exist in the UK in the same way as it does in the USA. PhD degrees in the UK do not involve any coursework or exams -- you start doing research from day one, so there is no mandatory or planned *programme* of study which all students follow together. This means that PhD positions/studentships/degrees are never usually referred to as PhD programmes.
Every one of the permutations you describe could be called a PhD position -- that's just a generic term which means you will be working towards a doctoral degree in that role. It's not useful or necessary to draw the distinctions which you have -- the funding and duration of the degree will almost certainly be near-identical (around £15k per year tax-free stipend for 3.5 to 4 years, plus London weighting if you're in London, perhaps an extra £2-3k pa) no matter which of the various options you go with. When referring to the funding specifically, the term *studentship* can be used, e.g. "Your studentship will be paid on the first Monday of every month" or "You have been awarded a studentship for four years of study towards the degree of PhD".
The main distinction between these options is just where the funding for the position is coming from: the supervisor's own grant (perhaps if they have an ERC or similar), the allocated budget from the research council (i.e. the government), funding via a CDT, or the university's own PhD bursaries.
All would be correctly referred to as PhD positions and you would be called a PhD student or post-graduate research student (PGRS) while doing one. The latter term is often favoured by university admin as it's a catch-all for Master's and PhD students.
Upvotes: 1 <issue_comment>username_2: Admission to a "generic" PhD programme in the UK will generally NOT come with funding, but instead the obligation to pay tuition fees (which in particular for international candidates can be very hefty). As such, this only really makes sense if you can afford it, or if you have already obtained funding from elsewhere.
Centres for Doctoral Training (CDT) are very fashionable in the UK at the moment, and many of the grant funding bodies (eg EPSRC) no longer fund PhD students as part of individual research grants. This is rather UK-specific and somewhat recent development, hence advice written for eg Europe in general, or 1-2 decades old, may emphasize individual projects over CDTs.
Of the remaining individual-project-based PhD opportunities in CS, a sizable amount will involve industry co-sponsors. As setting such a project is a signficant effort, I would suspect that potential supervisors will often engage in it only if they already have an outstanding candidate at hand. Other opportunities originate in "left-over" research funding, which may require them to be filled very quickly. Both aspects can lead to individual projects being represented even less amongst advertisements than amongst actually funded PhD students.
Nothing in your question makes me assume that you are overlooking significant funding opportunities.
Upvotes: 0
|
2021/07/30
| 312
| 1,247
|
<issue_start>username_0: I was thinking of thanking the editor for considering the paper, saying that I look forward to hearing the reviewers' comments and making changes if need be. Does this seem unprofessional or strange?<issue_comment>username_1: I'd suggest against it. This is their job. The note would be just noise. Save your thanks for the final disposition of your paper, if at all, or, more appropriately, for some action beyond the minimal requirements of their job.
It isn't *bad* to do it, just a distraction.
Upvotes: 4 <issue_comment>username_2: That is fine, just be brief:
>
> Dear [editor], I just received the notification that our manuscript has been sent out for review. Please let me know if there is anything more we should do. Thank you, Sincerely, [corresponding author]
>
>
>
Thanking the editor will not change anything or influence the process at all.
Upvotes: -1 <issue_comment>username_3: No need to do this. I wouldn't necessarily say it rises to the level of unprofessionalism, but it is "noise".
It's fine to include a *brief* thank you just out of politeness if you have some other reason for correspondence, but I definitely wouldn't send an email simply to thank, and there is no correspondence needed at this time.
Upvotes: 5
|
2021/07/31
| 1,765
| 7,501
|
<issue_start>username_0: **Summary:** are PhD advisers obligated to help with technical questions / tasks that a PhD student is trying to solve?
---
This question arose due to my confusion regarding the role of a Ph.D. supervisor.
I am aware that, nowadays, plenty of platforms are available for a researcher to **ask** their questions. But receiving a proper answer *mostly* depends on others' will. Suppose I opt for a stack exchange site to ask a question then the possibility of receiving a solution depends on the response by members of the community. If I ask on GitHub or email the author then also it is optional for them to answer my query.
Some tasks during Ph.D. are expected to be completed in a stipulated amount of time. There may be negative consequences if it takes so much time for a Ph.D. researcher to get her query solved.
It is optional for colleagues and teams (if any) to answer a Ph.D. researcher's query. Although the Ph.D. researcher tries hard to get an answer by reading books and thinking much about the problem, my question is regarding the role of the supervisor in this context.
Is a supervisor bound to clarify the technical questions of her Ph.D. researcher?<issue_comment>username_1: No.
The role of a supervisor is understood in a number of ways by a number of people, but supervisors are not bound to clarify technical questions. They should provide reasonable *guidance* (for instance by suggesting possible solution paths, references to similar material, appropriate training on equipment, or course selection).
Now *reasonable guidance* means different things to different people, mentoring takes many forms, and supervisors have different styles, but good supervisors can and will gauge the amount of guidance needed on an individual basis. Other supervisors may choose to let the students sort themselves out.
Upvotes: 5 <issue_comment>username_2: It is not in the best interests of students at any level to have all of their questions answered directly. Part of being a good teacher is the ability to help students learn to answer their own questions. This is especially true of research students as when they gain their doctorate they should expect to be working at a level where there is nobody to answer their technical doubts as they are at the cutting edge of their field. IMHO, you don't have to answer if you don't think it is in their best interests.
It's also quite possible at PhD level that the supervisor doesn't have the answer (and nor might anybody else), which is what makes research fun.
I view a PhD as being an apprenticeship to be a researcher in some particular field of research, so the aim is to learn the skills required by working with (hopefully) a master. Learning how to address doubts or solve problems for yourself is part of the progression to a journeyman (journeyperson?) that can work on tasks independently. Your job is to get your apprentice to that stage by the time they finish. Sometimes that is by directly teaching them things (e.g. how to use a chisel), sometimes it is having them work with you on something, sometimes it is letting them make their own mistakes and helping them learn from them. I am learning to make musical instruments as a hobby, and I learn more by thinking about things for myself and trying to work out what I am doing wrong than I would if my (truly excellent) teacher just demonstrated how to do everything.
Upvotes: 5 <issue_comment>username_3: I think there are some more general underlying questions in this, implied and necessarily precedent to the question asked. Answering them, answers the question.
1. **Is a supervisor bound to *entertain* every question a researcher might ask?**
I'd argue **no**.
I feel it's ethically fine for a supervisor to set *well-defined* boundaries. An extreme might be "no more than 10 questions a day, by email, before 10am. In each, show what reasonable effort you made to find the answer yourself" - I've laid down similar boundaries myself, albeit *far* less stringent, when mentoring large numbers of people.
It'd also be ethically fine for a supervisor to fail to respond to questions which were clearly asked to large numbers of people at once; were abusive or personal; were asked through a weird medium; were not on the topic of the research; or were otherwise in violation of commonly-implied social boundaries for expecting a question response.
So the supervisor is *not* ethically required to even *respond* to every question from a researcher.
2. **If an appropriate question is appropriately asked, is the supervisor bound to *respond*?**
I'd argue **yes**.
If the researcher has shown reasonable effort in their search for answers, and found none, then they've likely encountered a "blocker".
It's arguably the whole point of mentoring, to *hear and respond to* problems encountered by those we're helping: without such interactive guidance, we're just a YouTube video on legs.
Are there those who won't fulfill this ethical responsibility? Absolutely. There are crap supervisors in academia, just as there are crap managers in business. And unless there's some explicit rule in place in the institution you're working at to force them to take a greater interest in helping, there may not be a lot you can do about that.
3. **What *form* of response is the supervisor bound to give?**
Any of the following would be fine:
* "I don't know how to help you with that."
* A partial answer, acknowledging that it's only partial.
* A full and complete answer of the question, maybe even raising additional issues that might be relevant.
* Proposing a reframe of the question, if it seems based on a deeper knowledge failure. Arguably the best help a mentor can give is to see beyond simple questions to deeper issues.
* A pointer to likely sources of the answer.
* A suggestion of other avenues of inquiry or research approaches that might lead to an answer.
* ... and so would many other approaches.
And as others have said, *answering* every potential question is neither the only, nor the best, response to every question anyway, since we're trying to teach people to fish, not give them a coupon for unlimited free fishsticks.
I suspect that's why the OP didn't say "answer", but rather "clarify".
**TL;DR:** For any reasonable question, asked reasonably, and within any explicit boundaries set by the supervisor, ***YES***, but only a *response*, not necessarily an *answer*. It's a mentor's role to give support and advice.
Upvotes: 3 <issue_comment>username_4: Your supervisor is **usually** someone fluent in the research field you are doing your PhD in.
But it does not have to be the case: my supervisor was maybe 40% competent in my research area (a novel one at that time), and I was not expecting research help from him.
He was a wonderful supervisor, though: he helped me to navigate the muddy waters of Academia, was keen to listen and to challenge some of the points, and was very helpful in the logistics of my PhD.
We had an extraordinary relationship but
>
> Is a supervisor **bound** to clarify the technical questions of her Ph.D. researcher?
>
>
>
Certainly not. Besides the slightly demanding "bound" (or at least this is how I read it, as a non-English speaker), they may simply not know the answer (it is research, after all). I would say that if at some point you are not discovering the answers to your questions on your own, your PhD is not that great (in terms of learning how to do research).
Upvotes: 2
|
2021/07/31
| 863
| 3,685
|
<issue_start>username_0: I'm an undergraduate student and I'm about to start my applications to grad schools. Some friends of mine recommended that I not apply to two programs offered by the same department at the same university, such as a physics Ph.D. and a physics master's degree, because this might make the evaluation committee feel like I'm not determined. I know the personal statement and proposal in the two applications might be very different, but does it hurt to apply to both? (for programs in the U.S. and U.K.)<issue_comment>username_1: I can't speak for every department, of course. But, at mine, it wouldn't hurt your chances at all. For one, we are humans who understand that you are motivated in various ways, in particular, to build a sense of safety. It would be illogical to penalize you for this act.
But, even more importantly, our admissions committees are different! Different people read your MS application from those reading your PhD application. We probably wouldn't even notice that you had applied to both.
Upvotes: 3 <issue_comment>username_2: In the US, if you want a career in academia, typically an undergraduate would apply (only) to doctoral programs. It is completely natural here. You might even get a masters along the way, but the "program" you are in is for the doctorate.
If you want a career in industry (and other than industrial research) you would probably want a master's instead. But in the US, the courses would probably be the same initially.
Occasionally, but I don't know how frequently, a person applying for a doctoral degree might be steered to a masters instead.
The problem, however, with your suggestion is that the Statement of Purpose that is probably required would need to be consistent between the two applications. If it isn't, then you'll have problems. But it seems hard to do it consistently for both a masters and doctorate.
Think about your career goals first.
Upvotes: 2 <issue_comment>username_3: Asking in general about all graduate programs,
**in the US**
* **it varies enough that one would do well to ask**
* **and discussing your interests with a department is often expected and will improve your application**
Different areas of study might have specific, standard answers about the meaning of different degrees such as Master's versus Ph.D. versus *Profession\_Name*.D. or D.*Profession\_Name* ,
but in the US,
you can set that aside in large part because it is not only normal but even expected that you discuss your interests with the department before applying.
Academic departments in most universities in the US will have a specific person for this initial contact, often their "Director of Graduate Studies"/"DGS".
It is normal to talk to this person about what programs are most aligned with your goals and your qualifications.
This is really the first step in the application process -
an informal interview where you answer these important questions of fit - including what to apply to -
and they get a first impression of you as an applicant.
In addition, if both sides see a good fit,
they should help you contact one or more of the most relevant faculty for your interests.
You would set up meetings with those faculty and see if you are specifically a good fit with them; one of them should then be your future advisor if accepted.
So this flow of meetings is very important,
especially if you don't already have any connection to the department.
If one were to simply apply to a department without having these meetings and making these connections,
it is quite likely you would be ignored in favor of someone who did.
Upvotes: 2
|
2021/07/31
| 843
| 3,577
|
<issue_start>username_0: I finished grad school and now I will probably be an adjunct but will apply for postdocs again. I want to talk about my research ideas with other people but I am worried that postdocs will work on them, and senior faculty with share the ideas with their own grad students who are looking for projects to work on. How can I share my ideas without this happening?<issue_comment>username_1: Basically you can't prevent it. Ideas are free to use. I assume that in "sharing" your ideas you are also hoping to get (and utilize) the "ideas" of others. This is fine.
But you don't need to be an "idea faucet". You can seek collaborations with people and share the key ideas with them. Collaborations are valuable at any stage of a career, but especially so early on since you don't have the support of an advisor anymore.
However, "stealing" ideas is fairly rare. People have their own ideas to work on. The world is there to be explored.
---
For completeness, it is necessary to acknowledge some kinds of "ideas" to avoid plagiarism charges.
Upvotes: 4 <issue_comment>username_2: If an idea is developed enough to run with it and write a paper, do so. Only share the key insights with your collaborators, if any.
Less well-formed ideas are unlikely to be worth stealing. In doing research, much of the hard work consists of taking projects from the idea stage to the ready-to-write-a-paper stage. For established researchers, ideas are cheap because they have more good ideas than they have time to pursue. For early-career researchers, good ideas for projects are not cheap, but everyone tries to get suggestions from as expert/senior a source as possible. So I agree with username_1 that the risk of anyone stealing your ideas, even good ones, is small.
Upvotes: 2 <issue_comment>username_3: There is the often stated claim that "ideas are cheap, execution is expensive" - that is not entirely true; it definitely depends on the situation. I therefore do not fully agree with my co-respondents that the risk of stealing ideas is small. It does definitely happen: rarer than one may fear, but more frequently than one may hope.
For instance, there are very simple ideas, relatively low-hanging fruits, which are comparatively easy to execute but which have a large impact. A classic example is high-temperature superconductivity, where the core idea is so simple that the authors sent it to a lower-tier journal with a trusted editor and to the NYT rather than to the top journals, where even the short review turn-around time would have been sufficient for others to replicate and scoop the core insight. This is not a completely atypical example. So, it is definitely a good idea to be careful with the ideas you consider to be critical for your future.
**What to do?** Share ideas with people you trust and share different ideas with different groups of people. That way you do not put all your eggs in one basket.
That being said, do not be paranoid about ideas, that is not going to do you any good; you simply do not want to be taken advantage of. You can be generous with ideas you either do not have time to pursue or you do not consider essential for your career/development.
And sometimes you just have to take the plunge and accept the risk that you might occasionally misjudge a collaborator. To that extent I agree with the other respondents: if you are a perceptive person, the times it works out will outnumber the times it doesn't.
Scientific friendships are built on trust. Many of them can last a lifetime.
Good luck!
Upvotes: 3
|
2021/07/31
| 920
| 4,000
|
<issue_start>username_0: Assuming the process isn't double-blind, do you often look at other work by an author out of curiosity, and do you think this should influence your decision on a paper?<issue_comment>username_1: Indirectly yes as I like to read or browse through some papers in the bibliography, which often contains papers by the submitting authors. This is pure intellectual curiosity, is independent of the review style (single or double blind): I often discover (or rediscover) unknown, unfamiliar or forgotten interesting papers.
I will also sometimes quickly read recent papers of the author (or group) to understand how the current submission is different from previous work. I did this more regularly in the past because, when I was associate editor, I was regularly floored by the amount of self-plagiarism and duplication in some of the submissions I had to handle (the journal had access to a specialized plagiarism detection software). Now I am more selective with my reviews and the journals I tend to review for usually check ahead of me for plagiarism, but I still do this on occasions.
Of course if I suspect (or detect) plagiarism it will influence my report.
Upvotes: 2 <issue_comment>username_2: One of your jobs as reviewer is to determine the novelty of the work. For that you should acquaint yourself with what was known prior to it (if you are not already very familiar with the field), both due to the work of the same author and of others. To that end it can be useful to browse the author's earlier work.
It would *not* be ethical to let your judgement of the author's earlier work influence your opinion on the quality of the work under review just following the "logic" that the previous work was "meh", so this can't be any good either, or that this is such a famous author that the work under review must surely be great.
Upvotes: 2 <issue_comment>username_3: >
> As a reviewer, is it appropriate to look up other work by an author and factor that into your decision?
>
>
>
**No, this is not appropriate.**
I would argue that this is one of the exact things double-blind review is supposed to prevent. In the context of double-blind reviewing I have heard the following scenarios / arguments, all of which I feel are invalid:
* *"If the authors have a history of writing bad papers, I should be more critical."* No, you shouldn't. You should judge the paper in front of you, not previous papers the authors have written.
* *"If the authors are more famous than me, I am nervous that my criticism may be wrong."* Uh oh. If you are unsure, don't criticize. If you *are* sure, criticize even if the authors are the most famous in your field. And don't worry, even the best researchers (and their students) get things wrong.
* *"If the authors have a history of academic misconduct, I should be more critical (e.g., not give them the benefit of doubt if data is not entirely provided)."* I understand the sentiment here, but I would argue that you are overstepping your role as reviewer once you start judging the trustworthiness of researchers as people, and then apply different standards to the manuscript based on your assessment. Given that much reviewing in the sciences in practice operates on a certain basis of trust, and given that we also know that a minority is abusing this trust, there is likely a level where we need to take prior behavior into account, but individual paper reviewers are almost certainly not the people who should be making these decisions.
* *"If the authors are presenting many similar papers (or many papers building on top of each other), we should rather reject to prevent salami slicing"*: Ultimately, if a paper would have sufficient contribution if written by somebody else, it should also have sufficient contribution if written by the same authors. Conversely, if the contribution is very minor in comparison to existing work it should be rejected independently of who exactly wrote said earlier work.
Upvotes: 3
|
2021/07/31
| 960
| 4,176
|
<issue_start>username_0: I'm going to interview for a teaching-focused position in a couple of weeks, and I was wondering what kind of questions to expect.
I know there are plenty of resources on what to expect in a standard tenure-track interview. However, all the ones I found are geared towards research-focused positions, where teaching is an afterthought at best.
|
2021/08/01
| 1,063
| 4,410
|
<issue_start>username_0: I have submitted a paper to a journal that says the time to first decision is generally X months, and that there are strict deadlines for reviewers to complete their review. After X months were up, I used the journal's internal submission system (Editorial Manager) to send a direct message to the editorial office, with no response. Two weeks passed.
This journal is by no means a top field journal, but it's a solid "good" journal. It's not terrible or predatory or anything like that. It's from the big 4 publishers.
Next, I searched the journal website and found an email address of a staff member to contact about the submissions and peer review process. This staff member appears to be a full-time employee of the publisher, rather than an academic. I emailed the staff member, and also got no response. This person's email address <EMAIL> is also the contact email under the "Contact Us" tab in Editorial Manager. One additional month passed.
I asked this question [Is it acceptable to directly email the Editor-in-Chief if the editorial staff are not responding?](https://academia.stackexchange.com/questions/171310/is-it-acceptable-to-directly-email-the-editor-in-chief-if-the-editorial-staff-ar) on this forum. I then emailed the Editor in Chief at his institutional email, which was not listed on the website. That was about 10 days ago and he has not responded. All my emails have been polite, asking for an update on the status of the manuscript, etc.
I have never before experienced journal/editorial staff simply not responding to communication, let alone twice. What should my next step be? Aside from the initial confirmation email (after submitting the manuscript), I have never heard from the journal at all. (Aside: when I press reply to that initial confirmation email, it goes to the same <EMAIL> staff member mentioned previously.) For example, I have received no email that a particular editor or associate editor was handling my manuscript. Thus, I can see the list of editors on the website, but I don't know any of them personally and don't know which of them is handling my manuscript.
I can contact other staff members listed on the journal website (who deal with different matters such as author proofs after the paper is accepted) or I can try to contact the publisher directly. However, the publisher's website does state that for any request pertaining to a specific journal, consult the journal's website for whom to contact. So, what's my next step?<issue_comment>username_1: Given the track record of the journal so far, I'd say you're better off sending the original journal a message saying that you're withdrawing your submission, and starting over at another journal. You don't need to wait for an acknowledgement of your withdrawal (which might never come) from the first journal (answers to [this question](https://academia.stackexchange.com/questions/28583/do-i-need-to-wait-for-a-journal-to-consent-to-paper-withdrawal-before-submitti) say you should wait for confirmation, albeit in a very different scenario; I'd say waiting a week or so to be extra-courteous would be reasonable, but longer is unnecessary).
It sucks but that will probably serve you better than trying to fix the broken system at the current journal.
[This answer](https://academia.stackexchange.com/a/83555/73551) says
>
> Withdrawal because you think the review process is too slow might be OK after very many months without any communication or signs of progress ...
>
>
>
but if you are two months past the journal's average time to first decision, without the courtesy of a response (even "sorry, we're being slow") from anyone associated with the journal, it seems justified.
(Checking your junk/spam folder as @username_2 suggests is probably a good first step though.)
Upvotes: 2 <issue_comment>username_2: If you *really* want to publish in this journal, try giving them a phone call. You could call either the editor-in-chief or the publisher. If you go with the editor-in-chief and (s)he doesn't answer the phone, you could call the department receptionist.
Otherwise you could give up and withdraw the manuscript. In this case I would just write that if you don't receive a response within [time] then you are withdrawing, and follow through if you still don't receive a response.
Upvotes: 2
|
2021/08/01
| 1,945
| 7,896
|
<issue_start>username_0: Recently, an article by one of my colleagues had been accepted at an IEEE conference, and about a week ago, he presented it on the main track. He published a preprint of this paper (without final changes) in arXiv with the following copyright note in the PDF footer, which according to [IEEE rules and regulations](https://conferences.ieeeauthorcenter.ieee.org/get-published/post-your-paper/) it looks like permitted to have it on arXiv:
>
> Copyright © 20xx IEEE. Personal use of this material is permitted.
> Permission from IEEE must be obtained for all other uses, in any
> current or future media, including reprinting/republishing this
> material for advertising or promotional purposes, creating new
> collective works, for resale or redistribution to servers or lists, or
> reuse of any copyrighted component of this work in other works by
> sending a request to <EMAIL>.
>
>
>
Also, [according to this answer](https://academia.stackexchange.com/questions/16831/is-it-legal-to-upload-a-paper-to-arxiv-when-it-is-under-double-blind-review-for) and also [this one](https://academia.stackexchange.com/questions/47841/conditions-for-uploading-ieee-publications-to-arxiv?rq=1), "IEEE policy permits authors to post their articles to the preprint repository arXiv."
Now about an hour ago, he received an email from the conference chair as follows:
>
> IEEE crosscheck shows that the paper has been published at arxiv. This violates IEEE rules. Asked
> for confirmation within 24 hours to take down the paper from the web.
> If no confirmation is received, it is considered that the paper will
> be excluded from publication and submitted to IEEE Xplore
>
>
>
This sounds very strange to me. Did my colleague violate the IEEE rules and regulations in the first place by putting the paper in arXiv? Or did conference chairs perhaps miss the copyright notice (or the fact that the preprint is actually the original version of the paper) in the preprint version?
In your opinion, what would be the correct next move for my colleague in this case?
**Update 1**: The paper had not been reviewed under a double-blind review procedure.
**Update 2**: My colleague replied to the general chair with a complete explanation of how publishing a preprint on arXiv (with the necessary copyright notes) is absolutely permitted based on IEEE rules and regulations. Unfortunately, the chair still believes it’s against the IEEE rules and regulations. He responded:
>
> Thanks for the clarification. Make sure your paper have withdrawn by arXiv.
>
>
>
**Update 3:** ArXiv rejected my colleague's request for withdrawing the paper! ArXiv replied:
>
> Please note that having a paper under review or newly published is not
> a sufficient reason for withdrawal, as previous version(s) will still
> be available to users.
>
>
> arXiv is an electronic repository for research papers, and announced
> papers are meant to be available in perpetuity. The license applied by
> the submitter to the work cannot be revoked.
>
>
> As a result, you request has been denied.
>
>
>
My colleague is now waiting for IEEE's response (contacted authors [at] ieee.org) to see what they can do to fix this issue.
**Final update and status:**
My colleague notified IEEE and they directly contacted the conference chair and asked them to publish the article in IEEE Xplore. The conference chair then sent another email to my colleague after this, something like "After receiving further clarification from IEEE, we will submit your paper to IEEE Xplore." In short: all done! Thank you @jakebeal and others for all your updates, concern, and time<issue_comment>username_1: That specific conference might have a stricter policy than that of IEEE (I don't know to what extent IEEE sponsorship implies adherence to certain policies). I'd thus do the following, in order:
1. Check what the specific conference website has to say about this, and whether it contrasts with the general IEEE policy.
2. Take down the arXiv submission, as requested, so that the article can be reviewed or accepted by the conference without further ado (this avoids damaging the submission while possibly fighting policies). Soon afterward, contact the conference chair communicating the action and asking for an explanation in view of the IEEE policies (without being confrontational, though).
3. If the conference chair's explanation does not arrive or is not satisfactory, contact IEEE to see if there is anything irregular according to their policies and agreements with the conference organizers.
Upvotes: 4 <issue_comment>username_2: I think that what you're seeing here is a conference chair who doesn't understand what arXiv is and is blindly applying the self-plagiarism policy.
The [IEEE's current official FAQ on author rights](https://www.ieee.org/content/dam/ieee-org/ieee/web/org/pubs/author_version_faq.pdf) has a question dedicated to arXiv that explicitly states that arXiv publication is permitted. Thus, there is no concern on that account.
The IEEE CrossCheck system, however, does not filter arXiv out of its web-crawling. Thus, if you've posted something to arXiv, CrossCheck will dutifully report that it has found a high similarity match to text found on arXiv. And this is where I think that things have gone wrong.
* What CrossCheck actually does is report *materials that should be examined* to see whether they are (self-)plagiarism or not.
* People who use CrossCheck sloppily, however, often don't bother with examining, and just assume that match = plagiarism.
Thus, I think your colleague is dealing with somebody who hasn't bothered to understand what CrossCheck found and is just reaching for a knee-jerk reject. I notice as well that the email you quote doesn't say how, exactly, the rules are being violated.
From this comes my recommendation for how to proceed. I would recommend writing back to the conference chair to ask for clarification, while pointing at the IEEE FAQ, something like:
>
> My apologies, but I am confused. The IEEE author rights FAQ explicitly allows for posting of preprints on arXiv, so I don't think that can be the violation of IEEE rules that you are referring to. I don't think this can be a self-plagiarism issue since arXiv isn't a peer-reviewed publication. Does this conference have a different policy, and if so, can you please point me to which aspect my preprint is violating?
>
>
>
Hopefully, this will either lead the conference chair to understand that they have made a mistake or else point you to the actual issue that they have. If they're a martinet or a fool, however, they may still just demand the thing gets taken down, in which case you can attempt to do the arXiv withdrawal process, making sure to point the blame at the conference in the comments field.
**Update based on responses from conference chair and arXiv:**
The uninformative and inflexible response from the conference chair does not affect my advice: I believe it simply means you are going down the "martinet or fool" path.
Since arXiv has (appropriately) refused to take the publication down, there is now no path left but confrontation with the program chair. IEEE *might* be able to help in time, but you're dealing with a large and often slow-moving organization there, so if there isn't a response from IEEE within a day or so, it's going to have to be dealing with the conference instead.
Here, it may be useful to involve other people besides just the program chair. Most well-established conferences have some sort of steering committee or similar that is specifically designed to help ensure consistent behavior despite the year-to-year change in organizers. Writing to them to ask for help with dealing with the failure of the specific chair that your colleague is dealing with may be useful.
Upvotes: 7 [selected_answer]
|
2021/08/01
| 2,171
| 9,259
|
<issue_start>username_0: *I am currently not in academia, but I am an undergraduate alumnus with no graduate experience.*
What are the normal processes for academics to challenge each other’s theories or findings, or to support them?<issue_comment>username_1: The primary way to refute a theory is to do the research necessary and publish a paper (or papers) with a (more) correct theory. The primary way to support a theory is to (positively) cite it in future work that embraces it, especially to extend it.
The theory of the aether was refuted when Einstein published the theory of relativity. But it took years for confirmation of Einstein's ideas to become firm - it is still happening, actually.
Every paper that cites another is supporting the earlier work.
It isn't my field, but I've heard that the ideas of Freud are slowly being replaced in psychiatry.
This is the normal process of science, actually. Old ideas are replaced by newer and more supportable ideas. Think [Galileo](https://en.wikipedia.org/wiki/Galileo_Galilei).
---
Minor channels are discussion groups and arguments at conferences. But the "refutations" need to be backed up, not just claimed.
Upvotes: 4 [selected_answer]<issue_comment>username_2: As stated in the answer by username_1, the major road to criticising published results is to publish something yourself. There is a problem with this, which is that the academic publication process is slow, and may be held up even further if supporters of the criticised work recommend rejection when they review critical papers. In particular, reviewers and editors often demand a better original theory or better results than the ones criticised. This can be counterproductive, as it requires new research which may be more difficult and harder to do than correctly finding problems in somebody's work. There is a handful of journals publishing plain but well-justified criticism; the majority, in my experience, will demand additional original work, which is obviously a problem if the criticised work is wrong and will be used and cited as if true in the meantime! (I have occasionally criticised related work as a side remark in a published paper whose main aim wasn't to criticise that work.)
Another major avenue for criticism is open to only a few: being a peer reviewer. You need to be invited, so this is not a way to criticise whatever you find wrong, but rather something specific you were asked to assess. However, if you establish yourself in science, you will be asked to do more peer reviews than you can handle, and even young and relatively unknown scientists are asked (because much peer review is needed), so being a peer reviewer isn't very exclusive. It is also often effective, as you can ask the authors to repair certain issues or recommend rejecting the paper, but of course it's not a path to criticise work you were not invited to review. (Occasionally journals also publish discussion papers where either a chosen few or everyone interested is invited to write discussions.)
It is important to keep in mind that human psychology is important also in science, and hardly anybody likes to be criticised in public. One would think that scientifically minded people are happy to learn from having errors pointed out by others, but careers and funding are at stake, so it is often not that easy.
My first channel for criticism will always be to contact the criticised authors directly. Most journals allow the authors to publish correction notes if they themselves realise that something is wrong in their work. Also, rather than straight away accepting that they were wrong, they may be happy to collaborate and come out with something that corrects the original issue but can be sold as an extension or at least as inspired by their earlier work rather than just admitting openly that "we were wrong". Chances to have an effect are probably better before work is finally published in a journal, as long as it is still only on preprint servers such as arxiv, privately circulated, or presented in preliminary stages at conferences.
I have pointed out issues at conferences but mostly this didn't have much effect. Authors may defend their point, or even agree with me, but may not change it anyway. Maybe the odd author learnt something and changed their approach at least in their next work, but this is rather the exception than the rule. (Obviously we also need to take into account that when criticising others we may be wrong ourselves, or the issue may have potential for genuine controversy, so we shouldn't expect anyway that everyone criticised by us says "you are right and I was wrong".) Obviously one can always criticise work by talking to others, and quite a bit of this is going on, but it is hard to know whether and how much of an effect this has.
On top of that, there are more or less well-visited blogs and open forums where work is discussed, and probably also things like Facebook groups (which I don't use). For criticism in such places to have any effect, the blog needs to be popular and the writer a "big name". However, there are a number of such places where it is at least possible to write criticism without going through the peer review process, and some of it is read and even responded to by the authors.
In my field, statistics, <NAME> has a very popular blog, which he uses to criticise work, and also to reflect about how difficult it is to criticise work effectively in science. Unfortunately it is very normal that criticised work continues to be cited and used and taken as "true". Searching for "criticism" and related terms on his blog brings up lots of stuff.
[Gelman's blog on Statistical Modeling, Causal Inference, and Social Science, example entry on "A ladder of responses to criticism, from the most responsible to the most destructive"](https://statmodeling.stat.columbia.edu/2019/01/18/ladder-responses-criticism-responsible-destructive/)
Upvotes: 2 <issue_comment>username_3: If the theory has not been published, usually one does not bother challenging it.
If the theory was published, most journals allow other authors to submit a "comment" or "matters arising" which can criticise it. This rarely occurs. Usually the comment is peer reviewed.
Upvotes: 2 <issue_comment>username_4: **Intro**
One way to look at a theory is to treat it like a model. Models are simplified versions of reality that allow analysis and prediction. However, no model or theory is perfect; thus the limitations of a model are as important as the model itself.
Some time ago, the theory of a flat earth was good enough. People did not travel very far, there was no need for precise time measurements, and the movement of celestial objects was explained by divine intervention. Indeed, if all you want is to measure your own small crop field, it does not matter that, on a grand scale, the surface of the earth is not flat.
A globe is a model of the earth and shows that the shortest path from Europe to the US is actually over the north pole, something that is not evident from a flat map. However, a globe is a really poor model for saying anything about the earth's composition, core, and atmosphere.
**Back to critiquing a theory.**
* You can present a better theory that in some way outperforms the earlier one. However, this does not mean the earlier theory was bad; your theory is merely slightly more complete. A round earth orbiting the sun explains the day/night cycle, the seasons, and lunar and solar eclipses, and enables much more precise distance measurements over long distances. Thus it outperformed the flat-earth theory.
* You can demonstrate the constraints of an existing theory, or identify a contradiction suggested by the theory. Here is a good example: <https://www.youtube.com/watch?v=d_XuFkVdAYU> Doing so invites more research into the area.
A good start to evaluate any theory, especially yours if you aim to propose one, is to use the 7 criteria:
* scope - what can the theory predict, and what can it not?
* logical consistency - does it make sense, without contradicting itself?
* parsimony - is it simple and elegant? This is connected to Occam's Razor, often paraphrased as "the simplest explanation is usually the correct one".
* utility - is it useful?
* testability - can we demonstrate it, and more importantly, can we disprove it? This is an important one in the light of fake news. There is no way to prove (or disprove) that the world is governed by a large pipe-smoking rabbit or lizard people; luckily, such a theory fails the testability criterion.
* heurism - does it provide a good basis for extension and further research?
* test of time - do the above criteria continue to hold over time? Einstein's theory of relativity was criticized at the time; however, it turned out to be one of the most elegant and useful theories ever.
In academia, the accepted way to disseminate results is to publish them in journals and conference proceedings.
Outside academia, you can make use of a new theory by creating a new product and bringing it to the market. It is somewhat common that companies do their own in-house research to advance their products and business.
Upvotes: 0
|
2021/08/01
| 2,048
| 8,427
|
<issue_start>username_0: For example, if I do a postdoc and then leave for a year and then decide I want to go back to academia, is it possible, or will researchers look at my file, say, "Oh, a year not in academia," and throw my application away? My experience in grad school is that academics are **very** social-status oriented and consider any non-academic job as "low-status." My own adviser once said, "I don't know why anyone would go into industry." I know grad students who wouldn't tell their advisor that they were considering non-academic jobs, because they were worried their advisor wouldn't take them seriously. Should I do whatever I can to stick to a very linear career path?
Edit: I'm in pure math but this question applies to all fields.<issue_comment>username_1: The difficulty is that academic positions are determined by your recent work. If you have left pure math academia then unless you have been publishing pure math papers on the side then you don't have any recent work to merit a new appointment.
Upvotes: 5 <issue_comment>username_2: No, it's not looked down on. It rarely happens, but that is because switching from a nonacademic job to an academic job is usually a poor economic choice.
Upvotes: 5 <issue_comment>username_3: Since the question applies across all fields...
There are many areas of engineering where the "state of the art" in academia is a ***long*** way behind industry. Industry has the context to make practical use of the technology, and where required, the resources to carry out the experimental work needed to push the boundaries.
Aviation is the obvious example here. If you're interested in any aspect of how to design a plane then academia is the place you learn how to do the basics, perhaps at most with a PhD, before you join Boeing, Airbus or wherever and start working with up-to-date technology. Staying in academia is a recipe for stagnation.
Speaking personally too, I currently work in nanopositioning. (FYI, a major application is focusing mechanisms, some for microscopes and some for particle accelerators. Also near-atomic resolution surface scanning.)

We see a lot of academic papers stating as "fact" that nanopositioning systems can only be used for closed-loop positioning (reading back a position measurement and driving the actuator until it gets to where it wants to be) at speeds of up to 1-2% of the system bandwidth, and they get all excited about getting their speed up to 5% or maybe even 10% (usually with horrible effects on accuracy as a trade-off). In fact the leaders in the field in industry have routinely been achieving 10% since the 1990s, and we're pushing 40% today. One of our current areas of interest is a feature of piezoelectric actuators which, as far as we can tell, hasn't even been spotted by academia yet, because academia focuses on slow movement or static positions, and this behaviour only happens when you're hammering it at high speed.
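The closed-loop idea described above ("reading back a position measurement and driving the actuator until it gets to where it wants to be") can be sketched in a few lines. This is purely illustrative: the proportional gain and step count are invented, and a real nanopositioning controller is far more sophisticated.

```python
# Toy closed-loop positioning: measure the position, compare it to the
# target, and command the actuator by a fraction of the error.
# Gain and iteration count are made-up illustrative values.

target = 1.0      # desired position (arbitrary units)
position = 0.0    # measured position, starting at rest
gain = 0.3        # proportional gain (assumed)

for _ in range(50):
    error = target - position
    position += gain * error   # actuator step proportional to the error

print(round(position, 6))      # settles at the target
```

The trade-off mentioned in the papers shows up even here: a larger gain converges faster but, with real dynamics in the loop, quickly costs accuracy and stability.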
Upvotes: 4 <issue_comment>username_4: In general, my impression is that people overplay the idea that non-academic jobs carry some sort of stain to professors. Professors do like the idea of their ex-students staying in academia, but much more for practical reasons (continued collaborations, etc.) than because a student going into industry is seen as a failure of some sort.
It *is* objectively hard to switch from a non-academic into most academic roles. However, the reasons for this are similarly practical rather than ideological:
* If you have not published during your stint in industry, your CV may have "fallen behind".
* You may lack current references, or, more generally, connections in academia.
* Your salary expectations may not match up with academic realities.
* You may lack up-to-date "hive knowledge", e.g., how to write a good research or teaching statement, what to emphasize (and what to downplay) during interviews, or how to interpret a specific job posting, making it harder to write a strong application.
None of this is going to be a huge problem after only one year in industry, but taken together these factors make a return to academia increasingly unlikely the longer somebody stays in industry. You quickly reach a point where trading whatever career you have in industry for what you can realistically still get in academia is simply not attractive.
Upvotes: 5 <issue_comment>username_5: I know of quite a few examples of people who went into industry for several years after their Ph.D. in pure maths or after one or several postdocs, found the experience soul-crushing (albeit well paid), and successfully returned to academia, usually with renewed enthusiasm and ambition.
It is, however, not easy! These people are forced to compete with those who have been churning out papers in the meantime, learning new mathematics, developing new collaborations... Those who successfully return are usually quite brilliant.
Upvotes: 3 <issue_comment>username_6: *This is a story from the US, Electrical Engineering PhD*. It can happen. I applied for a professorship after several years in 'industry' and I had to **justify** going to industry in the first place. At the time I graduated there were about 800 applicants per tenure-track opening, so I couldn't get one. [800 may be a bit high of a count, but there were dozens of applicants per job]. I went into industry to earn a living. To a certain extent, such industrial experience is 'valued', but they also want to know that a person wants to be in academia. This can be a hard circle to square. In my case, I wound up going back to industry and similar organizational work.
Upvotes: 3 <issue_comment>username_7: I personally laid witness to my phd supervisor trashing another professor for going to a public sector/government position and then returning to the university faculty.
My phd supervisor is a small, petty person that would trash anyone at anytime for any reason (never to their face, of course). This is not specific to academia. Some people are just that way.
The best way for you to mitigate the worst in humanity --in any industry-- is to worry about your research/value first and foremost and hope that you are surrounding yourself with the people that are right for you. Personally, I would make exactly zero apologies for pursuing my own interest, but I dont fit in well in academia either :)
Your example of a postdoc-away-for-a-year would be much less subjected to your concerns, as this is generally a transitional period and not an established faculty member. Established academics would definitely encounter this more so than a recent postdoc.
edit: Heres the real question. Is your time away going to result in more funding or less? that is what actually matters.
Upvotes: 3 <issue_comment>username_8: Some of the best math in recent times was done by Mandelbrot on fractals while working in industry.
So this roughly corresponds to my current mission and trajectory:
* Academia has a bloated job market that underappreciates the talent
* Industry has a weakness in long-term thinking
*Solution:* Leave academia for industry to make both better
Upvotes: 1 <issue_comment>username_9: Some academics look down **on other academics** who look down on applicants who leave academia and try to go back.
First, looking down on someone, even for unrelated reasons, is itself unacademic, as **it distorts the meritocracy**. Leaving academia could be seen as leaving the meritocracy, but by the same token, coming back means entering that meritocracy again. It is about the actions, not the person.

In this situation there is actually a very real reason to look up to the person: they have experience outside academia.

That is of value almost by definition.

The best case is real-world experience in applying the related science.

**I have seen that be a natural cause for genuine admiration.**

Independent of that, the person demonstrates a strong long-term interest in the topic.

**There is one problem, in relation to the hierarchy between scientists.** As with any other occupation, you need some time to get back up to speed, for many reasons.

That is obvious to your peers, and expected. Technically speaking, your level of competence has dropped, and you are in a meritocracy.

So, in the end, academics may indeed look down on the person, but for different reasons, and in a different way, than you may have worried about.
Upvotes: 1
|
2021/08/01
| 1,897
| 7,850
|
<issue_start>username_0: I am doing my master's in Canada and my grant has already ended (7 months ago). I am paying my tuition fee and my expenses from my part-time job and my savings. My supervisor agreed that I should have my defense. We did all the paper work (this process took 2.5 months). We set up a poll system for a defense date and asked committee members to vote for their availability.
I created the poll system last week; it took them 3 days to vote, and in the end they did not agree on a date based on their availability. I noticed they pretend to be very busy and are not willing to make any compromises.
My visa is going to expire, I have no grants from school, I do not want to pay extra tuition fees as I have been waiting for this process for 2.5 months. I need to have my defense soon, I have things to do for the next weeks. Their behavior makes me very stressed. How do I make them vote for the following week? They are very jealous people who pretend that they do not have 2 hours in the following week.
How should I tell these things in an email? since my prof sent them an email, they did not take it seriously.<issue_comment>username_1: >
> My visa is going to expire
>
>
>
You need to elevate this outside of the "academic" circle and into the administrative one. Talk to the following people:
* student adviser (member of your department)
* international students office
* department chair
These are people who are professionally interested in you succeeding and helping you with paperwork / regulations. If your official status and other bureaucracy gets in the way of the academics (defense) they should help.
---
Do not use the following kind of language or even have the attitude of blaming your professors without evidence. You don't know their schedules and most likely they are not doing this out of spite:
>
> They are very jealous people who pretend that they do not have 2 hours in the following week.
>
>
>
Upvotes: 6 <issue_comment>username_2: You can try what aaaaa recommends and escalate the issue. Depending on your department structure this may or may not work (it probably wouldn't in our department).
What you can and should do in that case is (1) **directly** talking to the involved people (phone call, checking in at their office, or at the very least a personal email), and for this to work to (2) change your mindset and expectations around this matter.
That the paperwork took 2.5 months to complete isn't the fault or problem of your committee (unless they explicitly blocked it), and it might not even be outrageously long. Answering a poll within 3 days isn't particularly slow either. Hoping that a committee consisting of multiple professors would have a free slot that works for all of them *in the next week* was a tall order to start with - it's not surprising that the outcome of this poll is that no slot works for everybody. And, without knowing your department processes, it still feels like something went badly wrong in your planning if you are a week away from when you hope to defend without a specific defense date agreed by everybody (since that's normally one of the first things you do when you start arranging your defense, i.e., months ahead of the actual defense).
Most people *are* willing to work with students who are in a difficult situation and move some appointments around, but certainly not if you approach them with a mindset that they are "jealous", that it's inconceivable that they indeed find it difficult to agree on a multi-hour slot on very short notice, and that they are morally obligated to bend their work to your needs. If you are polite and show understanding that you are asking for a lot, I am convinced you will see that people are a lot more willing to help you make things work. That said, "next week" is almost always going to be a struggle - it may be more realistic to plan for two or three weeks in the future, or whenever is the next time slot that people can make work with some goodwill.
Upvotes: 5 <issue_comment>username_3: If your visa is soon to expire, can you go back to your home country and have the defence via video call?
I had my PhD defence a couple of months ago via zoom, it worked out fine. I believe my university is doing all of them online at the moment due to the pandemic
Upvotes: 3 <issue_comment>username_4: If your thesis director agrees it’s time to defend, then the other members of the committee will have accommodate your schedule. You have done the common courtesy of asking for availabilities, so someone else needs to take the bullhorn and get the message across to these busy people else the department or graduate school should be able to find substitutes and proceed.
I do not know of departments or people not (in the end) accommodating the candidate, although it may at times seem like you or your thesis director are trying to push a rope.
Upvotes: -1 <issue_comment>username_5: You gave them a poll and they voted. You chose dates that do not align with their calendars. I don't see the issue... You simply need to offer more dates.
* I noticed they pretend that they are very busy, and not willing to make some compromises.
If you have proof that they are pretending to be busy, then raise a formal complaint of negligence with the university. Otherwise, you are just assuming something that may not be true. I, for instance, am assuming that you understand what proofs and assumptions are if you intend to receive a PhD...
In the absence of proof, you should ask a more general question of availability... Like are there certain times or day, certain days of the week, or certain days that it is not possible? If there really is an underlying conflict, then probably you will be allowed to change the people on the committee. This is typically the student's responsibility.
If I were you, I would be careful about how you proceed. Professors are indeed very busy people. It seems to me that you are asking them to drop everything and do what you want. Furthermore, you are trying to make them look foolish and petty. How do you think it will play out during the defence if you try to make an ass of them now? You do realize that it is not a foregone conclusion that you pass..?
I understand that there are time constraints on your side. Those are really your problems though. In hindsight, it would have been wise to reserve some Save-the-date options months ago if you knew that you had such time constraints.
Upvotes: 3 <issue_comment>username_6: A few years ago, I had the same problem and I couldn't get an agreement on a date. I used a simple trick in the poll I sent to my busy defense committee members, and it worked.
Usually, students ask committee members to check the dates and hours they **are available** for the defense. I did the opposite and asked each of them to check the dates and hours they **are not available**.
It may look silly, and you might think the committee members would just swap the checkboxes. However, I noticed a significant difference in the poll answers. When you ask for availability, they only choose the best dates and hours for their comfort. When you ask for the opposite, they take a look at their calendars and check the conflicting dates and hours. You'd be surprised by the results.
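The trick amounts to computing a complement: collect the slots each member *cannot* make, subtract their union from all candidate slots, and whatever remains works for everyone. A sketch with invented names and slots:

```python
# Find slots nobody has ruled out. All names and times are hypothetical.

all_slots = {f"{day} {hour}:00" for day in ("Mon", "Tue", "Wed")
             for hour in (9, 11, 14)}

# Each committee member reports the slots they are NOT available.
unavailable = {
    "Prof. A": {"Mon 9:00", "Tue 14:00"},
    "Prof. B": {"Mon 9:00", "Wed 11:00"},
    "Prof. C": {"Tue 9:00"},
}

# A slot is feasible if no member has blocked it.
blocked = set().union(*unavailable.values())
feasible = sorted(all_slots - blocked)
print(feasible)
```

Asking for conflicts also degrades gracefully: an empty answer from a member removes nothing, whereas an empty "availability" answer removes everything.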
Upvotes: 4 <issue_comment>username_7: Working with your committee is like trying to herd cats.
One strategy is to set an early, inconvenient time and you may suddenly get replies from everyone. A sympathetic member of my committee suggested this and it worked.
```
Subject: Thesis Defense for @AVA123 Scheduled
A tentative date for @AVA123's thesis defense has been scheduled for Monday,
April 1 at 8 AM in conference room Romeo. If you cannot make this date and
time, please suggest alternate date(s) and time(s) when you are
available.
```
Upvotes: 2
|
2021/08/02
| 1,177
| 5,005
|
<issue_start>username_0: I have just seen on Google Scholar that one of my papers received 2 citations from a certain group of authors in 2 already published journal articles. However, the work they are doing has nothing to do with my work. It is not even the same discipline (control systems vs. material science). And the statement with the citation is something that is not even closely related to any of my papers. So I am quite confused as to how this could happen. I do not have any problems with their citation. It is just completely wrong.
Now I am wondering what is the most appropriate thing to do. I see 3 options:
1. Don't do anything
2. Contact the authors and tell them that they are wrong about my work (and that my work is not even closely related to their work).
3. Contact the Editor of the journal and tell them about it.
I'd appreciate if you can share your experience on such topics.<issue_comment>username_1: I suggest that you just write to them (jointly) and ask what they believe the connection to be. They may have something interesting to tell you about their thinking, so give the mail a "neutral" tone, simply requesting information.
I wouldn't tell them they are wrong, and I wouldn't complain to any editor until you know more.
Perhaps they see some synergy that you are missing. It might even open the possibility for some cross-discipline collaboration.
It is possible, of course, that it is just padding or trying to benefit from your good name, but it might be something with actual value.
Upvotes: 4 <issue_comment>username_2: Start from asking what your ideal outcome would be, and then work backward to how you can get there. Or at least use the process to identify *whether there is anything to do* to begin with.
The point is: There is no actual harm to you from this citation: The worst that can happen is that someone reads these other papers, follows the links to the citations, and realizes that your paper isn't at all related to the other two sources. That isn't your fault: *You* didn't put the reference in. It just reflects poorly on the other authors. On the other hand, you just earned two citations that might be useful for getting a new job, getting promoted, etc.
As a consequence, I can see that it is disappointing to open your email and see that you got two citations, and then to find out that they are mistaken. But I cannot quite see why that would be worth doing something about. There is nothing for you to gain by following up with the authors, nor is there likely anything to be done about the issue: Journals aren't going to retract or replace papers just because there is a mistaken reference in it. It's true that you can spend a substantial amount of time on the issue, but I see no outcome or reason that would warrant the time investment.
Upvotes: 4 <issue_comment>username_3: For a slightly different take on this, compare it to general web hyperlinks.
If I create a web site, I can include links to anything & everything: Google, articles in major newspapers, government web sites, blogs, Stack Exchange questions, etc. Nobody can do anything about it, and it doesn't really matter! If I link to the [New York Times](https://www.nytimes.com/) does that mean that I have anything to do with the New York Times? No. Does it mean that the New York Times trusts *me*? Absolutely not. All it means is that I find the New York Times to be a useful place to visit for some reason. Perhaps I agree with their editorial opinions, or perhaps I disagree. Or I may just like their focus on New York City. Or whatever.
However, if I am perceived as an **expert** then my link means something. Not a lot, but something.
This is in fact one of the key issues with Search Engine Optimization. If I link to 1,000 other places, then my web site isn't any better or worse (well, could be worse if I am perceived as a link farm). Those 1,000 places don't get any benefit *or harm* from my link, since it is something beyond their control. But if I am an "expert" (e.g., a major well-regarded subject matter web site) then my link to you will count for "something". Enough "somethings" and search engines will figure out that maybe, just maybe *your* website is itself meaningful because of all the "experts" that point to it. But the arbitrary web sites (hey, read my Facebook page, I visited xyz web site!) won't affect that search engine ranking in any significant way.
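That "enough somethings" mechanism is essentially the intuition behind PageRank: a page matters if pages that matter link to it. A toy power-iteration sketch, where the four-page link graph is invented and 0.85 is the commonly cited damping factor:

```python
# Toy PageRank by power iteration: rank flows along links, so a page
# nobody links to keeps only the small "teleport" share of rank.
# The link graph below is made up for illustration.

links = {
    "expert.example": ["you.example"],
    "you.example": ["nytimes.com"],
    "blog.example": ["nytimes.com", "you.example"],
    "nytimes.com": ["expert.example"],
}
pages = list(links)
rank = {p: 1 / len(pages) for p in pages}
d = 0.85  # damping factor

for _ in range(50):
    new = {p: (1 - d) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += d * rank[p] / len(outs)  # p shares its rank with its outlinks
    rank = new
```

In this made-up graph, the page with no inbound links ends up with the lowest rank, which mirrors the point above: your outgoing links cost the targets nothing, but inbound links from well-regarded pages are what raise your own standing.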
Back to Academia. There are much more definite rules for how/when citations should be used in scholarly papers than hyperlinks in blogs (which have no rules at all). But you could have journal citations that reference something useful (prior relevant research), something reasonably interesting (similar fields) or something seemingly irrelevant. Unless the article that includes the citation is itself from a top author or (due perhaps to relation to current events) the article becomes widely cited in general (non-scientific) media, **it doesn't matter.**
Upvotes: 1
|
2021/08/02
| 422
| 1,900
|
<issue_start>username_0: This is my first publishing experience (field: social sciences). I have an article accepted for a journal (sage publications) after minor revision. I have undergone the proof read and revision stage (submitted my proof and responding a week ago and my open-access option).
But today, I received an email from the editor asking for further revision (actually, a challenging one). The editor writes that there was a miscommunication on the journal's side and I shouldn't worry because my paper is still "accepted." I am writing to ask if this is a rare situation? Do you really have to go through another process of revision?<issue_comment>username_1: I won't comment on how rare it is other than to guess that for something major it is pretty rare - and a bit odd. But until you have something that can be considered a contract where you are, they can require it. Not knowing the actual request, it is impossible to say if they would be justified in withdrawing acceptance if you don't comply. There might be some regulatory reason for the request, in fact.
Probably you should do this, unless you are willing to withdraw the paper and submit elsewhere. But you need to judge whether the request is *reasonable*. If so, it might be in your best (career) interest to make a final revision.
One option that is open, is to discuss the situation with the editor and determine the reason and the necessity.
Upvotes: 2 <issue_comment>username_2: **Yes**
Ultimately nothing gets published unless the editor(s) approve of it, so if they request revisions you'll have to make them.
As for whether it's rare, the answer is also yes. Usually there are no more revisions requested after a paper is accepted. The underlying reason this is happening is, as the editor says, a miscommunication on their part. People are more likely to communicate correctly than incorrectly.
Upvotes: 3
|
2021/08/02
| 694
| 2,966
|
<issue_start>username_0: Suppose, someone is taking a course that requires a programming language to be known by the student.
Suppose, the course is [*Optimization*](https://www.youtube.com/playlist?list=PLDFB2EEF4DDAFE30B). The teacher proposes **MATLAB/Octave** to be the programming language of choice. However, the student that we are talking about doesn't know MATLAB/Octave. So, he proposes the teacher allow him to use another programming language, say, **Python**.
What should the teacher do in this case? Should he allow the student to use Python, or should he tell the student to stick to MATLAB/Octave?
Explain why.<issue_comment>username_1: That's up to the professor. Maybe the professor knows that a lot of the work to be done for this course is well supported by existing Matlab packages but that there are no good Python packages. Or the other way around and the professor wants students to implement things from scratch, rather than just use black box methods.
It's also possible that the professor just doesn't know any other programming language and wants to be able to give thorough feedback on student submissions.
You can certainly propose using a different programming language. Whether or not the professor allows you to do so, who knows.
Upvotes: 4 <issue_comment>username_2: First, do not ask it here, ask your teacher.
It depends on what the aims of the course are. If it is part of the aims to teach MATLAB (for example, because later courses will build on this knowledge), then the answer you will get is a "no, sorry" - if you are lucky you even get the reason why not. However, if the programming language is irrelevant for the course requirements, it is possible you will get permission to use something else.
As a teacher, I did allow my students to hand in their assignments using different software but I warned them that, if they get stuck, I will not be able to help them. I had a few students who chose this option, they all asked it individually.
And a final point: also consider that it could be beneficial for you to learn a new skill.
Upvotes: 4 [selected_answer]<issue_comment>username_3: Many novice programmers regard 'their' language with religious-like fervour. They gather/communicate with fellow enthusiasts, they evangelise to those who use other languages, they read books and journals with 'Python' (or whatever) in the title: it becomes an important part of their self-identity. They support their chosen language just as some support their chosen football team.
They have to grow out of this phase. For a 'Python programmer' to write a MATLAB program is not a betrayal or an act of apostasy. A programmer needs to know several languages, so they can adapt to different problems and different situations.
Being a student is supposed to be about learning new stuff, not sticking with the comfort of what you know. This student should seize the opportunity to learn a second programming language.
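For course-level numerical work the gap between such languages is indeed small: the same toy exercise, minimizing f(x) = (x - 3)^2 by gradient descent, is a few near-identical lines in MATLAB/Octave or Python. A Python sketch with a made-up step size:

```python
# Gradient descent on f(x) = (x - 3)^2, a typical optimization-course
# warm-up. Step size and iteration count are arbitrary choices.

def grad(x):
    return 2 * (x - 3)      # derivative of (x - 3)^2

x = 0.0                     # starting guess
for _ in range(200):
    x -= 0.1 * grad(x)      # fixed step size, chosen for illustration

print(round(x, 6))          # converges to the minimizer x = 3
```

The MATLAB version differs only in syntax details (`for k = 1:200 ... end`), which is exactly why picking up the course's required language is a minor hurdle and a useful skill.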
Upvotes: 3
|
2021/08/02
| 4,994
| 21,167
|
<issue_start>username_0: I do not understand why universities are putting “pressure” on staff to obtain research grants. Well, in certain fields, research grants might be important for getting instruments or equipment, but in other fields, what we really need in order to produce research outputs is really just a good library.
This question might seem naive, but it seems no one is willing to explain this to me in a frank and clear manner.<issue_comment>username_1: Quoting [username_8's answer](https://academia.stackexchange.com/questions/100923/how-are-junior-professors-evaluated-for-promotion) to another question:
>
> Funding is all. If you don’t get funding, you can’t do research. If you don’t get funding, you definitely won’t get promoted. In my university, if you don’t get funding you won’t pass your probationary period to get a lectureship, and at many places if you don’t keep getting funding, you are in danger of losing your job. Not all funding is equal, because not all funding comes with overheads: that is, if I apply for a grant to buy a $ 100,000 piece of equipment and the funder gives me $ 100,000, then the university gets nothing. There are different ways of seeing this. You could see it as the university wanted to profit from the grant. Or you could see it as the university needing to find the funds to pay my salary, heat, light and clean my office etc. Making the university a lot of money is primarily what gets you promoted.
>
>
>
Funding lets you hire students/postdocs, buy equipment, buy computers, etc. The overhead on the grant also helps pay the electricity bill, pay the cleaners, pay for the library, etc.
Money makes the world (and universities) go round.
Upvotes: 4 <issue_comment>username_2: * Universities do **not** apply for grants to get grant money. Grant money is spent on expenditures the university would not have made if it did not have the grant. Receiving a grant usually causes the university to lose money. The university must spend money on compliance, administration, space, utilities and grant applications (70-80% of which fail). The university may receive "overhead" money from the grant agency, but this is less than the costs.
* Universities **do** apply for grants to gain prestige. Grants are the ultimate competition in academia. If you win grants, you are the best. The research resulting from the grants also contributes prestige, but that is secondary.
Upvotes: -1 <issue_comment>username_3: Overheads from grants are a substantial net source of income: if your grant is X, the university (in the US) will directly claw back a non-negligible percentage from X to keep the lights on (the researcher gets X-overhead). In Canada, overheads on Tri-Council grants are usually paid on top of the grant, directly to the institution, on a sliding scale (less than most US places I know).
It is true there is cost incurred (electricity, heating, maintenance, admin etc) in doing research but if you run a Uni you have these costs anyways so the *marginal* cost to the university of having researchers is much smaller than the overhead amounts they receive. Research income is increasingly crucial in times of diminishing (state) government funding of higher education.
There are multiple indirect benefits. Because grants are competitive, research income is often taken as a proxy for the quality of the faculty. If you have research-active faculty, they will need graduate students who usually bring in tuition. Research is also a recruitment tool: research opportunities for students are often a major factor in selecting a university. It is also a fundraising tool: [Phil Knight](https://en.m.wikipedia.org/wiki/Phil_Knight), the founder of Nike, has endowed [multiple research chairs](https://en.m.wikipedia.org/wiki/Philip_H._Knight_Chairs_and_Professorships) at the University of Oregon and at Stanford (in addition to multiple hundreds of millions of dollars in donation to the universities.)
Sometimes, faculty members will start a spin-off company based on research results they obtained, and universities will get a slice of those revenues. [Gatorade](https://en.m.wikipedia.org/wiki/Gatorade) is a product developed at the University of Florida, where the sports teams are named “Florida Gators” (hence the name); the university got a piece of those revenues (Gatorade was sold to PepsiCo, so I don’t know how things sorted themselves out after.)
Now… one should really take this in context because in some areas (philosophy, music, English literature etc), there are very few grants, and it doesn’t mean faculty in these departments aren’t doing research or don’t have good ideas.
Upvotes: 4 <issue_comment>username_4: Given that other responses are quite US-centric, here a counterpoint: in Germany (and possibly many other continental European countries), there used to be no pressure for grant money for regular-sized research, as departments endowed their professors with sufficient funds for typical research in their field. Exceptions would be large-scale projects which required significantly increased budgets or special governmental programs one could apply for.
Around the 2000s, this model changed and the US model was increasingly adopted. It is difficult to point to a specific reason why this happened. Notably, German universities typically get their money from the state and not from endowments, so essentially this constitutes a redistribution of governmental funds in what is considered a more competitive process. It is not clear whether this is indeed the most effective use of funds, as researchers' time is now invested in writing proposals, reviewing them, writing reports, and so on; moreover, the high failure rate of such proposals increases the volatility of research funding and leads to high churn.
As for the argument of overheads, again this is effectively just an accounting model. In the UK, it was explicitly changed from a "the university pays for the infrastructure, the government pays the salaries and equipment" to "everything is separately accounted for".
It is again not easy to disentangle to what extent this has really changed how money is being used, or whether it is now used more appropriately; many such accounting rules are conventions, but one consequence is that it changes the price tags on projects, reducing the fundable project volume, while the institutions need to run their infrastructure whether or not they run a project.
There is no question that there are ambitious project ideas that require funding beyond the regular, and for these, grants are essential. However, the idea that even the funds for regular research activity need to be competitively applied for distinctly originated in the US, due to the specific structure of the US university funding model.
It is not clear at all that the European university funding model needed such a strong veer towards the US research funding model; it was a specifically political decision.
One possible explanation was the intention to save money. If that was the intention, it is not clear whether it was successful, because with the significant friction of competitive funding processes, the output per invested money may actually have gone down instead of up (note that in, say, Germany, this is mostly government money anyway, whether in the universities, or the funding agencies).
A second possible explanation is the desire for competitiveness. The idea is that researchers with higher quality research should get more money. This may be partly working, but it also favours proposal-churning models of operation and not-too-adventurous and risk-averse research, no matter what the calls say. Very outlandish projects are not likely to be funded.
A third possible explanation is gatekeeping. Funding distribution models permit more influential researchers (and political agents) to direct research agendas. Instead of curiosity-driven research, this enables agenda-driven research. To be fair, if one bewails the loss of curiosity-driven research, one should remember that over most periods, research was mostly agenda-driven, whether alchemists hiding gold dust in their ovens or Galileo giving out telescopes to his patrons. The concept of a curiosity-driven scholar thrived in particular pockets of time and space and was never a universal feature of universities. Nonetheless, one will notice that some of the most lasting and celebrated achievements of science emerged from permissive niches of curiosity.
These are several reasons I am aware of, maybe I will extend this list with additional ones.
Whatever the reasons, with the adoption of the US-type perspective on research funds, universities gain not only money, but prestige from being able to acquire grant money. As such, grants are considered a way to measure the research prowess of an institution, with large grants indicating more prowess. Funding volume follows scientific excellence with significant delay and averaged over time; rarely does the volume of a project directly correlate with its scientific value (with some exceptions, again, such as large-scale instruments and experiments).
**TL;DR:**
While ostensibly universities need grant money to fund personnel and overheads, in its basic form this is a US-centric perspective. For Europe, this is not ultimately an explanation, as there used to be different funding models for universities. The move towards a competitive grant-based funding model has many potential reasons, including, for instance, the desire to increase competitiveness, to save money, or to be able to set the agenda of scientific research.
As such, giving importance to external funding is a political decision, and as a consequence, European universities have begun to vie for grant money not only for reasons of finance, but also of prestige, power and influence.
Upvotes: 6 <issue_comment>username_5: I’m in pure math, so my perspective is that of someone who really only needs “a good library” (and these days, not even that a lot of the time) to produce good research. Indeed, in my area funding is a lot less crucial than in the experimental sciences that rely on large amounts of both infrastructure and human labor. In that sense, there is a kernel of truth to what you’re saying.
Despite that, your question still contains a false premise. Even “a good library” employs a fair number of people and costs money to build and run. Likewise for all the other buildings and offices where all those smart brains that “only need a good library” work their magic. More importantly, those brains belong to people, who need to be paid for their efforts. Many of them are graduate students, who need to be trained to do research by much more experienced (and expensive to employ) faculty before they can begin to produce anything approaching good work. So, the research component of a university, even in purely theoretical areas, ends up being a rather costly enterprise to run, with high expenditures on both physical and human infrastructure. And of course, unlike a for-profit corporation, there is no obvious source of incoming funds to fund those activities based on their immediate economic value to society, which is close to zero.
Anyway, the pressure that you mention is not equally strong in all areas. But it is a fact that the research mission of a university needs money to operate, and grant funding is one of the main mechanisms for obtaining that money. Applying for grants is less fun than most other components of a researcher’s job, so universities come up with various ways to, er, motivate their researchers to be successful in that aspect of their job. It is not inconceivable that such incentives are designed in a way that makes them perceived as “pressure” by at least some people.
Upvotes: 4 <issue_comment>username_6: >
> [I]n certain fields, research grants might be important for getting instruments or equipment, but in other fields, what we really need in order to produce research outputs is really just a good library.
>
>
>
At least in Germany and Austria, but probably also in other EU countries, grant money funds a large chunk of positions (salary etc.). Many postdocs apply for grants to fund their own (precarious fixed-term) position these days.
Upvotes: 3 <issue_comment>username_7: This is not a naive question, but as you can see from the many good existing answers, how universities are funded isn't straight-forward (and grants are a substantial part of it).
I will try to explain based on my own university ([Chalmers University of Technology](https://www.chalmers.se/sv/Sidor/default.aspx)). Some of the specifics will be, well, specific to our university, but in broad strokes I have seen similar dynamics at play at every university I worked at.
We are a well-known, privately owned research university in South-West Sweden. Despite being "private", we are tightly integrated into (and a substantial player in) the Swedish educational system, meaning our positioning in practice is somewhere between a traditional European public university and a "real" private university.
A research university with large undergraduate programs, such as Chalmers, really has two jobs:
1. "Education" (in form of bachelor, master, and PhD programs, as well as lifelong learning)
2. "Research" (including basic research, innovation, technology transfer, etc.)
This two "jobs" bleed into each other (one of the key ways how innovative ideas are "transferred" into industry is by teaching undergraduates, and PhD education and research are so closely related that the border is basically impossible to draw), but those are essentially the two big things the government and society at large expects from us.
Unsurprisingly, both of these "jobs" cost money. Education needs teachers, research needs researchers, and both need administrative support staff. Buildings and other infrastructure need to be maintained, and many other small and large costs accrue. Sadly, neither of these "jobs" actually *generates* substantial money directly - in Sweden education is free (students pay no tuition, even at our private university), and with the exception of the rare patent- or spinoff-generating research, even world-class publications do not pay salaries or utility bills. At Chalmers, just like at most universities (at least in Europe), all our core functions are substantial loss leaders.
Hence, Chalmers needs income streams. The government pays us for graduating students, which is sufficient (more or less) to cover the "education job" at least on bachelor and master level.
However, who pays for research and PhD education? As a private university, we do not directly get money from the government to hire research staff. We do have an endowment, but it's not the size of typical US endowments. Hence, the only way to actually do research is by getting some external organization to pay for most or all of it. **This is exactly what a grant is.** Importantly, this typically includes all costs associated with doing research, not just equipment: salary for PhD students, some prorated amount of salary for administrative personnel, offices and other infrastructure costs, travel expenses, and laptops. Importantly, at Chalmers, even the researcher themselves will often "cover" some fraction of their own salary from their grants, to "buy out" themselves from other university duties.
In conclusion, *even a researcher who needs zero equipment outside of a library* still needs grants to actually conduct research. Otherwise they will have very limited or no money to fund students, limited money to travel or publish, and potentially no time to actually work on the research since they can't "buy themselves out" from other university duties. The key idea to understand here is that grants are really how universities pay for the "research job" they are asked to do, since doing research, by itself, does not generate money.
Upvotes: 4 <issue_comment>username_8: @CaptainEmacs' answer does a good job of explaining why it doesn't have to be this way, if the world were to choose a different system of university funding, but it doesn't really explain why universities behave the way they do given the current system.
Other answers deal with the question of overheads, and there is some discussion of what the marginal overhead costs of research are, and how much of these would be costs the university would have regardless.
But there is one big item that the university definitely has to pay regardless, and that is the salary of the academic.
My contract says I spend 40% of my time teaching, 20% on admin and 40% on research. Let's just say, for the sake of argument, that student fees cover the 40% teaching time, and half the admin time. That still leaves 50% of my salary to find from somewhere.
UK universities (where I am) don't have endowments - what capital they do have (generally in the $10m's rather than fractions of $1b) is generally kept as cash reserves, rather than income-generating investments. There is some government block grant for research, known as the quality-related research grant, but this does not cover everyone's salary.
If I charge 10%-20% of my time to a grant (which is average), that is 10%-20% of my salary the university doesn't have to find from somewhere else.
But what of the costs of applying for the grants? Is that greater than the 10% I get? Well, no, because generally if my funding is running out, I write new applications on top of all the things I was doing anyway, in the evenings and weekends, not instead of them - thus while there is a cost involved in applying for grants, often the individual, rather than the university, foots that cost.
---
To turn briefly to CaptainEmacs' point about why things have changed: Firstly, they haven't changed that much in my field, in the UK - research funding was still important 40 years ago, but perhaps not quite as much as now. So what has changed?
Well, one thing that has changed is that we have gone from 10% of young people going to university, to 50%. That means a large increase in the number of academics to teach them (not quite 5x, but still quite a large increase). If all those academics were traditional research & teaching positions, it would have meant a similar increase in research. But the increase in government research funds is way, way less than that.
One solution would be that everyone spends less than 40% of their time on research (i.e. a change in the classic 40:40:20 model). But instead what happened was that some people funded their research through grants and stayed 40:40:20, while others didn't and became almost fully teaching-focused, leading to research-intensive institutions and teaching-focused institutions, whereas previously all universities had been "research intensive".
Upvotes: 3 <issue_comment>username_9: I would say it all boils down to a fundamental lack of leadership, at both the governmental level and the university's level. Governments' motives when it comes to education are typically not in the best interest of students or universities, but still they have to be unhappy bedfellows and somehow coexist.
I personally know that the great decline of my local university coincided with the day when an accountant was made chancellor. I personally know of music students who were accepted to a music program with so little theory training that I knew they had virtually no chance of passing their first year of harmony studies. The standard of acceptance gets lowered to anyone who can pay, but what is required to pass your first year remains the same. The university accepts students who they know for certain have no chance of passing, but because they can pay, they come, they pay, they fail, and then they leave. None of this is a problem for an entity driven by profit.
When I think about the sad state of these affairs, I think about what I once read on the University of Oxford's website. They mention that the whole music department on average admits 70-80 new students every year. The department also has around 30 full-time members of staff.
I cannot help but think to myself that there is probably no way that such a low student-to-teacher ratio could be managed without it costing the university more money than what it makes. That could probably be said about many departments at Oxford.
Do you think the Rector at Oxford would ever consider closing the music department over something as vulgar as money? A discipline that has been a part of Oxford since the middle ages? Money is not the metric of its success.
There are for sure some liberal arts colleges in the US which I would love to be a part of. <NAME> is a whole university dedicated to the character traits that I admire most in women. Does that university make lots of money? Probably not. Does it produce women whom I would consider myself lucky to know? For sure.
It just seems unfortunate to me that money is how we measure the success of higher learning; it is a rather one-dimensional outlook on what its influence on society can be.
Upvotes: 0
|
2021/08/03
| 462
| 2,023
|
<issue_start>username_0: I have a master's degree in genomics and data analysis from Aix Marseille University.
Unfortunately, I haven't done so well in my master's courses and have gotten bad grades all along, and my annual average was very low (11.63/20). I had financial difficulties, so I had to work on the weekends and sometimes after classes, and I wasn't very focused on getting the best grades. Another factor in my failure was that I had to move to another country and depend 100% on myself.
During my master internship I did well, and my supervisor offered me a contract to work as an assistant engineer for a year in a research lab and was featured in a recently published paper.
I want to have a research career and have decided that I would like to get a PhD in a French university. However, I am very hesitant and feeling defeated because of my low marks. Is there any chance for me to get accepted in a PhD program?<issue_comment>username_1: Master grades are only one of the criteria used to assess potential PhD candidates, and there is so much variation between different masters that it's not used as an absolute measure. Of course good grades help, but in general I would say don't worry too much about it. If needed don't hesitate to clarify that you had some personal difficulties, but you might not even have to.
Try to emphasize your motivation for doing research and highlight the good experience from your master's internship. If your master's supervisor can write you a good recommendation letter, that would certainly help as well. If possible, show that you have some of the skills needed for a PhD: for example, organisational skills, writing skills, and a good level of English. And of course, try to target a supervisor who works on a domain/topic that you're interested in.
Good luck :)
Upvotes: 2 <issue_comment>username_2: The grade in itself may not provide a complete picture (scales may differ between universities and fields); your class ranking, if you know it, might be important as well.
Upvotes: 1
|
2021/08/03
| 376
| 1,587
|
<issue_start>username_0: I have a paper in the AAAI proceedings. Before the proceedings were published, I put the paper on arXiv. Now the AAAI version of the paper doesn't appear in Google Scholar, only the arXiv version.
How can I fix it? I know that the proceedings have already been crawled by Google Scholar, because I have another paper in them which is already in my Google Scholar profile.<issue_comment>username_1: One suggestion: the versions are not identified as identical. That is, the final version in the proceedings is not the same as the arxiv.org version, e.g. you might have addressed some reviewers' comments. Possible fix: update the arxiv.org version.
However, to be frank, this is just a guess, since I am actually facing the same issue for one of my papers and a simple update did not work. I also noticed that in my case the abstract contained a URL and some tex formatting that was not properly shown on arxiv.org. Not sure if that is causing any issues.
Upvotes: 0 <issue_comment>username_2: If you go to your Google Scholar profile there will be a list of papers.
At the top of that table is a "Plus" sign. If you click this and then "Add Articles" you'll have a dialog of papers Google thinks might belong to you and can also search for articles. I often find alternate versions of my papers listed here---sometimes because Google doesn't recognize them as mine, sometimes because it's reading an inexact citation.
You could also consider adding the article manually under that same dialog, though it may not accrue citation information.
Upvotes: 1
|
2021/08/03
| 2,766
| 11,784
|
<issue_start>username_0: I had to take a group oral exam on Microsoft Teams. The exam situation that we received was exactly the same as one of the practice situations that the professor had given us during class.
The examination officer was in the room with us during the exam. She told us the reason she couldn't send us the pdf of the exam was that they apparently reuse the same situation every year. We did get different questions about the situation though. So it was not entirely the same, but we definitely benefited from having worked on it before.
In another class, the professor gave us all the open book exam questions during class as exercises. So during the exam, I literally just copied all my work into the exam. Some questions were a bit different but not many. Needless to say, if you had done all the exercises and attended class, you would've passed with copy-paste.
In other classes, the professors just change the numbers or the questions a little. Again if you attended and did all the exercises, you would've passed.
Is that normal?
Sometimes I have the feeling that high school was harder. At least the teachers had higher expectations and came up with new exams each year. It felt way more intense, but I also had more classes in high school than now.
Thanks :)<issue_comment>username_1: Different professors have different ideas/philosophies about the relative place and merit of exercises vs exams. I don't think that what you are seeing is ubiquitous, but some will see exercises as nothing more than practice for exams. Another factor that is related is the relative importance of exercises v exams in grading.
I am, however, surprised that you've run in to so much of this already. I'd expect that most students will see some of it, but not repeatedly.
It may be, though I can't say for certain, that the professors put a very low value on teaching in relation to their research. There might be other explanations, such as too many part-time faculty.
My advice is to do what you have to do to learn. Read. Do a lot of exercises. Ask a lot of questions. I doubt that you will have much success in changing this situation. Hopefully you will find more challenge as you go along.
Upvotes: 4 <issue_comment>username_2: Unless the main goal of "education" is gate-keeping and filtering, I think there is no mandate for *teachers* to create *difficulties* for students.
I do realize that there is a huge tradition of challenge-and-response, and other combative stuff. (I'm in math in the U.S., ...)
It was an epiphany for me to see, in grad school in math at Princeton in the 1970s, that genuine math [Edit:] at the edge of what is known, active research [end-of-Edit] is already so challenging that there is no point in creating fake/artificial challenges. Rather, senior people should *help* beginners dodge difficulties. Not *create* difficulties for them.
Also, as in the design of our Written Prelim exams for grad students here at my univ, there is really no sense in coming up with crazy questions. Rather, there is a fairly short list of iconic (and important!) issues that we'd hope our people can respond to reasonably. [Edit:] Even though at a graduate level, this material has been worked-over and refined over the course of years, and operates smoothly. [end-of-Edit]
That is, in fact, [Edit:] well-established [end-of-Edit] mathematics is... if done well... quite simple, useful, memorable, etc. Not hard.
(I'm not a fan of its use as a filter/gatekeeper...)
Upvotes: 5 <issue_comment>username_3: There is only so much different stuff you can prepare students for in the expectation that they'll be able to deal with different material.
That being said, I remember a multiple-choice exam in Theoretical Electrical Engineering where a lot of questions were quite similar to those in old exams (which students collect and do practice runs with) while subtly differing in a few words. Make no mistake: the principal person responsible for that exam was out to get students in more ways than just that.
I think there was a non-trivial number of students scoring below the third of correct answers you were expected to get by just making random choices.
Note that this was in Germany (at a university considering itself an elite university), where passing ratios are not prescribed and there are no tuition fees that would entitle a student to anything. Even then, that particular mandatory course was somewhat singular in its reputation.
The question is what kind of goal you are trying to achieve with that kind of testing.
Upvotes: 1 <issue_comment>username_4: Normal when? Where?
When I was in uni (in Germany), studying CS with a side dish of maths, some decades ago, there was no concept of preparing for tests by using previous tests (or even exercises), at least in the smallish group of fellow students I was regularly in contact with. You would listen to the lectures (or not); you would go to the exercises (or not); you would do your homework (or not). Except for some specific profs, nobody cared either way what you were doing, as long as you passed the exams.
It was on the individual to both pick their courses of interest (with a few mandatories), and to learn as much and as deep as they wanted. In the actual exams, it was understood that all topics of the semester had a chance to come up.
I do not recall if I ever had a déjà vu during a test; certainly I did not repeat all the exercises before the exams. To give you a comparison regarding the difficulty: the exams were no pushovers - I did go to the lectures and I did my homework, but I was otherwise quite chill back then, having the good fortune that those topics mostly came pretty easily to me due to interest and previous experience. I finished quite nicely, but at the exams that today would be called the "bachelor" stage, roughly 50% of my fellow students were weeded out, mostly through mandatory maths courses (CS was and probably still is part of the maths department at my uni, and it showed). My state was (back then, before the madness of [PISA](https://nces.ed.gov/surveys/pisa/) came along) considered to have a tough school and uni system.
So, no, what you are asking was not normal in my area at that time, and I would consider what you describe as either foolish or lazy. I do not know exactly how it works today in my country, but what I witness from school and early uni education from my children, I wouldn't be surprised if things like that happen here these days as well.
Upvotes: 2 <issue_comment>username_5: The situation in an exam is quite different than in real life. In real life, you usually have much more time to think. You can solve problems in your home or office, in a relaxed atmosphere. You can consult friends, search the Internet, etc. In the exam, you cannot do all this. So, arguably, the homework assignments are the true measure of your ability.
On the other hand, grading only based on homework assignments is problematic, since some students copy or buy solutions. So the compromise is to have an exam very similar to the homework, with some small technical changes. If you did your homework on your own, then the exam should be easy; if you copied / bought a solution, you will most probably fail due to the small technical changes.
To sum: your teachers probably regard the homework as the real challenge; the exam is just a way to verify that you did the homework by yourself.
Upvotes: 6 <issue_comment>username_6: >
> She told us the reason she couldn't send us the pdf
> of the exam was that they apparently reuse the same
> situation every year. We did get different questions
> about the situation though.
>
>
>
That sounds like your course might be using a standardized exam. Your instructor may not have any control over the content of such an exam. The department mandates that all sections of course X use these pre-written tests and the instructors just have to go with it.
In my own personal experiences, courses with standardized exams like that tended to be taught such that students had the best chances to score well. A standardized exam means the department can compare results between sections to evaluate how well an instructor is doing relative to their peers. That gives instructors an incentive to teach to the test and ensure the best passing rates possible. It's debatable whether the results of such instructor comparisons are meaningful or whether any of this improves student learning, however it can explain your instructor's "teach to the test" attitude. This same sort of thing is also *extremely* common with standardized tests in high school (it's one of the biggest arguments against standardized testing).
You didn't specify what year you were, but the courses like this that I experienced were first-year courses. Students are coming from different high schools and different locales, each with a different curriculum and level of rigor. A course that seems easy to you might be much more challenging for a student from another state/country where high schools demanded less than yours did or had a different curriculum. Courses like this allow the difficulty to ramp up slowly instead of just jumping into the deep end (which can disadvantage students who were unfortunate enough to attend a lower-quality high school). By the time I got to my third or fourth semester, though, courses like this were long gone.
Upvotes: 2 <issue_comment>username_7: In addition to the other answers…
The first thing to realise is that in many universities the lecturers are researchers first and lecturers second. They are also not usually held to the strict standards that the majority of school teachers have to attain (e.g. university lecturers in the UK do not need to do a PGCE, whereas all state school teachers need that qualification or equivalent). This means that lecturers haven't always been versed in the various methods of teaching (although arguably those courses aren't always that useful for day-to-day teaching).
Universities also tend to set their own standards, as do the individual lecturers, so there is much more variability than there would be for a school which takes part in national standardized qualifications.
Having said all that, there is very real criticism that education in many areas has evolved from teaching to learn skills to teaching to pass exams, and there are papers in educational research highlighting this problem.
Upvotes: 0 <issue_comment>username_8: If you did your homework and you did actually learn the material covered in those home exercises then you should do well in an exam. Why not? And that applies even if the homework isn't graded and does not count towards the final mark. Ours (20 years ago) was just voluntary. Only exams counted towards the credit and only the final exam counted towards the final mark. But if you did all the exercises from the book and you understood how they are solved, you were well prepared for the final exam. I think it should be like that.
That doesn't just mean testing that you did the graded homework you submitted yourself. Even if the exercises are done without anyone checking or grading them for you, if you just follow the exercise book, do your exercises, and understand the math or other logic behind them, you should be well prepared for the exam problems. It would be bad teaching, in my opinion, if that weren't the case.
Upvotes: 1 <issue_comment>username_9: Yes. It might encourage rote-learning rather than understanding, but good "results" make the boss of the professor happy because the boss is judged on producing good exam performance data. If this sounds horribly corrupt to you, I can only say: Welcome to planet Earth!
Upvotes: -1
|
2021/08/04
| 2,376
| 9,698
|
<issue_start>username_0: I am a first year PhD student. The amount of funding that I receive, in my opinion, is bare minimum. After covering all of my expenses, I'm hardly left with anything or in some of the months, nothing. And, let me tell you that I'm a thrifty and economical person. I'm not spending my money carelessly (tbh, I won't be able to even if I want to). Also, I have heard some mathematicians complaining about how difficult it was for them and their families to manage during the initial years of their post-doc.
Also, the kind of work we do is not "easy". I'm not saying that people in other professions have an "easy" life, but I think there are many professions (mostly outside academia) where people work less hard than we do and still earn more. Many people don't even get enough time to enjoy life with their families because of their research, seminars, supervision, etc.
I know I'm just a first-year graduate student, and I won't be earning a lot. But looking at the bigger picture, it makes me wonder: are mathematicians (including graduate students) paid enough? Or, in general, are people in academia paid enough? If not, then why are people not trying to change the situation?<issue_comment>username_1: I went to grad school in math straight from undergrad. Looking back, I definitely think that students considering grad school should make sure they have some financial stability before going to grad school. I would advise my former self to work for a few years at a "real job", save up some money, start a retirement account, etc. Then, once you have that, you can go to grad school without worrying too much about not having much of a salary, because you have money saved up and you're not living paycheck to paycheck, so to speak. And your retirement account money will be accruing interest while you're in grad school.
As pointed out in the comments, salary varies quite a bit depending on the university. Private universities probably pay a lot more. I went to a public university. We went on strike for better pay. It would be nice if we didn't have to do that.
One benefit to being a grad student, though, is that as long as you maintain good academic standing, you should be fully funded for several years (around 5 or 6). This means you basically have a guaranteed source of income and you don't really have to worry about being fired. This is in contrast with many "at-will" employers, where you can be fired at any time and, oh well, you're out of work.
Also, if your advisor can fund you with a Research Assistantship, it basically means that you can get paid to do math and not have to teach. You can wake up whenever you want, sleep whenever you want, work some days and not others, etc. You basically have complete freedom, and you get paid!
In addition, funding could be tight during the summer months, with fewer students enrolling in courses. When I was in grad school, summer funding was typically given but not guaranteed. If you can get a summer internship in industry, you could be paid a lot more than you would by teaching summer courses. And you would pick up valuable industry skills. So consider setting aside some time to look into that.
---
In general, what counts as "enough" varies from person to person. Some people need more money to support their lifestyle and their personal needs than others. However, one thing some mathematicians do is the following. They apply for a position at a different university and get it. Then they tell the department chair, "I got a position at this university, and they will pay me X amount, which is more than I'm making here. You'd better increase my salary, or I'm out of here!" The department says, "OK, we'll pay you more. Please don't leave!" And they say, "Thank you. I'll stay."
Upvotes: 2 <issue_comment>username_2: `united states`
The average living wage in the US is [$16.54 per hour, or $68,808 per year, in 2019, before taxes for a family of four (two working adults, two children)](https://livingwage.mit.edu/articles/61-new-living-wage-data-for-now-available-on-the-tool).
The [average salary](https://www.salaryexpert.com/salary/job/mathematician/united-states) for a mathematician is $121,259 per year. This ranges from $84,254 for an entry-level mathematician (1-3 years of working experience) to $150,869 for those with 8+ years of experience.
Since $84,254 > $68,808, I conclude that mathematicians are paid more than enough.
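The comparison here is just one subtraction, but for anyone who wants to play with the thresholds, a throwaway sketch might look like the following (the numbers are the 2019 living-wage and salary-survey figures quoted above, which will of course drift over time):

```python
# Toy comparison of the quoted salary figures against a living-wage threshold.
# All figures are the ones cited in this answer (2019 data); update as needed.
LIVING_WAGE = 68_808  # US living wage, family of four with two working adults

salaries = {
    "entry-level mathematician (1-3 yrs)": 84_254,
    "average mathematician": 121_259,
    "experienced mathematician (8+ yrs)": 150_869,
}

for role, salary in salaries.items():
    surplus = salary - LIVING_WAGE
    sign = "+" if surplus >= 0 else "-"
    print(f"{role}: ${salary:,} ({sign}${abs(surplus):,} vs living wage)")
```

Every quoted figure clears the living-wage bar, which is the whole argument of this answer in one loop.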
Upvotes: 0 <issue_comment>username_3: Think of the person collecting your garbage, the janitor who keeps the building clean, the ... There are many people who work very very hard and get little pay. In a market economy you don't get paid based on how hard you work, or how valuable your work is. It is based on supply and demand. So if you want to change that, you have to move away from a market economy. You don't have to go full communist, there are hybrid systems that work fairly well (though very far from perfect). Further discussing this is far outside the scope of this forum.
Upvotes: 3 <issue_comment>username_4: You said in a comment that your question is specific to academia. For the United States, a good source on how much mathematics faculty in academia are paid is the American Mathematical Society's [Faculty Salaries Report](http://www.ams.org/profession/data/annual-survey/salaries), based on a survey sent annually to mathematics departments at many universities. Their [latest report](http://www.ams.org/profession/data/annual-survey/2018Survey-FacultySalaries-Report.pdf) is from 2018-19 and contains salary statistics for different academic ranks at different groups of universities (split into different categories such as large public university, medium public university, small private university etc).
The report has a lot of data so I won't try to summarize it, but to address your question about whether academic mathematicians are paid enough, we can see from the report that the median annual salary reported for a full professor in the "large private university" group (page 5 of the report) was $155,000, the mean salary was $182,160, and the third quartile salary was $195,000. I think most reasonable people would agree that such numbers qualify as "enough" to be paid for doing something that you love. Indeed, for a lot of people this would be more than enough.
At the lower end of the spectrum, if we look at salary distribution for new assistant professors at institutions that grant only bachelor's degrees (page 11 of the report), we see quite a different range of numbers. Here the median salary was $57,500, and the first quartile salary was $47,500. Whether that's "enough" would depend on one's outlook, personal situation, and future plans and prospects.
Bottom line: the most successful people in academic math are doing quite well. Some of the least successful might be struggling. Mathematicians are a diverse group, and the question of what counts as "enough" is a complicated philosophical question anyway, and a rather subjective one at that.
Upvotes: 4 [selected_answer]<issue_comment>username_5: On its own more money is always better on an individual level and I certainly did not complain about the last raise I got. But as a counterargument consider the global optimization problem for academia.
What do we (as mathematicians) want as a goal for the whole system? I think the answer should be the most (good) mathematics, both in teaching and in research. Now the thing is that we have to achieve that on a fixed budget, coming from all kinds of funding sources.¹ So paying people better generally means hiring fewer people.
And this is the point where it gets complicated. While you will generally get better mathematicians if you pay more, the effects are far from linear. If you pay double, you will never get people who work twice as fast, which on its own would pull salaries down to zero. On the other hand, the pool of available competent mathematicians depends on the salary offered as well. As an extreme example, if you offer less than the cost of living for your area, you are limited to enthusiasts with independent income (if they don't go somewhere else). If you combine these two factors, I'd argue that you will end up roughly where we are right now.²
What adds to this is that money is not everything, especially if you get it for doing something that you would probably do in your free time anyway. My current post-doc salary is not huge, but it pays for a reasonably sized apartment, puts food on my table and even after occasionally wasting too much of it on things I don't actually need, there is generally a reasonable percentage left over. If you pay me more, my spending would not change. I could probably retire earlier with all the money I would save up, but the thing is, I do not want to retire, since I like doing what I currently do.
Of course with this, I am not speaking for everybody. But in general academia cannot compete with industry using money alone. There are certainly some investment bankers that could have been brilliant mathematicians, but who currently earn more than entire university departments.
¹I know that the budget is not fixed in a strict sense, but getting e.g. a big increase in budget for opening a new department does not change the problem, as you are expected to produce proportionally more work. I also assume that salaries are generally corrected for inflation.
²In my opinion PhD-students are generally somewhat underpaid and professors salaries might be a bit above the optimum, but the same discrepancy is true in any profession. And as mentioned by others, the numbers are highly location dependent.
Upvotes: 2
|
2021/08/04
| 420
| 1,903
|
<issue_start>username_0: At what stage should acknowledgements be inserted into the paper if review is double-blind?
Is it only possible at the very end, when preparing the proof version?
Or is it possible once the paper is accepted with revisions?
By acknowledgments is meant acknowledgments to colleagues who helped with comments on previous versions of the manuscript, but also to the reviewers for their remarks.<issue_comment>username_1: When submitting an article to a double-blind review process, it is advisable to leave all potential identifiers out of the paper until the review process is finished. This includes acknowledgements as well; see also [this instruction](https://www.journals.elsevier.com/social-science-and-medicine/policies/double-blind-peer-review-guidelines) from Elsevier.
You can indicate that there are some omitted acknowledgements by including a statement in the acknowledgement section like "The acknowledgements have been omitted in this version to maintain the integrity of the double-blind peer review process", if you want the reviewers to know.
Upvotes: 3 [selected_answer]<issue_comment>username_2: There should be clear instructions *when* in the process your paper is to be unblinded. This includes adding author information and acknowledgements, as well as doing other smaller text changes you may want to do as part of the unblinding process (e.g., sometimes you may want to add more explicit pointers to your own previous work, which you may have removed in the submitted version in the spirit of double-blind).
If you are submitting to a journal, there are probably instructions about this on the "Author Information" (or similar) page. If you are submitting to a conference there may be instructions on the web page, or at least you will likely receive these instructions along with information how to submit your camera-ready version.
Upvotes: 1
|
2021/08/04
| 1,149
| 4,539
|
<issue_start>username_0: I submitted my PhD thesis a couple of months ago and am in the weird place between submission and waiting for the viva. The last 4 years have been *hard*, and I can no longer face the idea of staying in academia or research, and I'm even fatigued of my subject area (psychology). I've taken up a temp placement in a completely different area since I have no idea what I want anymore but need money and industry experience. It feels very weird to be doing something unrelated to psychology; I'm having to learn a new subject area from scratch. I'm still feeling the burnout of the past 4 years and am still coming to terms with the fact that I've completely lost all motivation for the goals I pursued the PhD for in the first place, and feeling like there was little point in giving all of that energy towards something I no longer want. I'm tired, confused, sad, struggling to come to terms with figuring out a new path, and am dreading preparing for the viva etc. as I am so done with it all (but of course I've come too far to give up now). I'm also struggling with knowing my worth within an industry setting, because all of the high pay grades they say you can expect post-PhD only make sense to me if I were going into a post that I'm specifically qualified for (e.g., postdoc research). I couldn't feel confident applying for a high-paying role which isn't what the PhD trained me for. I've spent 7-8 years studying psychology and learning how to be a researcher, and I'm now put off going for any further long-term training, because what if I end up deciding it's not for me again?
How do people cope with that feeling of 'what now?' after deciding to leave everything you've been working towards? I've heard/read that most people who leave academia/research end up much happier in the long run, but how do you cope with the here-and-now burnout, lack of motivation and the sense of losing one's identity (at least in terms of what you felt you were 'meant' to do), as well as a sense of feeling like there is little reward at the end of the struggle? I guess I'm just feeling a bit lost and wondering what other people's experiences are. Thanks.<issue_comment>username_1: While the question is specific to your pursuit of a PhD, if you think about it, it is no different from the choices all people make. Someone took up medicine and it did not work out after eight years. Another built himself up as a retail professional, only to see COVID and online retail kill any passion that could exist in a quota-focused role.
In essence it's all about learning (maybe) and moving on. The world has never been a place of greater opportunity, and making a decent living is not terribly hard.
As I've heard said: most of us are standing with one leg in the past (regret) and one leg in the future (anxiety), pissing on the present.
Let the past go and embrace the choices you make now. And yes, these too can turn out 'wrong' - then make other choices.
Upvotes: 0 <issue_comment>username_2: >
> How do people cope with that feeling of 'what now?' after deciding to leave everything you've been working towards?
>
>
>
Take some time off after graduation. Finishing a PhD is a lot like completing a marathon. You're drained, and need to rest - and that's ok. Things seem really bad right now, but you've still got really great options even if you don't continue researching.
>
> but how to cope with the here and now burn out, lack of motivation and the sense of losing one's identity (at least in terms of what you felt you thought you were 'meant' to do)
>
>
>
You don't have to plan out the rest of your life now. In fact, you probably shouldn't. Use this time to explore hobbies you may have ignored during school. Many people who pursue PhDs are very goal oriented - I'm guessing this applies to you. You've got plenty of time to figure out your next goal.
>
> I'm tired, confused, sad, struggling to come to terms with figuring
> out a new path and am dreading preparing for the viva etc as I am so
> done with it all (but of course I've come too far to give up now)
>
>
>
You're right - stay the course. Do whatever it takes to cross the finish line, which is VERY close. I've known other PhD students with your sentiment - they were done, but too close to the end not to see it through. They are all happy, successful people now.
Finally, most universities offer some level of therapy for students. If nothing else, a therapist will be a neutral 3rd party, which I think would be beneficial.
Upvotes: 2
|
2021/08/04
| 1,407
| 5,852
|
<issue_start>username_0: On Wikipedia, people sometimes write articles on their research topic or add their results to existing articles (probably in order to increase their visibility). I make this assumption if the Wikipedia user name matches the name of the researcher in some way.
**Is this in general considered bad practice?** (Note that there is a related question:
[Wikipedia article about PhD thesis](https://academia.stackexchange.com/questions/88030/wikipedia-article-about-phd-thesis), which is however different, since it concerns publication of results on wikipedia prior to their peer-reviewed publication)
There are some sections in the Wikipedia guidelines, that are important here:
* [What is conflict of interest?](https://en.wikipedia.org/wiki/Wikipedia:Conflict_of_interest#What_is_conflict_of_interest): *"Subject-matter experts (SMEs) are welcome on Wikipedia within their areas of expertise, subject to the guidance below on financial conflict of interest and on citing your work. SMEs are expected to make sure that their external roles and relationships in their field of expertise do not interfere with their primary role on Wikipedia."*
* [Citing yourself](https://en.wikipedia.org/wiki/Wikipedia:Conflict_of_interest#Citing_yourself): *"Using material you have written or published is allowed within reason, but only if it is relevant, conforms to the content policies,(...) and is not excessive. (...) When in doubt, defer to the community's opinion: propose the edit on the article's talk page and allow others to review it. However, adding numerous references to work published by yourself and none by other researchers is considered to be a form of spamming."*
Proposing an edit on an article's talk page, however does not work, if you want to write a new article.
In the end, I came up with the following pro and con list:
**Pro**
1. You are an expert in your field, and one of the goals of Wikipedia is to share your expertise
2. You are thus leaving the ivory tower of science and try to explain your research to the people in a less scientific way
3. People, who are interested in your method, will maybe google it, find Wikipedia and get a brief introduction in addition to the references to main articles
**Contra**
1. While there is no obvious financial benefit, there is still some conflict of interest, since the gained visibility can help in obtaining research grants, etc.
2. You are quite biased and less likely to add criticism to your own work or unfavorable comparisons to other methods.<issue_comment>username_1: Anyone, even someone with a [conflict of interest](https://en.wikipedia.org/wiki/Wikipedia:Conflict_of_interest#Writing_about_yourself,_family,_friends), can write a [Draft article](https://en.wikipedia.org/wiki/Wikipedia:Articles_for_creation). It will be reviewed and you need to disclose your connection.
It will initially be a stub if it isn't rejected by the editors. But it won't likely become a full article until others add to it and comment.
Wikipedia isn't a place for monographs and the editorial process will guard against abuse.
You can also "propose" an article for your topic without actually providing content. Again, the editors will make a judgement about putting up a stub.
You can also [ask this question directly](https://en.wikipedia.org/wiki/Wikipedia:Help_desk) of the Wikipedia editorial staff through their help/contact pages.
---
Disclaimer: I have no relationship with Wikimedia, but have made a few edits to Wikipedia articles, one of them a bit substantial. I am not an expert.
Upvotes: 3 <issue_comment>username_2: One of the key concepts at Wikipedia (and also one of the most controversial) is "notability". Wiki guidelines aim to discourage people using the site as a vehicle for self-promotion, and with good reason, but that can sometimes be over-zealously enforced (as with Donna Strickland, whose Wiki [page](https://en.wikipedia.org/wiki/Donna_Strickland) was deleted as non-notable shortly before she won a Nobel...)
Creating an article about your own work is very likely to be interpreted as self-promo, much more so than making improvements to a pre-existing article. At a bare minimum, you will need to provide clear evidence that the topic is important (e.g. coverage by sources not linked to yourself) and be prepared to answer the question "if this is so important, why didn't somebody else already write about it?"
If possible, it might be better to find a broader framing, e.g. instead of creating an article specifically about your work, create or expand on an article about the field, including reference to the work of others.
Depending on the field, there may already be a group of editors who take an interest in that topic. If so, it may be productive to engage with them - "hi, I'm RealName, interested in adding some content about blah blah blah, does this seem reasonable?"
Upvotes: 5 [selected_answer]<issue_comment>username_3: It's not frowned upon. It might not be obvious, but you *are* free to write whatever you want on Wikipedia as long as it's not obviously bad, and Wikipedia is not likely to object. It's only when the editor who added the material attempts to overrule other editors on what should and should not be kept that Wikipedia starts to object.
If you just write the article and leave it to its fate you're not likely to encounter any issues.
The two cons you mention don't seem like a big deal:
1. If there is increased visibility, more power to the person who wrote the article - and others should do so as well.
2. You might be biased and less likely to add criticism to your own work or unfavorable comparisons to other methods, but if other people think the article is biased + care about it, they will add the relevant criticism or unfavorable comparisons (or they might tag with NPOV).
Upvotes: 0
|
2021/08/04
| 885
| 3,944
|
<issue_start>username_0: This question is restricted to the domain of Cryptography.
While reading research papers from the Cryptography domain, I came across papers that provide examples for the schemes or algorithms they proposed.
I feel that there is no use in providing simple examples except pedagogical.
And the examples do not provide any extra information or insights regarding the scheme or algorithm. Some authors use a separate section to provide an example of their proposed scheme or algorithm.
Is it recommended practice to provide a simple example for the scheme proposed in a research paper, especially in SCI-rated journals?
I am using the word **simple** because some readers may think that the authors are using large or special numbers in their examples. But that is not the case: the numbers are small, with easy properties, so the examples can be followed by even a beginner or intermediate reader in the domain.<issue_comment>username_1: As mentioned by lighthouse keeper in the comments, the culture of including examples will depend on your discipline. But within your discipline, you should ask yourself: *why am I including this example?*
When writing a paper, I try to keep this general idea in mind:
>
> Only include information that will be helpful to someone.
>
>
>
Examples can be great expositional tools in a paper. Did you just introduce a definition that seems technical, but the underlying idea is easy to understand? A well-executed example illustrating the underlying idea would be very helpful!
On the other hand, there are examples that will help no one. They may be too simple, too convoluted, or something else entirely. I have seen examples that are simultaneously too simple and too complicated: the example is overly basic for expert readers, and non-expert readers will not have enough exposure to the material to get anything out of the example.
It is something of an art to balance writing a paper that is enlightening to experts and also readable to new researchers. I certainly have not mastered this art, but this skill is what makes papers by certain authors (like <NAME> in math) an absolute delight to read.
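As a concrete illustration of the kind of "simple example" the question describes (tiny numbers with easy properties), here is a sketch of textbook RSA. This is purely pedagogical and not tied to any particular paper's scheme; real deployments use very different parameter sizes and padding:

```python
# Textbook RSA with deliberately tiny numbers, purely for exposition.
# Real deployments use ~2048-bit moduli and randomized padding (e.g. OAEP).
p, q = 3, 11
n = p * q                 # modulus: 33
phi = (p - 1) * (q - 1)   # Euler's totient: 20
e = 3                     # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent: 7, since 3 * 7 = 21 ≡ 1 (mod 20)

m = 4                     # a "message" in [0, n)
c = pow(m, e, n)          # encrypt: 4^3 mod 33 = 31
assert pow(c, d, n) == m  # decrypt recovers the message
```

An example at this scale lets a reader verify every step by hand, which is exactly the pedagogical value (and the limitation) being debated here.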
Upvotes: 4 <issue_comment>username_2: If you include an example inline in a paper, it can obscure the flow of the argument (by putting extraneous material within it), and it increases the length (which may add to intimidation, or may push you up against venue page limits).
Whereas a good example shows what your argument actually means in practice, which can help bring lofty ideas down to the real world and let readers check their understanding[1]. This can help make your work more accessible—not just to newbies in the field, but to experienced veterans who are too proud to admit incomprehension!
One way to get some benefits of both approaches is to pull the examples out into an appendix. This keeps the main body of the paper short, but gives people the opportunity to follow up on concepts that might have eluded them in the main text.
In the end, you're overthinking this; it's just a writing choice. It usually won't be particularly standard or nonstandard[2]. Unless your venue has a specific guideline, it's down to a choice of the author(s). I *personally* recommend short examples, which I find interweave nicely with the actual expository text in the first place. This is because I infinitely prefer a slightly longer, but clear, paper, versus a terse and confusing one. However, that would be an opinion only.
---
[1] Caveat: I have gotten burned multiple times in paper reviews for making the work *too* accessible. Once people understand something clearly, they tend to mistake it as being obvious.
[2] I guess in Programming Languages research, examples are highly recommended in certain classes of paper. I could see this being the case for Cryptography, too. However, the point remains as stated: this is a choice—at best a tendency, not a requirement.
Upvotes: 2
|
2021/08/04
| 1,797
| 6,910
|
<issue_start>username_0: What tried and tested possibilities are there to set up a (nice) poster for a conference without using LaTeX?
I find LaTeX always cumbersome for that purpose and I am curious about alternatives.<issue_comment>username_1: In the past, I've found MS Powerpoint to be a very acceptable way to make nice posters since it supports large paper formats and scaling images well. If you need equations, you can make those in a standalone LaTeX doc and cut and paste them from the PDF to the poster pretty reasonably. It's been a few decades since I've had to do this, so LaTeX may have an improved way of doing this now, but it's there to back you up for better looking equations.
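For the cut-and-paste workflow described above, a minimal standalone LaTeX file might look like the following (the `standalone` class, one common choice, crops the output page tightly to the content; the equation shown is just an arbitrary illustration):

```latex
% Compile with pdflatex; the resulting PDF is cropped tightly
% around the equation, ready to paste or import into the poster.
\documentclass[border=2pt]{standalone}
\usepackage{amsmath}
\begin{document}
$\displaystyle \hat{\beta} = (X^{\top}X)^{-1}X^{\top}y$
\end{document}
```

Because the output is a vector PDF, the pasted equation stays sharp at any poster scale.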
Upvotes: 6 <issue_comment>username_2: I often use [Adobe InDesign](https://www.adobe.com/products/indesign.html). InDesign isn't an introductory program, but I don't find it to be hard to use to do simple things like arrange text and color boxes.
Universities often have copies somewhere, for example, a set of workstations in a library. You could also pay for one month at a time (it's a subscription service) when you need it.
Upvotes: 4 <issue_comment>username_3: Modern HTML + custom CSS works very well, assuming sufficient web-development experience or a good template. Done well, it has significant advantages over the usual "fixed layout" of LaTeX, Office, etc; specifically:
* It adjusts to your reader's device, if you share the poster after the conference
* It is accessible: it allows reader to use screen-reading technology, including for math, in a much more reliable way than PDF (or, gasp, image files)
* It is easy to edit (boxes reflow automatically)
And of course, it can be printed to PDF as needed. [Here is an example](https://people.csail.mit.edu/cpitcla/links/The%20Essence%20of%20BlueSpec%20%e2%80%94%20A%20Core%20Language%20for%20Rule-Based%20Hardware%20Design%20(PLDI%202020%20poster).pdf)
Here is a template that I built not so long ago: <https://github.com/cpitclaudel/academic-poster-template/> . There is also a [concrete example](https://cpitclaudel.github.io/academic-poster-template/koika/poster.html) and a [tutorial](https://cpitclaudel.github.io/academic-poster-template/tutorial/poster.html).
There are a few examples of use from other universities in the "forks" list on GitHub. For posterity, I include a screenshot:
[](https://i.stack.imgur.com/En9aO.png)
And one of the mobile view:
[](https://i.stack.imgur.com/1IIxx.png)
Upvotes: 5 <issue_comment>username_4: My recommended tool for this is [Inkscape](https://inkscape.org/). It uses vectorized shapes and is pretty intuitive to work with.
[](https://i.stack.imgur.com/zuGSo.jpg)
Upvotes: 6 <issue_comment>username_5: Well, some people use **Microsoft PowerPoint** (no idea how they manage) - **LibreOffice Impress** is the open-source alternative.
Given that a poster is more of a layout task, it might make more sense to look at **Microsoft Publisher** or **LibreOffice Draw**.
Then there are **Adobe Illustrator** as another option and maybe also **Adobe InDesign**.
Depending on the type of layout and graphics you want to employ, you may even consider photo manipulation programs such as **Adobe Photoshop**, **GIMP** or **Krita** (Krita is great, on Linux and Windows). However, for text-heavy content you will potentially have a lot of issues organizing it.
Although I wouldn't call LaTeX cumbersome: LaTeX requires upfront work, but once you have figured out your approach of doing things, it is very efficient.
I'd rather figure out how to make something work in LaTeX (with the option of asking questions here or elsewhere) than fiddle with incoherent drag-and-drop formatting options only to end up with a worse-looking document...
Though some details might be subject dependent: MS Office cannot properly typeset units (no half space) following SI recommendations, and the default typeface is too heavy, and hence ugly, in print (it works well on screens, though).
Upvotes: 3 <issue_comment>username_6: On the Mac, I use Keynote for posters. It supports vector graphics, so images and text scale well. It has good, simple tools for alignment, grouping, and distributing objects (but text flow/wrapping is not good). If I am presenting a 2 m x 1 m poster, I will set the document size to 2000 x 1000 pixels so that I know that 1 mm == 1 px.
I have found that Keynote hits the right level of simplicity for throwing together EPS/PDF graphics and some extra text and annotations.
In my experience, more powerful tools like Inkscape, Illustrator, or InDesign have a steeper learning curve and are not really necessary for what I want to do.
Upvotes: 3 <issue_comment>username_7: In addition to all the great tools that have been suggested (PowerPoint, Inkscape), I suggest trying Microsoft Visio. It was designed for schematics and plans, but it actually works amazingly well for posters because it is designed to allow clear and easy alignment. It might be an overkill solution, but most of my students have managed to make great posters with it.
Upvotes: 3 <issue_comment>username_8: If it's not too complex, try the online tool [Canva](https://www.canva.com).
The free tier has its share of templates, font styles, and graphics.
You can add in your own images.
For a quick mock-up, this is handy enough; see the attached example.
For a more elaborate poster, there's a subscription for Canva Pro.
[](https://i.stack.imgur.com/vwiue.png)
Upvotes: 2 <issue_comment>username_9: Similar to the person here who replied Inkscape, I very much enjoy creating posters and even slideshow presentations using the GNU Image Manipulation Program (GIMP).
There are definitely pros and cons compared to LaTeX or LibreOffice Impress et al., but in general, if you have knowledge of even the most basic tools in GIMP, it gives you great flexibility and creative freedom, and you can do some really nice posters. My only caveat is that you'd need to think about how you want to layer the poster before you begin: it's flexible enough to change individual objects, but if you decide you want to go with an altogether different template halfway through, then it's not as forgiving (unless of course you have such a template already made from a previous GIMP poster and you're happy to rearrange the individual elements manually).
Also, GIMP is very scriptable, with a relatively simple python interface. But that's if you want to go the extra mile for automation :)
Here's an example of a poster presentation I did a few years back for a conference:

Upvotes: 2
|
2021/08/04
| 1,186
| 5,325
|
<issue_start>username_0: Why is it so common for pure mathematicians to start publishing "good" quality papers a bit late, perhaps towards the end of the PhD or the start of a post-doc, whereas in some fields people start publishing research papers as early as during their master's?
I know there are some undergrad research programs, e.g. REUs, but only in some cases does the work get published, and rarely in good-quality journals. Most of the time it is maybe just arXived, or in some cases not even that.
I know some people, from different fields, who had 2 or 3 publications during their master's itself, while others are in their 2nd/3rd year of a PhD and already have around 10 publications.
Maybe such a thing is also possible in mathematics, but there must be some good reason why mathematicians or graduate students, along with their supervisors, agree to publish good results in good journals a bit late.<issue_comment>username_1: One fundamental distinction between mathematics and more-obviously experimental scientific and engineering disciplines is that, in math, there is no analogue of "reporting on several months of experiments". There's no analogue of "keeping experiments running, and whatever turns out, is a paper".
That is, while there is in fact a large experimental aspect to mathematics, it is not the same sense of "experiment" as in other sciences, and the conventions are such that this does not generate papers.
Yes, the ambient pressures do push PhD students in math to try to arrange at least one or two publications prior to PhD completion. Not entirely a bad thing, but definitely not a convivial atmosphere.
Edit: and it may be worthwhile to observe that most of prior mathematics does not become obsolete or wrong due to new discoveries, unlike what sometimes happens, and is always possible, apparently, in more experimental sciences. Thus, mathematicians cannot too much disregard prior work... of which there is a great deal. Many excellent (and not-so-excellent) results known prior to 1900 are rediscovered on a regular basis, and do indeed show evidence of insight, but are not "publishable".
This "problem" is the reason I tell myself, and my research students, to not even think in terms of "verifiable novelty", because it is just a mess, for mathematics. Better to follow a good, natural line of inquiry, and leave the appraisal of "novelty" till later. (It is admittedly hard to ignore academic-administrative pressures... and, yes, this is corrupting academic mathematics, among other disciplines...)
Upvotes: 7 [selected_answer]<issue_comment>username_2: A key difference between pure math and many other areas is that there are typically only 2, maybe 3 authors. The papers you cite with master's students are often from larger collaborations wherein the master's students do some of the menial tasks: data collection, data analysis, but not necessarily theory development, which requires more years of training. In pure math, these sorts of papers are quite rare: you *have* to understand the theory if you're writing with just one or two co-authors, because otherwise there is nothing for you to contribute. As a consequence, there is just no opportunity for students in pure math to be co-authors earlier in their careers.
Upvotes: 5 <issue_comment>username_3: The other answers do a great job explaining why math is quite different from experimental sciences. But I don't think this is the full answer, as computer science, statistics, and theoretical physics are "mathematical sciences" and it's less clear why they should be any different from math.
In fields where there's substantial overlap (e.g. there are people working on tensor categories in both math and condensed matter physics) there are also different publishing conventions in terms of what level of originality is expected of publications. For example, working out an example that any expert could do and writing it up is much more acceptable in physics than it is in math. There are advantages and disadvantages to both approaches (in particular, math ends up with far too much "folklore" that is hard to learn without social connections to experts), and I think a lot of this difference is cultural.
For example, the culture in math departments is that the main goal of a PhD is to have a substantial result, while in CS the expectation is that you will have multiple less substantial papers. There are similar cultural differences within mathematics, e.g. combinatorics is more amenable to short papers, while homotopy theory or Langlands number theory is more amenable to people who publish more rarely in larger papers. I think these cultural differences (like many inter-departmental cultural differences) are going away as there is increased pressure for mathematicians to publish more and publish earlier, even if that means not doing as substantial work.
Finally, I should point out that most of the humanities is even more extreme than math in terms of focusing on a large substantial work, and the expectation is that your PhD should be a publishable book which you will still be editing several years after graduation. Again, I'm not saying this means math is "better" than other mathematical sciences, or that history is "better" than math, there's advantages and disadvantages to both approaches.
Upvotes: 4
|
2021/08/04
| 1,010
| 4,426
|
<issue_start>username_0: Over 6 months ago, I published a paper in a journal that is on MathSciNet's journal list, and has been for a long time. It's not the Annals, but it's a solid and reputable journal. My paper is in applied mathematics (applied to a different topic in the real world), but clearly of a mathematical nature. The primary interest is mathematics, more so than the results we obtain about the topic at hand.
So far, this paper has not been indexed on MathSciNet. Papers I have published after it have been indexed. Other papers from the exact volume of the journal have been indexed. In fact, other papers from the same volume of the same journal and the same journal subtopic as my paper have been indexed.
I've emailed them twice about this at <EMAIL>. Twice I have received the same response, "Thank you for your message. The paper you mentioned has been forwarded to the editors of Mathematical Reviews / MathSciNet for consideration." I received this response 1 month ago and 4 months ago.
I question why and how Mathematical Reviews can be making editorial decisions on what "counts" for indexing or reviewing. Surely they do not have the time and resources to do a proper peer review of every paper. (Side note: every paper of mine that has been "reviewed" has just featured an exact quote of the paper abstract as its "review.") The paper has been reviewed and published in one of their listed mathematics journals, the AMS does have categories for applied mathematics that it includes, and the paper is clearly mathematical in nature (unlike say a philosophical paper appearing in a mathematical journal). Part of me feels like taking this as a slight, though I'll seek an alternative explanation.
Has anyone else experienced this and what should I do? In my environment, it's rather important to have papers listed on MathSciNet, rather than arXiv, Google Scholar, etc.<issue_comment>username_1: There is nothing you *can* do. MathSciNet is a volunteer organization, and their volunteers get to choose how they want to spend their time. If they are not interested in reviewing a paper -- because none of the volunteers is interested in the area, or because none of the volunteers believes that the paper is of broader interest -- then that's that. You can't *force* a volunteer organization to do anything.
Upvotes: 0 <issue_comment>username_2: Historically, publishers didn't make abstracts of papers available to indexing services, and thus it was necessary for Mathematical Reviews (the print-only predecessor to MathSciNet) to use volunteers to prepare "reviews" (basically summaries) of papers for inclusion in Mathematical Reviews. The volunteer reviewers and editors would also classify the papers according to the "Mathematics Subject Classification".
Over the years, many publishers have agreed to allow MathSciNet to include the publisher's abstracts and reference lists in MathSciNet. The editors or volunteer reviewers sometimes use the publisher's abstract rather than preparing an independent review, but not always. There's no guarantee that any particular paper will be chosen to appear in MathSciNet or that the editors will publish a review rather than simply using the publisher's abstract.
The editorial process is described at
<http://www.ams.org/publications/math-reviews/mr-edit>
Upvotes: 2 <issue_comment>username_3: I had a paper published by a physics journal for which only some papers are indexed and reviewed by MathSciNet. My paper was not indexed or reviewed. Other physics papers I wrote have been indexed by MathSciNet.
Notice they claim to "cover articles and books in other disciplines that contain new mathematical results or give novel and interesting applications of known mathematics" ([from MathSciNet](http://www.ams.org/publications/math-reviews/mr-edit)), and they do not say they will cover all such articles. In any case, "novel and interesting" is a judgment call.
My rule of thumb is that I only trust MathSciNet for pure mathematics from pure mathematicians. I love finding papers there, with nice reviews and good hyperlinks, but when I analyse the work of an applied mathematician I use other databases. But those databases are so big that I cannot tell whether others' papers are getting picked up by accident.
This is why many of us carefully curate our CVs, our ORCID pages, and our websites. All the databases are full of errors and omissions.
Upvotes: 2
|
2021/08/04
| 1,181
| 4,842
|
<issue_start>username_0: I'll just rephrase what I said in the comments, as I believe it summarizes the question better, and to whoever downvoted (I assume because of how "complicated" what I said was), I hope this suffices:
There exist proof methods such that, whatever a claim may be, an undeniable truth arises from a well-executed proof. When these methods are carried out, you may arrive at a result that satisfies the claim; but the problem in question may have another given in the claim that is unknown or unapparent. How would one know there isn't some unknown or unapparent fact also applied to the claim? How would this affect a paper being published, and if this happens, what would it say about how the community treats papers generally?<issue_comment>username_1: I think your question is:
>
> How do academics become completely sure that a paper is correct?
>
>
>
And the answer is that normally they do not.
Upvotes: 3 <issue_comment>username_2: I'll restrict this answer to the field(s) of pure mathematics and similar things. For other fields the standard will be very different, say philosophy or psychology.
First, math has a clear standard of proof and truth. To be true, in the modern conception held by most mathematicians, something has to be derivable from a set of axioms using well determined logical rules. But the chain of proof can be very long, depending on other things "held" to be true and previously proven. Some of the chains began in ancient times. Many others about a hundred or so years ago when axiomatic mathematics came to the fore.
Second, people make mistakes. The chains of proof can be long. They can also be very twisty and when there are many levels of abstraction in a proof it isn't terribly difficult to make an error. Sometimes a mistake is made because someone has an "insight" into a problem that is very subtly wrong, but seems right. This might cause them to gloss over difficulties. Others with the same background might make the same error. It might be effectively impossible to check the complete chain of proof, due to its length.
Moreover, prior to the axiomatization of math, the standard was quite different. Sometimes verification depended to some extend on applications of theory. Some of that was re-established axiomatically, but there is a huge body of "knowledge".
Third, the work of mathematicians, if sufficiently important, is checked by other mathematicians, who are, themselves, skilled, but not perfect. Not all errors are caught in the publication review process, even though papers are normally reviewed by a few independent mathematicians with skill in the particular field. Not all errors slip through, but a few do. And generally, the "checkers" are happy to show their work, explaining why they accept (or not) the proofs in a given paper.
Fourth, we normally trust the experts, but also generally maintain a wee bit of skepticism generally, knowing the history of long lived errors.
So, for a direct answer to the question, if a paper has passed review it is likely, but not necessarily, accepted by the mob. But "completely accept" is possible for simple results that have stood the test of time. Complex results and complex arguments, not so much. I once found an error by an important mathematician that had been in place for more than 50 years. It was buried deep in a complex proof but was, in its nature, somewhat elementary.
The wee bit of skepticism often leads professors to have their doctoral students examine the reasoning in old papers (just as I was) to verify them. The verification doesn't add much to the mathematics unless an error is found, but it can add immensely to the insight and understanding of the student.
So, the [answer of username_1](https://academia.stackexchange.com/a/173010/75368) is not incorrect at all, though it lacks detail.
"Trust but verify" is the standard.
What you do about it is verify in any case that is vitally important to you. But generally trust what other mathematicians, perhaps with more experience, have concluded.
---
In math there are four possible outcomes for a statement. True, if it can be proven (derived from axioms). False, if a counterexample can be found. Unknown, if we have neither a proof nor a contradiction. Unknowable, since axiom systems [can't be both complete and consistent](https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems).
---
Caveat. I have a certain [philosophy of mathematics](https://en.wikipedia.org/wiki/Philosophy_of_mathematics) that a few others don't hold. My philosophy leads me to think in a certain way and do mathematics in a certain way. It is pretty widely held, but not universal. Others with a different outlook might just pop in with a rather different answer.
Upvotes: 2 [selected_answer]
|
2021/08/05
| 1,631
| 6,424
|
<issue_start>username_0: During the pandemic, at least the premier institutes of my country are operating in *work from home* mode. Since this lessens the physical activity of going around labs and classrooms daily, I tried to set up a fixed timetable and working environment for myself for learning and research, and I was successful in doing so.
But over time, I observed the following phenomenon
>
> I utilize the day (fruitfully) to its full extent if and only if there is
> no distraction. Distractions include very small tasks such as
> meeting colleagues outside home and spending a few minutes with them, phone conversations with my supervisor, etc.
>
>
>
I am surprised to find that my whole day becomes under-utilized if a distraction happens during the early part of my day, and I am not in a position to know the exact reason behind it. Ideally, I need to break from my timetable for only a few minutes and should be able to utilize the remaining time. To my surprise, that is not happening: either I spend my time randomly or I am unable to concentrate properly on my work. This phenomenon generally did not happen, or happened only to a very minor extent, during non-pandemic days.
Is this phenomenon normal or do I need to completely prevent/avoid distractions?<issue_comment>username_1: Distractions are a tax on productivity, because you lose much more time than the duration of the distraction. It takes a lot of time and energy to switch between tasks and to regain focus. [Apparently](http://blog.idonethis.com/distractions-at-work/), you need on average 25 minutes to regain deep focus after becoming distracted. If you work four hours in the mornings and are distracted four times for five minutes each, half of your working time is lost before lunch break ((5 minutes + 25 minutes) \* 4 = 2 hours). This also applies to self-interruptions, therefore multitasking makes you less efficient.
Upvotes: 4 <issue_comment>username_2: The issue you're experiencing is the problem of **context switching**. Multitasking can result in a 40% productivity cost, make tasks take 50% longer, and increase errors. After an interruption it can take ~25 minutes to regain focus. (It's hard to find *good* citations for these numbers, unfortunately.)
The solution to your problem is to guard your time.
1. Determine what priority you should assign to what you need to do. Consider using the [Ivy Lee method](https://medium.com/the-ascent/the-ivy-lee-method-a-profoundly-simple-yet-effective-way-to-get-more-out-of-your-to-do-list-71dea851e3c).
2. Turn off buzzers, ringers, alert sounds, and dialog messages. These interrupt periods of focus. Leaving alerts on implicitly places others' prioritization of your time above your own. If you don't manage your devices they will manage you.
3. Consider using something like the [Pomodoro technique](https://en.wikipedia.org/wiki/Pomodoro_Technique) or [similar strategies](https://qz.com/work/1561830/why-the-eight-hour-workday-doesnt-work/) to interleave focused time with breaks and outside communication. While scheduling a whole day might be hard, committing to doing something without interruption for 30 minutes is something you must be able to do to get anything done.
4. Try to understand where your tasks fall in the time management quadrants and value/prioritize them accordingly:
[](https://i.stack.imgur.com/06JyB.jpg)
[](https://i.stack.imgur.com/FykAf.jpg)
[](https://i.stack.imgur.com/1vIgM.gif)
Upvotes: 4 <issue_comment>username_3: While the pomodoro technique and the urgent/important priorisation has already been mentioned, here's my personal suggestion, which includes both of them in some form or fashion:
* If you don't already have it, use an acceptable calendar app, shared between your PC and smartphone, one that integrates well with your mail system and whatever your colleagues are using.
* Meetings will be in the calendar anyway, probably.
* Now start putting *everything* in there. Especially if you intend to work alone on a topic, be sure to block that out. Don't forget break times (lunch etc.) - enter them as well.
* At the beginning of each day, check your calendar for today (this will also be your first calendar entry each day - "Clean up today's calendar"). Look for all conflicts and resolve them by moving things around, maybe cancelling invitations or not-so-important events. Here, the urgent/important distinction comes into play, of course. Your calendar for today should now be completely filled. Not only with work - there will and should be breaks in there as well, but they should be there explicitly.
* Configure your system so that mails do not pop up notifications, but your calendar does.
* Now comes the hard part: when your calendar tells you to switch to another topic, do it religiously. If you have not finished your current work, immediately find a new time slot (tomorrow or maybe today if there is another event which is not so highly prioritized) and plan it in there.
* When your 8 hours per day are up, or however much it is, *stop working*. Don't fall into the trap of doing everything that fell on the wayside during the day at the end.
Over time, this trains you to be methodical and realistic. If you notice that your time slots are regularly too short, then you might have to fine-tune the size of those slots (for me, that's usually 30-60 minutes; 120 minutes very seldom), or reduce the content you put in them. You want this to be a game: being done with what you intended to do in a slot should give you a little spike of joy.
You can and probably should, at least at the beginning, have buffer time slots; i.e. plan in an hour a day (or maybe one in the late morning, one in late afternoon) as a pure buffer. As you get used to it, this might not be so necessary anymore. If you run out of stuff to do in a buffer, do whatever - either take a walk, or pick anything else you wanted to do later, and pull it forward.
Reading your mail is an activity like any other - put in a time slot for it, and if possible avoid checking your mail all the time, or at least ignore mails coming in during the day as best as you can (you might still look there to get updates on the things you are working on right now).
Upvotes: 2
|
2021/08/05
| 459
| 1,969
|
<issue_start>username_0: I'm inquiring with professors about brief (~1 week) rotations in my post-bacc research programme. Our coordinator says we don't need to include a CV in our intro emails, but another advisor says I should. When should or shouldn't I include a CV in an introductory email?
Professors I'm inquiring with haven't previously seen my application information, and if they accept I'll potentially be working with them for up to 2 years.<issue_comment>username_1: Assuming that the professors are expecting these emails, it should be fine to include a CV, but a short one might be better than a long one. Focus it on the job at hand and things that might apply to that.
But a CV in a blind first email is probably wasted effort. I never looked at them. A blind contact should be just an introduction, expressing interest, with an offer to send more information on request. That might get read, while a long mail will get trashed too easily.
And, in either case, offer to send more information on request, provided you have it.
Upvotes: 3 [selected_answer]<issue_comment>username_2: I generally recommend including a CV in any introductory email in which you're seeking to join a lab or work under someone, but it should not be necessary to look at the CV to understand who you are and what you're looking for.
You can find other answers providing more detail on this, but a good introductory email briefly (in one paragraph) notes your relevant background, any preexisting relationship (in this case, that you're in the rotational programme), and what about the lab interests you (ideally noting the connection between your background and these interests).
Including a short CV or resume provides additional detail if they need it while avoiding a back-and-forth: remember these are busy people with overfull email inboxes, so reducing their cognitive load is in your favour. Also, depending on your CV this can be a kind of humblebrag.
Upvotes: 0
|
2021/08/06
| 978
| 4,183
|
<issue_start>username_0: For my undergraduate project, we created an application that provided a bunch of features for small companies to analyze their target customers and public opinions. It was basically opinion mining of social media networks. Halfway through the project, our advisor asked us to compile the work into a paper and publish it in Springer/IEEE conferences or journals. However, we are not required by the university to do this. We never intended to involve ourselves in academia, publishing papers and whatnot. We just used a basic LSTM network and existing datasets for the sentiment analysis part of the project. We did do a simple comparison of models like BiLSTM and vanilla LSTM, though. Our project is basically a product.
Despite telling them that the project we have done is quite trivial and that no top journal will accept it (which is what I think, anyway), they are insisting (or rather, hell-bent) on writing a paper. The team is also going separate ways to work at companies starting next week, so we won't have time to work on this anyway. Where should I go from here?<issue_comment>username_1: You are allowed to just say no!
Assuming that you don't want to do that, here are two potential scenarios of what could be going on:
**Scenario A:** Your project actually has made a worthwhile (though certainly not groundbreaking) contribution to the knowledge in your field, and the world would be better off if this knowledge would be shared. You and your teammates may not realize this, due to lack of experience or self-doubts.
**Scenario B:** Your advisor just cynically hopes to get a quick publication out of this; or might overestimate the relevance of your work.
The following response is compatible with either scenario:
*Dear Prof X,*
*It's great to hear that you think so much of our work and would like to see it published! As I/we are not familiar with academic publishing and will take up full-time employment soon, I/we will not be able to contribute to the writing, though. You have access to all our documentation through our dissertation(s)/coursework submission/here is a link to a shared Dropbox folder. If you have the time and inclination to produce a joint paper from this, you can reach me here, and I shall do my best to read the draft quickly and get back to you.*
If you are in Scenario A, and your supervisor finds the time, you may end up spending an hour or two to read the draft and sign off on it. While having published a paper might not do much for you, these two hours would certainly still be well-spent career wise.
If you are in Scenario B, that might just be the end of it. If it isn't, you have tried to be polite, and it is time to reiterate "As pointed out before, I/we will not be able to contribute to the writing" ad nauseam (or ignore your supervisor's emails).
Upvotes: 2 <issue_comment>username_2: I used to be in a similar position. I chose to listen to whatever the supervisor had said. But then I ended up **wasting** a lot of time and didn't get published; I wasn't even willing to post the paper on arXiv because I was **not even convinced** it had any value, since good research needs intuition and original thinking. If you don't feel like you have either, then talk to your supervisor with **logical arguments** explaining clearly why you think this way. Then the cases may be:
* Your supervisor is a reasonable person and gets persuaded. ✅
* You get persuaded by your supervisor because he naturally has a broader vision and better idea that you don't have, and he is willing to share them with you. ✅
* You cannot talk your supervisor around. In my humble opinion, it is possible that
1. He doesn't even have a clear idea of the future path of your project. But he may just want to get you running thereby increasing even a tiny chance for him to get another paper at the expense of your time and mental health, which he doesn't care about. ❌
2. He simply doesn't have enough research experience. So he is not trustworthy on this matter. ❌
But I believe it really depends on the specific situation, the above is only based on my experience. You need to make the decision.
Upvotes: 0
|
2021/08/06
| 577
| 2,081
|
<issue_start>username_0: I am in the process of conforming a paper to the APA style guide. However, I recently came across submission guidelines for an academic journal, and the following distinction stood out to me:
>
> If you want to refer to the paper use 'Appiah 1986' (without
> brackets): 'Appiah (1986)' refers to the philosopher - it means
> 'Appiah (in his 1986 paper)'.
>
>
>
I have two questions about this:
1. Does this convention conform to any widely recognized style guides, or is it likely specific to that journal?
2. Does the APA style guide draw a similar distinction between referring to a work (parentheses excluded) versus referring to an author with respect to a work (parentheses included)? So, for example, which of the two conforms to APA? (i) "In one form or another, the argument is to be found in Fine 2002" versus (ii) "In one form or another, the argument is to be found in Fine (2002)"
I haven't been able to find a clear answer to these questions.
Thanks in advance!<issue_comment>username_1: 1. Both are fine; they fit into the sentence slightly differently, so they are often used together to make the text smoother and less repetitive.
2. No, because the structure remains the same, so there is no reason to differentiate. I think the first version is not used in APA; the second version (year in parentheses) is what is commonly used.
Upvotes: 0 <issue_comment>username_2: This distinction makes sense. The point is that it should be possible to delete whatever is in parentheses and the sentence should still make sense. The parenthetical part is just extra information to allow someone to find the source.
So
>
> The first person to prove this was Smith (2005)
>
>
>
passes the test: the main sentence is "The first person to prove this was Smith," and if someone wants to check this they should look at the 2005 paper. However,
>
> The first paper to prove this was Smith (2005)
>
>
>
doesn't make sense without the "2005" ("Smith" is not a paper, but "Smith 2005" is). So you shouldn't put "2005" in parentheses.
Upvotes: 1
|
2021/08/07
| 1,761
| 7,469
|
<issue_start>username_0: **Background:**
I'm in my 4th year of a tenure-track position. In June, I received an outside offer for associate professor after applying at a university that a former colleague of mine had moved to. When I brought up this offer with my current chair, he told me that, if I stayed, he would make sure that I could be reviewed for early tenure and promotion at the end of the current academic year, and that my case would have no issues passing. He reiterated the same thing in an email exchange after our meeting. As the uncertainty of getting tenure was what triggered me to seek the outside offer in the first place, I was happy enough and declined the other offer. (I'm sure you can see where this is going...)
Well, now the year has ended and, after inquiring about the tenure review process, I found out it was decided that I'm not going to be reviewed for tenure this year. No more information was given to me, as the chair is still on leave for a couple of weeks.
I've discussed my situation with some senior colleagues at other institutions, and the sobering advice was that I shouldn't have relied on anything that a chair/dean assured me of, even if it's written in an email, as only a signed contract has any real significance. Thus, there is basically nothing I can do. Also, I was advised to avoid rocking the boat, as I wouldn't want to derail my tenure review next year...
**My questions:**
* Anything I should/could do about the current situation? E.g., should I bring this up with the dean or even the provost? (One friend suggested this but it's unclear to me what this would achieve.)
* Did I make a strategic mistake during this process that could have avoided this outcome? Is it generally considered to be a risky move to bring up an external offer with your current supervisor?
**Additional context:**
I believe I have reasonable chances of eventually getting another position elsewhere, and I'm not too worried about burning bridges. However, it is very likely that I would still need to spend (at least) the coming academic year at my current institution.<issue_comment>username_1: Your senior colleagues are correct, in a way. In academia as well as in industry, a "promise" (regardless of whether it's an oral promise or written down in some way) isn't a hard guarantee. People forget, people change their minds, people go on leave or step down, and sometimes a chair has simply promised something that they were then unable to push through administration. It sucks, but there is simply no 100% fool-proof way to ensure that a promise made now for a time in the future will actually still hold when that time arrives.
That does not mean that a promise is worth *nothing*. Most people will try to keep their promises, after all, and I have no doubt that most promises are actually kept. But it's not the same thing as a hard guarantee, and it never will be. Hence, if you are faced with a decision whether to take a different offer with tenure *now* or stay at your current place with a *promise* of tenure, you should take into account that the promise of future tenure always comes with a certain margin of risk. Whether that's worth it depends on the positions, the trustworthiness of the people making the promise, and how bad the "fail case" would be for you (e.g., I generally recommend postdocs to take a faculty position "now" over a promise of a slightly better faculty position later, because "no position" is a pretty severe risk; but if the fail case is "I got tenure a year later" the world will not end).
>
> How does a (junior) faculty member ensure prior promises are being kept?
>
>
>
You mostly can't. Getting things in writing certainly isn't a bad start, and I generally try to remind people in time of their prior promises, but in some cases (e.g., changing responsibilities, a dean who has overstepped their authority, etc.) really nothing is sure to work.
>
> Anything I should/could do about the current situation? E.g., should I bring this up with the dean or even the provost? (One friend suggested this but it's unclear to me what this would achieve.)
>
>
>
I would certainly bring it up with the dean. They most certainly screwed up, one way or another, and a decent dean will try to fix it (and if that's not possible at least compensate you in some other way). Going above the dean is probably not very useful - a provost isn't going to see a promise made via email as binding, and this may indeed be seen as "rocking the boat" by your senior faculty since it doesn't exactly paint them in the best light (maybe correctly so, but prior to tenure you are still very much dependent on their goodwill).
>
> Did I make a strategic mistake during this process that could have avoided this outcome? Is it generally considered to be a risky move to bring up an external offer with your current supervisor?
>
>
>
I don't think you necessarily made a mistake. As I wrote above, there is always "risk" to accepting a promise, but that doesn't mean it's always a mistake to take this risk. As poker players always like to point out: just because a play didn't win doesn't mean it was a mistake. I probably would have done the same as you did, unless the other position was also more attractive.
That said, I would bring it up with the dean, as I would be expecting some sort of compensation for the broken promise. And I would look at future promises made by the same people or organisation through the lens of them having a track record of not keeping their word in important matters.
Upvotes: 4 <issue_comment>username_2: I think you are making a subtle mistake in thinking about this situation as a case of “a promise isn’t being kept”, where in fact the issue is not that the chair isn’t keeping his promise, but that he had no authority to be making it in the first place.
The granting of tenure is not within the purview of a department chair, but happens after review by multiple university committees and administrators outside the department. People cannot meaningfully promise that something will happen when it is outside their control. Thus, the statement “only a signed contract has any real significance” is misleading, since the chair has no more authority to sign a contract that says you will get tenure than they had to write an email that says the same thing.
I cannot advise you on what to do, but it’s important to understand what your mistake was, so that you don’t repeat it. Promises that are documented in writing are generally reliable even if they don’t carry the legal force of a contract, since most large organizations know they cannot hope to have any credibility with their employees if they throw around empty promises and then don’t honor them. However, when an official promises you something, it’s an obvious red flag when the promise involves decisions that are beyond the power of that particular individual. You should never trust such a promise, but instead seek to get confirmation of the promise by the people who actually have power over the relevant decisions.
Moreover, on the specific matter of granting tenure, no one who actually has the power to promise you tenure will ever make such a promise. The only way to be completely confident that you will get tenure is to go through the tenure review process, cumbersome and time consuming though it may be, or to accept a tenured job offer elsewhere.
Upvotes: 3
| 2021/08/07 | 1,278 | 5,453 |
<issue_start>username_0: I started a Master in Engineering Management at Santa Clara University, a quite respected university in Silicon Valley, in order to have a degree better recognized in the US, as I've only studied in Brazil and France so far. I've run into a few issues adapting to the US academic reality that are making me rethink whether that was a good decision. Although the pandemic has affected some of the academics, it doesn't seem to be much of the issue here. Here are some issues I've run into:
* Professors don't have much knowledge outside the scope of their classes. One day I asked a professor about alternative tools to the one he was teaching us, and he said he hadn't heard of any, even though a quick Google search showed several that I quickly verified were indeed alternatives. This shocked me, as my professors in Brazil would usually be PhDs in the subjects they were teaching undergrads and were constantly publishing papers, with very advanced knowledge of the skill they were introducing us to. So if a student wants to follow that field, they can just contact the professor who taught the introductory course.
* Grading appears to be based more on the clerical work of delivering homework and finals than on knowledge of the subject or the ability to work with it. My projects, midterms and finals as an undergrad in Brazil were much more detailed and expected the student to be highly skilled to earn good grades; the clerical work of writing up the papers was the least of the effort for each course. This leaves the courses with a quite bland academic taste.
* While I don't hear much about it, I noticed that the vast majority of students appear to be international students coming to graduate and get their OPT so they can immigrate to the US. I don't see the vibe of genius students that I would expect at a respected university in an area with high demand for skilled workers. It feels more like those private colleges in Brazil that people attend just because they can't get into a good university, so they study just enough to graduate and get a certificate.
* There are next to no activities outside the academic field I'm pursuing the degree in. My university in Brazil required us to earn credits from at least three extracurricular activities, while at SCU the only credited (and even then optional) extracurricular activity is an internship. In France we were expected to learn at least two foreign languages, take a few 'human development' courses, like arts or sports, and have plenty of off-curriculum courses, like marketing, law or management, despite it being an engineering school. I don't see any of that here at SCU.
I'm thinking that maybe I should transfer to a university in the UC system, where there would possibly be a more academic vibe rather than an OPT-driven one that exists just to provide students with the documents that allow them to work and immigrate to the US.
My question here is: is it because private universities are there just to give students an advanced STEM certificate that I'm getting bored and unable to find the motivation to study? Should I move to a more academically renowned university so that I could relate more and be more motivated to complete my degree?
Most private colleges and universities in the US are *primarily* undergraduate institutions, though many have a small graduate division. A small number are renowned research institutions as pointed out in a comment. But most are not. Instead, their mission is to provide a good "well rounded" broad undergraduate education that enables graduates to move into a wide range of careers after graduation, including, but not exclusive to, academia. Many graduates move on to graduate school and many professors (myself) encourage graduates to go elsewhere for their grad degrees even if the school has a corresponding graduate division.
But, you may well be better off at a state sponsored institution, since nearly all of them (and all the higher ranked ones) stress research. However, they may or may not put heavy emphasis on masters level education.
But, I doubt that there are any (not-for-profit, accredited) schools in the US that have a specific goal of being an "OPT mill". I'd be surprised to learn of one.
Note that the quality of private schools can vary quite a bit, though accreditation by the traditional organizations tries to limit the range. But not every small school can provide deep specialization in *all* fields.
---
Disclosure: I have family members that attended SCU and all are happy with it and have encouraged their children to also go there. But that is for the undergraduate program and the general academic environment.
Many, perhaps hundreds of thousands or more, successful academics and high level professionals started out at such institutions.
Note that my interpretation of "private" includes only not-for-profit accredited colleges and universities of which the US has over a thousand.
Upvotes: 3 <issue_comment>username_2: Well, if you enrol in a Masters in Engineering Management, you can expect to find both the university and the students are only there to get some money. That's what the program is for.
That is not necessarily indicative of the broader environment of the university or the nation.
An "OPT Mill" would not have any classes at all.
Upvotes: 2
| 2021/08/07 | 1,506 | 5,818 |
<issue_start>username_0: I am writing my dissertation and I have a question on how to present my results.
So I used RStudio to write the code with which I did some statistical analysis using various tests.
I have included all the results of the tests in a table which I have added in my dissertation.
Now, I am unsure if I need to add the code that I used as well and if I need to explain how to use it.
I don't know, since my dissertation is not focused on the code, and I could have done the same analysis with SPSS. I am not really restricted in the method of running the tests, so I don't know how much I should focus on the code itself.
So yeah, RStudio is just a tool, not a requirement.
Thank you in advance!<issue_comment>username_1: If possible, include it.
As I infer from your question, the wrote *new* code which wasn't available to you from a different venue (e.g., as a package from [CRAN](https://cran.r-project.org/)) which you may cite by address and version used, and it was one of your essential tools to process the data. The inclusion of your code written into your thesis allows the members of the thesis jury to replicate what you have done (perhaps some will not pick up the code at all, others might be highly interested to check the methodologies applied). Some of the printed theses I have seen included both a briefly commented presentation of the code (e.g. *via* LaTeX usepackage [listings](https://www.ctan.org/pkg/listings), equally supporting R), as well as the relevant code copied to a CD; today, the electronic repositories by universities and their libraries may accept both the .pdf version of your thesis and this supplementary material (executable code and data within a .zip archive).
Sharing your tools used with future members of your group allows them to continue work along the direction of research, to include your methodologies in other programs. This may lead to publications with you as a co-author. Equally (if not yet part of the SI of your research publications), this eases *a lot* the replication of work you contributed by others equally known as [reproducible research](https://en.wikipedia.org/wiki/Reproducibility#Reproducible_research) (e.g., see the first part of [this presentation](https://www.youtube.com/watch?v=CGnt_PWoM5Y)) and is part of the [FAIR principles](https://en.wikipedia.org/wiki/FAIR_data). (Since you opted for a programmatic approach, quite some issues seen with spreadsheets [[an incomplete listing](http://www.eusprig.org/horror-stories.htm)] already are out of the way.)
With agreement from your supervisor/your university, you might consider the publication of your code on platforms like [CRAN](https://cran.r-project.org/) ([example](https://cran.r-project.org/web/views/Finance.html)), [GitHub](https://github.com/) ([example](https://github.com/cheerzzh/R_for_Quantitative_Finance) of a search for [finance and R](https://github.com/search?q=finance%20language%3AR&type=Repositories&ref=advsearch&l=R&l=)) with some additional effort. *As method* it may be suitable for a separate article of a non-specialized venue like [JOSS](https://joss.theoj.org/) ([example](https://joss.theoj.org/papers/10.21105/joss.00119)), or one with focus on your specialty, too.
Upvotes: 1 <issue_comment>username_2: How much, if any, explanation of the code you include in your dissertation is a question best answered by your advisor.
However, it is critical for reproducibility that, if numerical simulations are done, the source code used for them is made available. With the internet and sites like GitHub, Bitbucket, etc., it is trivially simple to share code and enable others to replicate numerical experiments and simulations.
If the numerical methods themselves aren't the subject of research, i.e. you just used some standard methods in R for some analysis, a single line similar to "statistical analysis was performed in R using the code found at ." is sufficient. Or, if there isn't much code, it can likely be inserted directly in an appendix.
Regardless of where the code is shared, it's important to note which versions/packages/dependencies were used as well. Ideally the code should be clear and well documented, but an inordinate amount of time can be spent "explaining how to use it." If possible also store in repository or link any datasets used.
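As a sketch of that last point in R (assuming base R only; `sessionInfo()` lives in the bundled `utils` package), one low-effort way to record the versions used is to archive the session information next to the analysis outputs:

```r
# Capture the R version, platform, and attached package versions
# that were in effect when the analysis ran
si <- sessionInfo()
print(si)

# Save the same information alongside the results, so it can be
# archived with the thesis code and data
writeLines(capture.output(print(si)), "sessionInfo.txt")
```

Committing `sessionInfo.txt` to the same repository as the code gives future readers the exact environment to reproduce, without having to guess package versions.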
Only saying something along the lines of "statistical analysis was done in R", or "a custom implicit-explicit numerical method was used to solve the equations" really is terrible for reproducibility and should be considered bad form. As an analogy take the beginning of an experimental section of a recent chemistry paper. ([This one](https://pubs.acs.org/doi/pdf/10.1021/jacs.1c00503#) FYI)
>
> All commercially available reagents and solvents used in this study
> were purchased from TCI, Fisher or Sigma-Aldrich and used
> without further purification. Flash column chromatography was
> performed using 40−63μm silica gel (SiliaFlash F60 from Silicycle).
> Preparative thin-layer chromatography was performed on silica gel 60 Å
> F254 plates (20×20 cm, 1000μm, SiliaPlate from Silicycle) and
> visualized with UV light (254/360 nm)
>
>
>
A poor example of numerical reproducibility would be the equivalent of the authors, instead of all that thorough description of materials and methods, just saying, "We used store-bought products, used chromatography with silica gel, and looked at it with light."
Upvotes: 1 [selected_answer]<issue_comment>username_3: Based on my experience, you can generate a report with figures embedded for your dissertation using r Markdown and you could deposit your code and source files on GitHub. For a pub, you may want to add this info in a supplementary file. Definitely get approval from your PI *before* releasing info to the public on GitHub. :)
Upvotes: 0
| 2021/08/07 | 1,708 | 7,159 |
<issue_start>username_0: I am a native of a South East Asian Country (3rd World) and completed my masters in mathematics in May 2021. I am looking to apply to some German Universities for PhD Program for session of 2022.
So, I am thinking of writing to those professors whose interests align with mine asking for funded PhD positions.
I want to know which month of the session is the right time to send an email to professors in Germany, asking to work with them as a PhD student in a funded position.
As such, there is no clear "PhD application season" in Germany.
Upvotes: 4 <issue_comment>username_2: * The majority of funding for PhD students is coming from third-party funds (or from the states), for which there is no hiring season. The exceptions are graduate school scholarships, which are very rare.
* You write that you want to write directly to professors whose interests align with yours. You can do that, but you have to stand out from the crowd here. Professors get tons of badly formatted e-mails asking for PhD scholarships because their interest supposedly aligns with the one of the professor. The majority of these e-mails do not state what this interest would be, however. They are deleted in 3-5 seconds (rough estimate). So if you write an e-mail without prior personal contact, do this well to ensure that you are not seen as belonging to this group:
+ Use a proper sender e-mail address. Don't send the mail from <EMAIL>, but rather from <EMAIL>, or even better, using your current university of alumni e-mail address.
+ Use your full name as sender's name.
+ Sign your mail with the same name as your sender name.
+ Start the mail properly. E.g., "Dear Professor XYZ,"
+ Should you use HTML e-mails, make sure that they do not look like copied and pasted together. Uniform font and uniform font size are a must. No color. Allowed are itemizations, enumerations, bold, italic. No other formatting elements.
+ **Write early in the mail (first three sentences) what the area of interest is that you have in common with the professor.** Be as precise as possible, but choose topics in which the professor has at least two publications. This shows that you actually had a close look, which puts you ahead of 99% of the crowd. This will make the professor have a closer look at your "case" if they happen to have some funding. Also mention in your first three sentences what your existing expertise in the area is. Did you do research in it already? Did you write a thesis on it or a closely related topic? You want to demonstrate that you are a low-risk candidate.
+ If you happen to be in Germany already, you want to demonstrate that you know already how doing a PhD in Germany works. The fact that you talked about "session of 2022" shows that you don't, however. So look it up!
Upvotes: 6 [selected_answer]<issue_comment>username_3: Actually, I think that if you write emails like "I am X, have a master from Y and my interests are in Z" where Z are also interests of the recipient, the chances are low that you get a meaningful answer.
The university you graduated from is probably unknown to the recipient, and therefore you need to show that you are really able to do research in Z. If you published a paper or have a master thesis online, I would link to it.
Upvotes: 2 <issue_comment>username_4: **Most** PhD positions in Germany are regular (fixed-term) employment positions, and they are offered whenever a vacancy arises, i.e. there is *no recurring deadline*. Since those positions are often (sometimes must be) advertised, you should monitor relevant job bulletins. I'm not a mathematician, but I imagine the German Mathematical Society or similar groups have a newsletter or liststerv over which job postings are distributed, for example.
**Some** PhD positions in Germany are funded by stipends, in particular those at graduate schools (less common than in the US), but there are also other foundations that may offer stipends for PhD students. In both cases, positions are *offered periodically*, namely twice per year (once per semester), mostly in spring and fall. For example, the [BIGS](https://www.bigs-math.uni-bonn.de/application/) graduate school invites applications until 30 November and 30 April each year.
Upvotes: 2 <issue_comment>username_5: tl;dr: The right time is now.
-----------------------------
It doesn't matter what the current point in the recruitment cycle is. It is never too early to write, introduce yourself, express interest, discuss a potential visit, possibly be put in touch with other Ph.D. candidates in the group etc.
It may be the case that the official time to apply is still far away, but you will be making a professional contact, getting potentially useful advice, and maybe finding an opportunity to collaborate and increase your chances of being of interest to the group. Plus, if you are asked to visit, you will get to know an additional academic environment up close.
At worst, the Professor might tell you to remind them in a few months.
Upvotes: 2 <issue_comment>username_6: In general, I think you are much less likely to get a response to a cold email like this, than applying to an advertised PhD program. (When I received emails like this as a postdoc -- which immediately was a red flag since I didn't have any hiring powers -- if I didn't delete them my response would be a link to the institute's website). The advertisements should be posted on the website of the University or institute you are applying to. There is very likely also a "job board" section of a journal or institution in your field that lists these types of ads.
Upvotes: 1 <issue_comment>username_7: As many people already said, you are more likely to get a response if you actually apply to an offered position. Nevertheless I can confirm that that PhD students are not only hired on offered positions, but also in response to cold emails or just "being around" and fitting into the group while showing good work. So in any case you want to show the professors what you have already achieved. Note that the results of your work (e.g. master thesis) are not as important as how you presented it and how it was conducted. To explain this further, in my thesis I was working on an open question and could not present meaningful results, while I was able to come up with good theoretical explanations for it.
In addition it is highly important that you do not only write to the best renowned institutions, as they usually have many more applicants (per position) than the smaller ones. Also: stay strong and do not let yourself go because of some negative or missing replies. Usually you will get a lot of those, especially when applying via cold emails without apparent positions.
Upvotes: 1
| 2021/08/07 | 1,675 | 6,940 |
<issue_start>username_0: I am doing my second postdoc under someone who is well known in the field. In grad school, out of curiosity, I did some extensive calculations on a side topic and got very good results. My PhD advisor was okay with it but never wanted to publish a paper out of those results, as the topic was not his area of expertise.
Now when I was in my first postdoc position I had written up the entire work and asked my PhD advisor whether I should put him as a co-author. He declined to be a part of it and advised me to publish it on my own. In the meantime I moved to another position and now my advisor is a big name in my field.
I am currently in a big dilemma. Do I need his permission to publish it as a single author (as I have used the institution's affiliation), or should I just go ahead and submit it? My fear is that if I let him know about this paper, he will certainly slow it down by becoming a part of it, and as a postdoc I need more publications. He is not in the mood to help me out with more publications, even though I have already given him tons of good results. Please help!
But, no on can say whether it will cause you grief if you do this. If your current advisor is unethical as well as being well known then it might. But if they are honest, then they know that they have no claim to authorship here.
You could acknowledge them, perhaps, for providing you a position in which you could complete the work, but nothing more is *owed*.
The only caveat would be if you have signed away to all of your IP while you hold this position. That can happen at some industrial positions and some variations appear at some places in academia. In which case, you would need the permission of the institution to publish.
Upvotes: 3 <issue_comment>username_2: It's a serious question in research ethics. I highly recommend reading the book "[A new approach to research ethics](https://library.oapen.org/handle/20.500.12657/22482)," they cover this whole topic in chapter 3, "Publishing - Authorship."
In that book, they are referring to a very well-known guideline for authors, known as "[Vancouver Guidelines](http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html)", created by the editors of [ICMJE](http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html). This guideline recommends that authorship be based on the following 4 criteria(quoting):
>
> 1. Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work;
> AND
> 2. Drafting the work or revising it critically for important intellectual content; AND
> 3. Final approval of the version to be published; AND
> 4. Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the
> work are appropriately investigated and resolved.
>
>
>
Note that, according to these rules, a co-author needs to meet *ALL* the aforementioned criteria. Therefore, being a supervisor doesn't assert co-authorship by default. From an ethical perspective, your PhD advisor probably didn't feel she/he contributed enough, and therefore did absolutely the right thing by letting you publish without her/his name as co-author.
Regarding having your new advisor in your papers, I suggest following the Vancouver guidelines as much as possible. However, I understand in the vast majority of scenarios, especially in countries/universities/departments/groups promoting *publish or perish* cultures, the aforementioned guidelines are probably just too cute to be taken seriously for obvious reasons. If you are conducting research in such a culture, you will probably be under constant pressure to have [gift/guest authors](https://www.enago.com/academy/authorship-in-research/) in your papers all the time. To resist such unethical pressure or not, is a personal choice.
Upvotes: 2 <issue_comment>username_3: In my view, the tradition in academia is that it's normal to do some work on past projects in each new position. It's simply unreasonable to expect academic work to fit neatly within the academic calendar or progression. There is no reason to expect your PhD work to be completely wrapped up at the moment you start a post doc; no reason to expect your post doc work to be completely wrapped up the moment you start a second post doc; no reason to expect your second post doc work to be completely wrapped up the moment you start a tenure-track position. You can extend the same principle to work done under various grants.
Your new advisor doesn't have an authorship role in this work, and your old advisor doesn't want to be an author; this leaves you as the sole person deserving authorship, so you should be the only author.
I think it would be a good professional courtesy to let your current advisor know you are working on finishing a project from your time as a PhD student. In my view it would be **unreasonable** for them to be upset unless this work is some form of crankery. However, no one can predict with certainty the mind of another and not all people are reasonable. You'll have to use your own judgment.
It would, however, be reasonable for them to expect that you continue to make progress on your new work while working on this old project; it's also reasonable for them to expect that, in the future, once you leave for a new position you'll put some time in to wrapping things up for them as well.
There are many questions here about what your *affiliation* should be for that publication; my view is that you should list both your old and new affiliations, since part of the work will have been done while you were at each.
Upvotes: 3 <issue_comment>username_4: Have you ever talked to your current advisor? What can you loose showing him your current draft? You can inform him that you are going to publish results you created before arriving with his group. Or you could ask him for comments or advise where to publish. This might turn out just nicely for you.
Ethically, he cannot force you to include him as a co-author. If he has something essential to contribute, you can ask him to write a follow-up paper together to save you time and getting your work published. If he threatens you, you know at least what could have happenden if you published without notifying him. Depending on the severity, you just might add him as a second author, report him to your institution, or even look for another adviser.
I am not sure that being the sole author is worth all the hassle. Whatever he does, you can still go ahead and publish the paper.
Upvotes: 0
| 2021/08/08 | 997 | 4,073 |
<issue_start>username_0: I’m a physics student in the UK who should be on track for a first (nothing incredible, but not scraping either) in my undergrad, an integrated master’s, at one of the top institutions in the country. I’ve recently realised, as a result of an internship, how little I like the standard working world, and as such I’d like to do a PhD (preferably in theoretical plasma physics or theoretical CMP) with the hope of going into academia. But I’m worried that having no summer research placements will have a very negative impact on my applications. I will be doing a master’s project next year, but it seems that the other students I know who have similar grades and also want to do a PhD are doing summer research, whilst I’m doing an internship that’s pretty unrelated. I’d quite like to do the PhD in Europe instead of the UK, although I’m not particularly set on that, if that’s at all relevant.
I think it’s worth adding that a) I wouldn’t be able to afford a PhD if it wasn’t fully funded with a stipend, as my family won’t be able to support me, at all financially, and b) the decision to do a PhD isn’t one I’m taking lightly, it’s always been my goal but the chance to do a well paid internship arose and as a) indicates, money can be tight. Any other general advice for someone in my situation who is probably a bit behind on the application track would be appreciated.<issue_comment>username_1: A summer research placement is an advantage. But the lack of one is not a disadvantage.
A supervisor is looking for a student who will be an asset to the research group. That means motivation, energy, common sense, self-reliance and teamwork, as well as the ability to pass exams. A potential student has to convince the supervisor they have these qualities, and if they've done a placement, that can be utilised as evidence. If they haven't, then they just have to find other ways of showing that they have these qualities. An MPhys project is obviously relevant, but so are their CV, cover letter, and what they say and how they perform at an interview/visit.
Good luck!
Upvotes: 2 <issue_comment>username_2: I'm a Post-doc in the biomedical research field, and I have been in an extremely disadvantaged position as well (third-world country, extremely bad grades due to childhood trauma, undergrad at a bottom-of-the-barrel university, masters at a modest university). I did my doctoral studies at a quietly famous European research institution in an extremely competitive frontier biomedical research area and published extremely well (journals ranked in the top 2% for biomedical research). Now I'm working in the UK at another very famous institution and am fortunate enough to be able to network with a handful of famous group leaders across biomedical research. I'm not the brightest person in the room; however, I like to think that what I lack in smarts, I more than make up for in grit, perseverance and a can-do attitude. I have a close friend from my doctoral studies whose beginnings are even humbler than mine (0 access to information, near-poverty-line living).
From my experience, I firmly believe that anyone in any stage of academia is in a position to do a PhD. A single missed summer research opportunity does not mean you won't be able to do a PhD. However, you believing that statement might.
Getting selected for and through a PhD requires a lot more than a head-start.
* Show initiative. Read research articles and reach out to group leaders to understand what kind of positions might come up with them for doctoral studies in the future. If the group leaders don't reply, keep on emailing them until they do.
* Don't limit yourself to universities. Try and locate under-the-radar hardcore frontier research institutions in your field performing excellent research very quietly.
* Develop a thick skin for rejections. They are just a part of academic life.
* Develop a can-do attitude. Being able to push through difficult and hard-to-solve problems that you'll routinely encounter during your PhD is very important.
Upvotes: 0
|
2021/08/08
| 4,531
| 18,123
|
<issue_start>username_0: I am a person in an Asian country who plans to write to professors in Europe for a PhD position in Math for the session of 2022. I know Europe has a lot of countries and is certainly not a uniform block but I think the way of writing e-mails must be similar.
I want to know if my way of writing e-mails is good, so that professors actually look at them and take an interest in my profile. Since I don't know any of them, I can't afford to write a not so good e-mail, especially because there are only a few people working in my research area.
This is the way I write to them. I always attach my CV to the e-mail (4 pages long):
---
>
> Respected Professor,
>
>
> I am a resident of [my Country name] and I completed my masters in mathematics in June 2020 from [My Institute Name].
>
>
> I am looking for PhD positions in Algebraic Geometry.
>
>
> Since June 2020, I took a break to self study more mathematics courses but could not apply anywhere in 2021 due to personal reasons.
>
>
> Upon learning about your work I went through your research satement and found it to be aligning with my interests, hence I would very much like to work with you. Kindly find my CV attached along with this e-mail. Can you please tell me if there is a vacancy for new PhD applicants in your working group for session 2022?
>
>
> I am available to discuss the possiblilty further and look foreward hearing from you.
>
>
> Yours sincerely
>
>
> [My Name]
>
>
>
---
This is the e-mail I plan to send to all the professors I am interested in working with. Please let me know your thoughts on this.<issue_comment>username_1: **No, the email is not good at all.**
First, one step back: Why are you emailing? In your email, you are asking whether there is a vacancy in the research group. If there is one, it will be advertised wherever PhD positions for the country/subject combination get advertised. So look there instead.
Second, while mentioning algebraic geometry puts you [ahead of the curve](https://academia.stackexchange.com/questions/41687/what-is-behind-the-indian-undergrad-research-experience-spam) (assuming you indeed write to someone doing algebraic geometry), the whole "our interests align"-bit is utterly generic. If you are indeed telling the truth, be specific! Be very, very specific! If you are lying here, don't.
Minor nitpick: Delete the "Since June 2020"-paragraph, this isn't important enough to take up the very valuable space in a good cold-calling email.
**Conclusion**: Taking into account Point 1, it only makes sense to cold-call potential PhD advisors in your situation if you are hoping to search for funding together with them. This would constitute a significant time investment for both of you, so be aware that you are asking for a lot here. So try to identify one (maybe two) potential supervisors that are an awesome fit for you, and make sure that the awesomeness of the fit is clear from the email. Also, do look for advertised opportunities and apply for everything which is a decent fit.
Upvotes: 3 <issue_comment>username_2: For an equivalent request I went for something similar;
here is your mail with some changes to the form so it's more "professional".
Tips:
* take care with the words/courtesy rules you use
* take care to be precise
PS: there might be some "minor" mistakes in my writing because English is not my main language ;) I'll let you double-check everything.
---
Dear Professor [Name],
Currently living in [Country, City], I completed a master's in mathematics [be more accurate, that's important for them] in June 2020 at [Institute Name, Country, City].
Upon learning about your work, I went through your research activities and found them to be in the domain I am looking for. I am looking for a PhD position in Algebraic Geometry [be more precise] for next year and would love to work by your side.
For more information about me, please find my resume attached to this e-mail.
I am available to discuss with you and look forward to hearing from you.
Sincerely yours,
[Name]
---
Good luck with your application.
Upvotes: 0 <issue_comment>username_3: First, I want to emphasize that composing such a “generic” message that you plan to distribute without some case-by-case modifications will not work very well. You have to make any such emails *personal* to you and to the professor.
This means making it explicitly clear that you have read some papers by this person: something as vague as “alignment” of research interest will not get you brownie points. Something better would be “I have read your recent paper [give citation], and found [Theorem] particularly interesting because…” or something along those lines.
Next you need to make this specific to *you*. Why are *you* interested in working with *this* professor? What courses or related skill set (including self taught material) or just intense interest supported by courses or experience do *you* have that are useful for work or study in this area. In other words, why should *this* professor invest time in mentoring *you*?
Finally, excuses like “personal reasons” have IMO no place in such letters. You don’t want to tell your life story: most people aren’t interested, at least initially. *If* this person brings up the topic, you can expand on personal reasons in a follow-up email.
Upvotes: 2 <issue_comment>username_4: **No, this e-mail is not good at all.** The other answers have already explained why not. But you say you want a "canonical answer," so let's go through line-by-line.
>
> Respected Professor,
>
>
>
In English-speaking North America and Europe, this is already a red flag. I see this often, so I assume it is the proper form of address in some English-speaking country (India?). But in North America and Europe, this is not how we address mail. Worse, the mails we do receive that begin with "respected professor" are almost never worth reading; so, we have learned to trash such mails without reading them.
The proper form of address may vary by country; "Dear Professor X" or "Hi Professor X" should be OK throughout North America and Europe. Note, you should put the actual name into this block; otherwise, I assume you are just copying-and-pasting a generic response to hundreds of people (at which point, I immediately stop reading and trash it).
>
> I am a resident of [my Country name] and I completed my masters in
> mathematics in June 2020 from [My Institute Name]. I am looking for PhD positions in Algebraic Geometry.
>
>
>
Here is the key point you need to internalize: **most readers are just going to skim the first paragraph and then press delete.** Your readers receive a ton of mail, much of it from prospective students, and do not have the time or inclination to carefully review all of them.
So, your job in this mail, and especially in this paragraph, is to convince them that:
1. You are very impressive, and
2. Your interests are very well-aligned
If you fail on either of these counts, they will delete without responding. In fact, a pretty high percentage will delete without responding even if they are convinced, but there's nothing you can do about that.
So, your first paragraph needs to impress them. And I mean *really* impress them. Something like: "I am looking for PhD positions in Algebraic Geometry. I have 3 papers in [impressive journals], have won an award from [somewhere] (the most prestigious body of mathematics in my country), have the equivalent of a 3.91 GPA, and am particularly interested in [some niche topic]." Note that you are:
* being concise,
* listing the specifics right here in the first paragraph, and
* using a language the professor can understand (e.g., converting the GPA to whatever scale is used in the professor's country, explaining that your award is from the most prestigious mathematical body in your country, etc.)
I realize that you may not have any super-impressive accomplishments like this, but you need to list whatever you do have. Put it all out there in this mail, because if you don't, you will not get another opportunity.
>
> Since June 2020, I took a break to self study more mathematics courses
> but couldnot apply anywhere in session 2021 due to personal reasons.
>
>
>
Nope. See above. In this e-mail, your one job is to volunteer information that will impress them. This information is not impressive, so don't volunteer it. Delete this paragraph completely.
>
> Upon learning about your work I went through your research satement
> and found it to be aligning with my interests, hence I would very much
> like to work with you. Kindly find my CV attached along with this
> e-mail. Can you please tell me if there is a vacancy for new PhD
> applications in your working group for session 2022?
>
>
>
This is so generic it is completely meaningless. What part of my "research statement" aligned with your interest? Again, I assume you are just sending this to hundreds of professors. Instead, you should briefly explain why in particular you want to work for *me* -- which of my papers did you read? Why did you find them interesting? What related work have you done? If we worked together, what topics could we pursue?
Also, I wouldn't say "I would very much like to work with you." You don't even know me! Instead, say that you're interested in "exploring this further" or "discussing future possibilities." Yes, it may be that you are desperate and willing to take anything, but you need to conceal this; desperation is not attractive, and professors receive tons of mail from desperate people.
>
> I am available to discuss the possiblilty further and look foreward
> hearing from you.
>
>
> Yours sincerely
>
>
> [My Name]
>
>
>
If you are asking for a job, it goes without saying that you're available to discuss, etc. It is important that your mail be very short (just 2 or 3 short paragraphs), so don't waste any space with generic statements like "looking forward to hearing from you." Delete all this (except your name). Seriously! I realize some cultures require polite, effusive farewells, but in Western Academia, putting your name alone is perfectly fine, and it's the best option in many cases.
Finally, I would be remiss if I didn't point out that there are several misspellings in your mail. I realize English may not be your native language, but you should get your mail proofread by a native speaker. Mails that contain misspellings are also highly correlated with mails I don't care about, and so most academics have learned to hit the delete button after the second or third error.
Upvotes: 3 <issue_comment>username_5: I don't think this is a good letter. I would personally leave out the bit about the study break as irrelevant. I can see how others may disagree.
More importantly, you're not telling the prof what you're looking for. You need a funded position, but you're not asking for a funded position. When one needs money, it is important to ask for it. You will only create ambiguity and confusion resulting in unnecessary communications if you don't ask for what you need.
Upvotes: 1 <issue_comment>username_6: As others have said, this would either get either automatically filtered or manually put into the junk folder.
>
> Respected Professor,
>
>
>
While I believe this is the correct address (after translation) in a number of Asian countries, it is unusual in western countries. I wouldn't say it is technically wrong (though it does suggest that you may be sucking up to this person - a negative quality), but it does show that minimal effort has been put into this email. With this form of address you could spam many people with the same email, so any recipient may assume that is the case. You can assume "Dear Prof. X" is the most formal and respectful form of address in western countries, and it would be more respectful than "Respected Professor" since it demonstrates you've at least taken the time to get their name right. Though be careful not to refer to someone who isn't a professor as a professor; that indicates that you didn't actually spend time finding out about them. If you happen to email anyone like that, you should use "Dear Dr. X" instead.
>
> I am a resident of [my Country name] and I completed my masters in mathematics in June 2020 from [My Institute Name].
>
>
> I am looking for PhD positions in Algebraic Geometry.
>
>
>
What did you do in your masters? Did you write a thesis? If so, what was the topic? This is the point where you have a chance to intrigue the recipient: why should they bother to consider you over any other random person who might contact them? The topic of your master's thesis gives them some information about what you might contribute and whether you're actually a good fit for their group. By giving the topic you demonstrate either that you know enough to tell that these topics are related, or that the person you wrote to can safely ignore your application as irrelevant.
If you are hoping to change fields you may want to add an extra (brief) sentence explaining why you want to now work in algebraic geometry. Or if your thesis topic convinced you to work in algebraic geometry you might add a brief sentence about that as well.
>
> Since June 2020, I took a break to self study more mathematics courses but could not apply anywhere in 2021 due to personal reasons.
>
>
>
Probably too much information, but in any case the next thing you need to do is explain why you are contacting this person for a PhD opportunity.
>
> Upon learning about your work I went through your research satement and found it to be aligning with my interests, hence I would very much like to work with you. Kindly find my CV attached along with this e-mail. Can you please tell me if there is a vacancy for new PhD applicants in your working group for session 2022?
>
>
>
Too generic, you could use this sentence with anyone you emailed and so haven't demonstrated that you are invested in joining their group (otherwise you would go into more detail). You need to explain to them why them, not in the sense that they are very respected in their field, but rather how do your interests align with their interests.
This section should be targeted at whoever you are emailing, and if you didn't need to spend much time on it, then it's probably bad (unless you happened to do your master's thesis on their work). Because you need to spend more time on this section, you probably won't be emailing as many people as you otherwise would. But this forces you to only email the most appropriate potential supervisors, and so the people you email can be more confident that you may be a potential student/it's not a waste of time to look at your CV. Think of this like game theory if you're familiar with it: by specialising each email to its recipient, you are making it so that you've wasted more time if they reject you. If the person you're emailing sees this signal and knows it can't be falsified (such as by using a template), then they know they work on something interesting to you, rather than just working in your field.
You should also be careful not to show a lack of attention if their website says: "we are currently looking for PhD students, see [site]", "We have no positions available at this time", "PhD inquiries are always welcome". In the case of no positions available, you may want to inquire about whether they anticipate any opening up in the near future, or if they know of anyone else in a related field who is looking, or about to begin looking, for PhD students. It might also be worth asking them about this if you are cold-emailing them and they haven't said that they are looking for students or that inquiries are always welcome.
>
> I am available to discuss the possiblilty further and look foreward hearing from you.
>
>
> Yours sincerely
>
>
> [My Name]
>
>
>
For the ending I would say "Sincerely," rather than "Yours sincerely," sounds more appropriate. Also "I am available ..." doesn't sound right to me, though at the very least it should be "possibility" and "forward".
---
In addition to the above, I'm going to take a guess that there are a couple of cultural differences impacting how you wrote your letter and are leading you astray for contacting potential European supervisors. I would be interested to hear if you think any of these may be correct/relevant in your case.
The first guess is that standardised tests/grades probably contribute less towards who gets hired/accepted in western cultures. When hiring someone, you want to know that you chose the best person, but this is based on what knowledge/skills/attitude the person could bring more so than on what standardised tests say. Because filtering based on this takes more time than filtering according to grades (which are easy to order), each applicant costs the professor more time to decide upon. Therefore the expectation is that the applicant provides stronger signals that this is important to them (by spending more time preparing the email in such a way that it will be wasted if they are rejected), as well as providing more information hinting towards their knowledge/skills/attitude and how it could fit into and help the group.
While I would be incredibly surprised if which people are hired as PhD students is decided solely by the grade they received in undergrad/masters, I could believe that there might be more leeway if everyone who applies is influenced by an assumption like this.
The second guess is that face-saving practices are much less common in western countries than in eastern countries. I'm wondering if the expectation of face-saving may lead you to write more general statements which have less of a chance of being wrong or exposing a misunderstanding that you have. Since a loss of face is not as bad in western cultures, the expectation may be that prospective students take more risks where they might expose ignorance, but if they are good then they will demonstrate a better understanding of the topic than they otherwise could.
Upvotes: 3 [selected_answer]
|
2021/08/08
| 1,807
| 7,947
|
<issue_start>username_0: I recently reviewed a paper for the second time. The authors competently solved all issues that were raised in the first review round, and the only remaining issues were minor grammar and spelling errors (resulting from the authors not being native speakers I would guess). I marked those errors and nevertheless handed in the review as **accept**, because even though there were those minor issues, those seemed like the kind of things that could also be corrected in proof-reading, and did not have anything to do with the content or the overall quality of the paper itself.
After handing in the review a couple of weeks ago, I just received the paper for review for the third time, and as all reviewer comments are added at the end of the manuscript, I could see that none of the other reviewers had any more comments. This means that the extra round of reviewing was basically caused just by me, resulting in another delay for the authors until their paper will be published.
I now wonder if it was right to address those minor errors (resulting in further publication delay), or if it would have been ok/better to "overlook" those, since they might be caught in a subsequent proofreading stage (which will happen anyway no matter how many rounds of reviews take place). I am asking this because I know that the (at times) very time-intensive publishing and peer reviewing process can be stressful and unnerving.<issue_comment>username_1: It doesn't sound like you have anything to feel bad about. You classified the review as "accept", clearly indicating that you didn't need to see it again (even though there may have been typos, etc, to fix before publication). It was the editor who decided to waste time by sending it out again despite this.
I think it is definitely worth commenting on these issues. By doing so, you give the authors chance to correct them when submitting the final version, which they can do in their own time. If you fail to point them out, the best-case scenario is that they add a lot of extra changes at the proof stage, and since journals often have very tight deadlines for checking proofs, and they may arrive at an inconvenient time, the authors may not have time to check the proofs carefully.
Upvotes: 7 [selected_answer]<issue_comment>username_2: >
> I now wonder if it was right to address those minor errors
>
>
>
You were right to point out the errors in the paper. Errors in grammar and spelling are distracting, and can sometimes confuse the reader - especially if they're struggling with the material itself. Plus, after all, you did accept.
>
> Resulting in another delay for the authors until their paper will be published.
>
>
>
The authors can put their paper on arxiv.org or their homepages in the mean time.
>
> I just received the paper for review for the third time ... I could see that none of the other reviewers had any more comments.
>
>
>
If the editor decided to hold the paper over for a third round of reviews strictly due to comments on spelling and grammar, then it's the editor's mistake, not yours. The editor could have accepted subject to proofreading, which doesn't (typically) need reviewers' involvement.
Upvotes: 4 <issue_comment>username_3: I think that both the journal and the author, not to mention readers, would appreciate you marking the errors. Some reviewers make very specific comments about language errors, as in "Change 'interview' to 'interviewed,' p. 30 line 6." Others (the majority) make only a general comment: "This paper contains many language errors. Please have it carefully checked by a native English editor." Whether you flag each error or make a more general comment may depend on the number of errors and how much time you have. Perhaps you could also recommend publication condition that the errors be corrected. Then it's up to the journal editor whether the corrections get checked in a subsequent review. I've certainly seen a great many errors in papers by non-native English writers. Standards and extent of checking vary---some journals apparently have a high tolerance for errors. Your thorough approach is better for all concerned.
Upvotes: 2 <issue_comment>username_4: Maybe it is just me, but depending on the severity, let it go.
I once had a two-month delay because of minor mistakes.
For example, I forgot a comma in a non-obvious place, even after having 5 people proofread the text before handing it in.
And then there was a sentence that was grammatically correct but not "good". After 2 months he finally accepted it, but told me that my grammar should be reason enough for me to give back my title.
Pedantic people like that can really get you down; it hindered so much of my work and gave me anxiety. I always had to wait days just for something like "oh, it is a reverse sentence, and that means that you should use this word instead of that".
I had written a paper about a software project, not about the German language.
On the other hand, if there are really obvious errors, like misspellings or wrong use of words, they should be corrected.
Upvotes: 1 <issue_comment>username_5: >
> minor grammar and spelling errors...might be caught in a subsequent proofreading stage
>
>
>
If there is a discrete proofreading stage in your publication cycle, then it seems to me that "minor grammar and spelling errors" would routinely be corrected by whoever does the proofreading work.
(As an aside, it strikes me as strange that you wouldn't already be aware of this, and that it's not already part of your understanding of the workflow at the location where this paper is being considered for publication.
If you know that there is typically a "proofreading stage" then what else would you expect them to do, other than correct "minor grammar and spelling errors"?)
Upvotes: -1 <issue_comment>username_6: Although grammatical and spelling errors are relevant, pointing them out is often the lowest kind of value an expert peer reviewer can add. That is a proofreader's job. Although this point might be controversial, I consider it a waste of a reviewer's time to try to point these out in detail, and although constructive, it is probably the least constructive of any constructive aspect of a review report.
That said, this does not mean that such errors should be ignored. While I do not pick through such errors when I review an article, I do say something about them. I have one of two standard ways of handling them as I review an article:
* **If grammatical or spelling errors are so severe that they hinder my understanding of the article**, then I consider that a rather major issue and then address it as such, along with my other critiques. In particular, I make it clear that I cannot appropriately evaluate the article [that is, I cannot recommend acceptance] without these points being corrected and clarified.
* **If grammatical or spelling errors do not significantly hinder my understanding of the article**, then at the end of the review, I write something like this: "There are some grammatical and spelling errors in the article. For example, [I always give one or two examples]. I recommend that you send the final version of the article to be professionally proofread." Note that I do not ask them to have the next revision professionally proofread. It would be expensive and unreasonable for them to pay to proofread each revision, especially if the errors do not hinder my understanding.
Upvotes: 0 <issue_comment>username_7: One approach, which has often worked for me, is to explicitly tell the editors that you feel the paper is already in good shape. Something like:
>
> The authors have ably addressed all of the reviewers' concerns. I have a few minor suggestions (listed below), but these are entirely at the authors' discretion. If the other reviewers are also satisfied, this paper should proceed straight to publication; I do not need to review it again.
>
>
>
Upvotes: 1
|
2021/08/08
| 552
| 2,466
|
<issue_start>username_0: I'm working on a number of projects right now, some are near "completion" (as much as that is ever possible!). I've been thinking a lot about preprints lately. When would be the appropriate time to upload one? At the same time as you submit a manuscript to a journal? My fear is that all my previous work has changed quite a bit in the review process (which seems normal enough), so if this is normal for my work, should I avoid preprints at this stage? What if the results are modified by incorporating suggestions of reviewers?<issue_comment>username_1: I don't know about ecology specifically, but in biology in general (my experience is in molecular and computational biology and genomics), I always submit a preprint when I first submit the article to a journal for review. Yes, the paper sometimes changes between submission and final publication, but that just gives you something new to talk about in your tweet-torial about the final publication that was different to when you tweeted about the preprint!
Almost all preprint servers will allow you to upload new versions of papers after initial submission, but be aware that some journals allow you to submit articles that have been preprinted, but then do not allow the upload of updated versions after that point.
Upvotes: 2 <issue_comment>username_2: 1. As was said in a comment, some journals don't accept papers that are published as preprints (although it seems to become unusual these days). So if you have one or a few journals in mind, check their policy before putting out your preprint!
2. You don't want to have a preprint out there that is embarrassing, so wait with publishing a preprint until you're really convinced of it (and maybe have the material discussed with a few people).
3. If these issues are out of the way, I'd say publishing a preprint is fine and sometimes you want to do it early in case others work on the same thing and a time stamp can tell that you were there first. Also, if your preprint is out some time before submission, you may discuss it with some more people and improve your journal submission. (I do realise that one may sense a slight contradiction to the first issue, but everything can be improved, even if you are at some point convinced that it's good enough!) People also may cite your preprint in case the publication process takes so long that the journal version doesn't exist at the point where others want to cite you.
Upvotes: 1
|
2021/08/08
| 1,403
| 6,005
|
<issue_start>username_0: I have been applying for academic jobs which are admittedly a bit out of my league. I have a Bachelor degree from an elite (ie world top ~3, depending on ranking) university. My course had an interruption in it, which I spent working for an academic institution (and I got paid).
I have been finding academic jobs in the past by emailing professors to ask if I could work for them. Most of the time the answer was no, but there was always at least one person who would give me a job when I was looking for one. I think this is probably because I was lucky enough to be enrolled at an elite university (yes, I think that is at least 90% luck, but [that's another topic](https://www.youtube.com/watch?v=3LopI4YeC4I)).
I know that people who work for research institutions usually have higher qualifications than just a Bachelor's. However, when I was given a job, I was always paid.
Recently, I have encountered a new situation: the people I applied to were keen on giving me tasks, and they seemed to think I was qualified to do them, but they didn't think they needed to pay me. They thought they were mentoring me, maybe, or something like that. I.e., their idea of my participation was that I work under their guidance: I get their guidance & the possibility to contribute to research papers, they get the code I write.
While this would be perfectly ok for me if I was rich, I cannot afford to have a job which does not pay. (I don't mind if it pays badly, I am not looking for an extravagant lifestyle, but I do want to keep a roof over my head.)
**Previously, I thought that this goes without saying: if you don't think I can do a good enough job for me to be paid, then just don't hire me. If I do work, I get paid.**
Is this approach too arrogant? Is it normal for slightly underqualified people to work for academic institutions without being paid?<issue_comment>username_1: No, it would be illegal for them to hire you to work and not pay you, regardless of your qualifications or the work you are doing.
But given your level of qualifications there may be some confusion with the people you're contacting, who may think you are asking for a research project as part of your degree. Thus they assume you are already being paid through whatever funding model your country uses for degrees, so they don't need to pay you further. You perhaps need to make clear to them that what you are looking for is not a research project for your degree but an actual job.
That said you may find it hard to get an actual job, as they will have a large pool of people who will do the work for free (as part of their bachelors or masters degrees).
Upvotes: 4 [selected_answer]<issue_comment>username_2: It is *very likely* a communication problem as username_1 suggested.
If it is a full-time job, you should get paid. You will also have tasks that do not benefit you at all. Usually these jobs are advertised, so if you made contact yourself, this may not be the first thing the professors think of; you should therefore clarify from the start that you are looking for this type of work.
It is also common, especially if you contact the professors rather than apply for an advertised RA job, that they offer you affiliation and a project where you work as much or as little as you want, only on your own project, and it is unpaid. This option is mostly chosen by students who want better chances when applying for PhD positions or who want to do their thesis project.
Our group has both kinds of research assistants. It is always discussed before they join the group what they want and what we can offer so there are no misunderstandings.
---
Edit: added *very likely* to first sentence.
Upvotes: 2 <issue_comment>username_3: Whether student work is paid or unpaid generally depends on the type of work and the rewards it gives to the student. It is not particularly unusual for academics to offer to do joint research projects with students where the only consideration offered to the student is experience/credit on the project. In some cases the student contribution might be sufficient for co-authorship or acknowledgment in a paper, and for lesser contributions it might just give the student some experience and a useful academic reference. Unless the contrary has been agreed, this is not so much "hiring you for work" as it is "offering to let you participate in a research project".
Academics usually have plenty of students willing to work with them for free (including PhD candidates with more experience than it sounds like you have) so cases where the student is unpaid are common. Indeed, some students may wish to participate in a research project as a way to get experience to apply for a PhD candidacy or a paid research position, particularly if they are able to get co-authorship on a paper. Within employment law, such arrangements are often considered to be a type of "volunteer work", or "vocational placement", or an informal "joint venture", rather than an employer/employee relationship. (Contrary to another answer here, failure to make payment for work of this kind is usually not illegal, so long as it meets the requirements that put it outside the scope of an employer/employee relationship.)
There is nothing wrong (or arrogant) with you wanting only paid employment, but because unpaid work in this area is common you should be up-front about your desire for paid positions when you email academic researchers about work of this kind. From a practical perspective, it is also worth asking yourself whether you will get any success seeking paid employment of this kind ---i.e., are you easily replaceable by an unpaid volunteer with the same skills. Given your (relatively) low qualifications you will probably find that there are other students with the same or better qualifications willing to work for free on research projects. Still, if the academic in question considers your work sufficiently good you might be offered a paid position.
Upvotes: 1
|
2021/08/08
| 409
| 1,699
|
<issue_start>username_0: Is it okay to rewrite whole chapters from a book while describing a theory you are using in your Bachelor thesis? Is it okay to have only two books as references and a few (3-5) papers on arXiv to cite?<issue_comment>username_1: You need to beware of copyright law here. A "rewriting" of a chapter could be construed as a derived work, for which a license is needed. It is better to formally quote small excerpts as needed. Even paraphrasing, if excessive, can involve copyright issues. And of course, you need to cite what you use to avoid plagiarism.
The number of books/papers that you need to cite is best asked of your advisor(s) and what they will accept. It is a purely local question.
Note, however, that some things fall under the "common knowledge" exception to IP conventions. You can write your own understanding of things you've learned if they are common knowledge. Most undergraduate material (certainly not all) fits this exception.
But, for practice in learning to be an academic, if that is your goal, a rather formal approach (quoting, citing) is probably in your interest. The reader is pointed to the originals and can consult that as necessary.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Don't quote or paraphrase whole chapters: extract and rework the material you actually need.
Whatever sources you use, be sure to cite them properly.
Whether those chapters and the several arXiv references are "enough" depends on the subject matter, the kind of dissertation, and the standards in your program. Talk to advisors in your department and look at other theses submitted at your school to get an idea of what is expected.
Upvotes: 2
|
2021/08/08
| 876
| 3,599
|
<issue_start>username_0: I'm editing a quantitative paper in sociology based on a Statistics Canada survey. The survey uses the term "visible minority" in a yes/no question asking if participants identify as a member of a visible minority group. This term is used across many Statistics Canada publications, but "visible minority" is rejected by many scholars today and for that reason I don't want to use it in the paper.
In the statistical demographic chart in the paper I propose to use the word "Race" to designate the demographic category (along with other categories like age and gender) and "Racialized" and "Nonracialized" as the two possible subcategories. I would then add a footnote to explain that these terms diverge from the terms used by Statistics Canada, giving the reason for the changes. Does this seem like a good solution?<issue_comment>username_1: If you don't mind wordiness, try "Racial/ethnic self-identification". If you want to be extremely cautious, you could also try "Cultural self-identification", though that risks creating a certain amount of confusion. The 'self-' qualifier isn't strictly necessary, but does help assuage any worries that people are being objectified as something other than how they see themselves.
In either case you can use standard racial and ethnic nomenclature: White (Caucasian descent), Black (African descent), Hispanic, various Asian labels, etc. I'd avoid using 'non-white' unless you are constrained by the data. 'Non-white' is a tone-deaf relic of colonialism, implying as it does that the only important or relevant distinction is whether one is white.
P.s. And yes, incidentally, there is no such thing as race from any objective scientific perspective. The human genome is extremely restricted: a typical species of bird has two to four times the genetic diversity of the human species, and we don't see any need to divide *those* up into races.
Upvotes: 1 <issue_comment>username_2: I’m not a sociologist, so take my opinion with a grain of salt. But it seems to me that the correct term to use *in the writeup* is exactly the same term that was used when the data was collected. That is, if the question on the survey form people who participated in the study answered was about “race”, then that is how you need to report the data. If it was about “racialization”, that is how you should report it, etc.
If you don’t use the identical language to what was used in the data source, you risk causing confusion, distorting the meaning of the data, and opening yourself up to accusations of making the change in the reporting because of some personal or political agenda, and/or accusations of being a sloppy researcher.
Now, if you yourself disapprove of the terms that were used in the data source, or don’t want to be seen as endorsing them, you can use a device like “[(sic)](https://academia.stackexchange.com/q/94282/40589)” to emphasize that the term originates in the data you are using.
**Edit to address rephrased question:** using a footnote instead of “(sic)” is also okay, but in my opinion the footnote should explain what the actual terms used in the survey are and not just say that your terms diverge from the original ones. The main point is to be transparent with your paper’s readers so they have the information they need to accurately interpret the data you are giving them, and aren’t led to suspect you of any weird shenanigans. If you cite data from a survey, you have a scholarly responsibility to describe it accurately even if you disapprove of some of the labels used.
Upvotes: 4 [selected_answer]
|
2021/08/08
| 588
| 2,475
|
<issue_start>username_0: I spent 7 years after high school working in hospitality, customer service, and semi-skilled manual labor. I returned to school got a bachelor's and later a masters, and now work in a research lab. I do not include any of my pre-respecialization work experience on my resume or LinkedIn. I also do not include my high school graduation date. As such, someone reviewing my resume could mistake me for a different candidate, one who was more focused and went right into the field. In some scenarios (application/interview), I do not do/say anything to discourage this assumption. I have a youngish face, though in terms of energy level I'm sure it comes through. Obviously, co-workers have an idea.
I feel that some cases of respecializing are respected, such as transitioning from one trained, professional role to another. People change, and taking steps to align your career with your priorities and interests, if these have fallen out of alignment, is looked at favorably. Meanwhile, "menial" jobs (retail, customer service, food service, construction in some cases), regardless of the reason a person might stay in them, are looked at as lost time, and in most cases are best glossed over or avoided.
Anyone have thoughts on this? If your resume/publications/accomplishments etc. do not suggest your age, should you fake it where you can? (as many people are most productive early on in their careers, ageism, etc)<issue_comment>username_1: It depends on what you mean by "fake it". You need to be honest, but you don't need to volunteer things that aren't required. Nor do you always need to give the most complete possible answers.
But honesty is required.
I think most people will accept you as you are, not looking for ways to put you down over issues like age. There are exceptions, of course, but more people "do the right thing" than not. And a lot of people will have gone through some of the same things. Not everyone's path is smooth.
Upvotes: 4 <issue_comment>username_2: I think you are looking from a subjective perspective.
Everyone perceives things from a different point of view. For example, moving from "hospitality, customer service, and semi-skilled manual labor" to research shows that you have the ability to adapt to a new environment, which is a good thing in research.
Sell your past with better words rather than hiding it.
Be honest! There must be a post/job that will appreciate your "agility".
Upvotes: 2
|
2021/08/09
| 1,537
| 6,972
|
<issue_start>username_0: I have done research and reached results that seem logical and impressive, but there is a problem: one of the predictions of my theory is that antimatter cannot be found on Earth and cannot be made, which would make the scientific research and experiments that report finding antimatter wrong. Could that really be the case, that the research is wrong and we have been lied to about antimatter, or am I wrong? Is it possible for scientific research to be accepted when it contradicts previous scientific experiments but is in line with other experiments?<issue_comment>username_1: It is highly unlikely that anyone will accept theory-based research that concludes that a large range of experimental results are impossible to have been observed, even though multiple groups report observing them.
You would have to offer an alternative explanation for why those groups have observed what they have, or accuse not just one scientist, or even one group, but multiple groups across the world of lying, which is unlikely to ever be accepted.
In almost all cases in science, experimental evidence trumps theory.
Upvotes: 5 <issue_comment>username_2: It's certainly possible to accept research that falsifies several experiments.\* Einstein's theory of relativity has come to be accepted even though it falsified mountains of experimental evidence for Newtonian mechanics.
But there's one very important thing Einstein's theory did: it is able to explain why previous experiments saw the evidence for Newtonian mechanics. It is able to explain why those experiments didn't see deviations (e.g. speed not large enough), and it is able to predict where to look to find deviations. Put differently, Einstein's theory reduces to Newtonian mechanics in the limit of small masses and small speeds. If it didn't, it would never work.
Any new theory needs to be able to explain why the old theory seems to work. If your theory claims that antimatter does not exist, it must also be able to explain why so many experiments, conducted by different people across different generations, appear to see antimatter. It must be able to explain why PET scanners used by doctors works even though positrons don't exist. If it's not able to do this, your new theory is dead before it even begins and no physicist will pay it serious attention.
---
\*Here by "falsifies several experiments" I mean "falsifies a theory that was previously accepted as the explanation for several experiments".
Upvotes: 3 <issue_comment>username_3: This is more an explanation than an answer.
For a theory to be *valid* it has to be able to explain all observed phenomena, even if only approximately. "Approximately" since we can't measure at an infinitely fine scale. So, the theory that the Earth's orbit is elliptical is valid, but only with certain assumptions and only at a certain scale of measurement. Perturbations occur, for example. But a "theory" that the earth's orbit is circular would not be valid, since observations contradict it.
However, one theory can be replaced by another if the new one provides better explanations for all observed phenomena, in particular those that weren't able to be made in the time-frame of the earlier theory. At one time the theory was that "atoms" were fundamental. That theory has been replaced since new measurements called it into doubt and newer theories (quarks...) provide better explanations. But the new theory still has to be able to explain older phenomena to be valid.
There are two ways (at least) that one theory can be considered "better" than another, and opens the possibility of one replacing the other. The first is that the new theory provides better (more accurate) explanations of existing phenomena, and especially a better way to predict what is likely to occur, even if only approximately (remember the importance of "scale" as above).
The second way that one theory might be considered superior to another is that if it is conceptually simpler. It has to pass the "accuracy" test, of course, but scientists prefer theories that are simpler and with fewer interacting elements.
Now, to the question at hand. If you can create a theory that can provide a model for existing phenomena but that doesn't need "anti-matter" as part of the explanation, then it might be simpler. It isn't a question of the "existence" of anti-matter, it is which theory can better explain the phenomena and, within those boundaries, which is simpler. Such a theory, if *valid* (see above) would make questions about the "existence" of anti-matter moot. But no such "theory" would be valid if it predicted things known not to be true. So, the answer to the question as you posed it, would be no.
And the new theory will only be known to be valid at a certain scale of observation and might be invalidated by new observations. The universe "is what it is". Theories are an attempt to explain it, not define it. The universe is "messy". Theories try to make sense of it to make the complexity understandable.
Upvotes: 2 <issue_comment>username_4: Experiments are the arbiter of what's correct. Given this, an experiment cannot be falsified by theory, as many commenters have said.
I'd like to add another angle to the existing responses. Experiments need to be *interpreted* in the light of a theory. In other words, most experiments need a theory to be put in context. So, both experimental questions and answers need to be translated into the language of the given theory. This is not always an easy task.
To give an example: when you do General Relativity Theory (GRT), the theory determines not only the dynamics of the objects, but how your observations are going to look. If you assume that GRT is flawed, you have to correct not only the dynamics, but also what you expect to see with your measurements.
A related problem is the discrepancy in the Hubble constant as delivered by various measurements; it should be unique, but different measurement techniques give different and incompatible results. So, either we do not understand the measurement theory correctly, the measurements themselves are flawed, or - but very unlikely - the Hubble constant is not a well-defined unique concept, which would indicate new physics or at least a substantial reinterpretation of GRT. However, that last is the least likely - it is far more likely that the experiments themselves or the tools to evaluate the experiments are flawed somewhere. See also the apparent discrepancy of proton sizes depending on measurement technique: more careful measurements have now confirmed that proton size is consistent across measurement techniques.
Bottom line: updated theories *can* provide better interpretations for existing experiments which explain discrepancies, permit to resolve inconsistencies or suggest alternative experiments to be tried. A theory that just makes existing experiments look more inconsistent is worse than being wrong - it's completely pointless.
Upvotes: 3
|
2021/08/09
| 791
| 3,174
|
<issue_start>username_0: PhD days are important in one's life. They influence a person's research and academic career to a great extent, and in many cases they influence personal life to a large degree.
The quality of a PhD is generally assessed through the thesis the student submits, so the thesis is the sole outcome in documented form. It is therefore of utmost importance for a student to frame and write her thesis with great care. I am in search of the abstract qualities/characteristics required of a PhD thesis.
The following are some characteristics of a PhD thesis I am aware of:
1. **[Novelty](https://dictionary.cambridge.org/dictionary/english/novelty)**: **the quality of being new** or unusual, or a new or unusual experience
This characteristic says that the thesis pushed the boundary of the literature to certain extent which has not been pushed earlier by any other (at-least not documented).
2. **[Original](https://dictionary.cambridge.org/dictionary/english/original)**: **An original piece of work**, such as a painting, etc. is produced by the artist **and not a copy**
This characteristic is almost similar to the first one but the uniqueness of this characteristic is that it ensures that the author of thesis did not copy the work from any other and the novelty is originated from her only.
I may be unaware of other characteristics required of a PhD thesis in general. Are there any other characteristics to keep in mind before starting a PhD thesis?<issue_comment>username_1: Originality and novelty are required, but not sufficient. In most fields, *usefulness* is just as important. But usefulness can be interpreted many ways. One is just that it contributes to understanding of a field. More important, when it occurs, is that it enables and opens the door to further explorations.
Very occasionally, however, a dissertation (or in general, any paper) can unify disparate threads of a field in which the existing theories leave gaps of understanding.
But, in some way, a "good" dissertation will push back against the darkness at the edge of understanding of a field, if only in a very limited sense. It should answer an interesting question for which an answer is not previously available.
Upvotes: 3 <issue_comment>username_2: My university defines a thesis worthy of a PhD as:
>
> A candidate for the degree of PhD, PhD with Integrated Studies, MD,
> DDSc, DMedSci, EdD, DEdCPsy, DClinPsy or EngD is required to satisfy
> the examiners that his or her thesis:
>
>
> * Is original work which forms an addition to knowledge
> * Shows evidence of systematic study and of the ability to relate the results of such study to the general body of knowledge in the subject
> * Is worthy of publication either in full or in an abridged form
>
>
> In addition, the form of the thesis should be such
> that it is demonstrably a coherent body of work, i.e. includes a
> summary, an introduction, a description of the aims of the research,
> an analytical discussion of the related findings to date, the main
> results and conclusions, and sets the total work in context.
>
>
>
Upvotes: 3
|
2021/08/09
| 4,036
| 16,806
|
<issue_start>username_0: Conferences are a great platform to exchange ideas and to learn about different research areas. Also, more importantly, conferences expedite the process of publishing as journals may take months/years.
However, my question is: "why was it mandatory to attend a conference in person in order to publish a paper in some xyz conference in pre-pandemic times?"
There are many people constrained by factors like money, time, or family who struggle to attend conferences. They would prefer publishing their results at a conference to expedite the process of publishing, but find it extremely difficult to travel to a foreign country. Conferences are conducted in extremely extravagant venues, and attending just the lectures costs 500+ euros, to say nothing of food and accommodation. Conferences are often held in big cities, so lodging is also expensive. Travelling to distant countries is not only costly but also has a large environmental and climate footprint.
In order to grow in academia, it is important to publish fast. But why does research publishing impose constraints like this, and why does nobody speak out against it? Why can't authors be given the choice not to attend if they face other constraints?
Will the post-pandemic era be even slightly different, and will we be inspired by the online conference systems of the current times?<issue_comment>username_1: You point out the advantages of getting together at conferences. Prior to the internet this was the most efficient way for groups of scholars to meet and interact. That has changed, of course.
But, the fact remains that publishing isn't free. There are certain costs, perhaps a bit lower for online than print, but they are still there. Someone has to cover them and usually the funds come from a variety of sources.
For example, large conferences in CS normally have corporate sponsors who contribute to the costs. But attendance fees also cover a large part of the conference and subsequent publication. Members of organizations like ACM also contribute to publication costs through membership fees.
You also point out a problem. Costs, especially travel and housing costs, can be prohibitive for some people, especially if the conference is far away or there are disparities in the economies of different places. This problem doesn't have a complete solution.
One possible solution would be for governments to get more involved, though that might also have downsides. But publication fees could be supported (in theory) by taxation, spreading the costs over more people.
Grants to individuals are possible in some places to cover conference fees and travel, but this needs to be planned for. The grants might come from governments (such as the NSF in the US) or from private companies. I once had such a corporate grant that I could use for travel.
Some academic institutions will also provide funds for conference travel, usually assuming the presentation of a paper.
Some conferences, but not the big ones, are held on university campuses during vacation times, reducing costs. But a large conference needs a large venue, with a lot of support activities, and those are, as you say, somewhat extravagant.
My personal view is that education and scholarship, including publishing, is insufficiently supported by government (taxation), and that is a US view. More, in my view, needs to be done to assure a better future.
---
A note on the pandemic:
When the pandemic hit, it was realized that things had to change and that standard practice couldn't be used. But the "compromises" made were always known to be imperfect and experimental. No one really had a very good way to carry on as in the past. But "let's do the best we can with what we have" let us stumble through the chaos. Either we eventually go back to what we had before (conferences) or we design a new way to interact remotely that doesn't have so many downsides. We don't know yet which outcome will occur.
Upvotes: 3 <issue_comment>username_2: >
> Conferences are conducted in extremely extravagant venues and attending just lectures costs 500+ euros, leave aside food and stay.
>
>
>
The cynical answer is that in-person attendees pay fees that fund the conference.
Typically conferences involve people presenting the work in some fashion. Even now, video conferencing tools can be a bit unreliable. I'd say they've improved substantially since the pandemic encouraged a lot of investment in that area, and it works well with other trends in the tech industry like cloud services. That said, I think **people attending a conference in person want to hear in-person presentations**. If people show up at a conference where half of the presenters are remote, they might wonder why they bothered to come in person.
As @RichardErickson points out, in some fields conferences are not the norm, or at least are not the "most respected" venue for publication. So I'd say the causation is a bit upside down: you're asking "why are conferences this way" when you could possibly see it more directly the other way: "'conferences' are what we call it when work is presented this way". You could come up with alternatives, but A) Those alternatives already exist, and B) They just aren't called conferences.
Upvotes: 6 <issue_comment>username_3: To answer your question with a question, **what would be the point of publishing a paper at a conference that you don't attend?**
I suspect this question is highly dependent on field, but in the natural sciences, I struggle to imagine what would be gained by submitting a paper to a conference that you do not attend. You cannot give a talk, you cannot rope people into your poster, you cannot network, and you can't do anything fun. Basically, you'd be throwing your idea/work out into the void--albeit a void where your work is now technically public, but not published in a journal, with none of the upsides. Why not just submit a paper to a journal at that point?
Upvotes: 4 <issue_comment>username_4: Having been involved with the organization of a number of conferences, I observe that this comes from the peculiar combination of computer science publication culture and venue contracts.
For most other fields, this would simply never arise as an issue, because conferences don't have "real" publications, and thus there is no point to sending a paper to a conference that you don't attend. Because conference publications "count" in computer science and a few related fields, however, there is a strong incentive to publish but a weaker one to actually show up and give the talk.
So if you didn't force people to show up and give the talk, then a significant fraction of papers might not be presented. That's embarrassing for a conference, but worse, likely to be disastrous for its finances. And that is because of the contracts conferences have to sign with the venues that host them.
Most places that will host a conference, like hotels or convention centers (and even many universities!) will require a certain minimum revenue from the conference, based on the number of days and space that is being occupied. This can be structured in a number of ways, e.g., number of rentals from the room block, total cost, etc., but the bottom line is that for a typical conference the majority of your registration fee goes straight to the venue. And if not enough people show up, the conference still has to pay that minimum cost and will go into debt. Most conferences are thus always walking a fine line between higher registration fees (which attendees will resent) and higher risk of financial disaster (which may kill the conference). Requiring an author to attend (or at least pay a registration fee) is one way to help make finances a little more predictable.
During the pandemic, a lot of conferences have, in fact, successfully gone virtual, and I expect that some will continue that way. But people do still want and need to meet in person for a number of reasons, e.g., you can't have hallway conversations when there are no hallways! And any conferences that meet in person will, no doubt, be right back in the same tension between venues and attendees.
Upvotes: 5 <issue_comment>username_5: **tl;dr: Many people like in-person conferences, and mandatory attendance for authors makes it easy to justify funding of conference trips.**
In-person conferences are popular. One thing that many people like about them is that they mean a context change: people leave their familiar daily environment with its many distractions and constraints. Some people say that in-person conferences are the time when they actually get some serious work done.
With the mandatory attendance model, having a paper at a particular conference is often seen as a ticket for a funded trip to that conference: funders and institutions won't ask questions why the trip is necessary.
In contrast, virtual conferences have often been regarded as an additional burden to accommodate in an already-full schedule, rather than a relief.
Should models with non-mandatory in-person attendance (hybrid conferences) become more common, people are worried that this will negatively impact the support from their funders and institutions to visit conferences in person, because funding a whole conference trip is generally more expensive than just the registration fee for virtual attendance.
Upvotes: 3 <issue_comment>username_6: There's many excellent answers here, which give some of the reasons why conferences were held in person before the pandemic, but one thing I'll add is that even before the pandemic it was not always mandatory to attend the conference in person.
### I absolutely love this example from 2016:
At the most prestigious annual conference on Adiabatic Quantum Computing, one of the most prominent researchers in all of quantum computing ([<NAME>](https://scholar.google.com/citations?user=6WYsaqMAAAAJ&hl=en), a tenured professor at MIT, who you might know as the 'H' in the [HHL algorithm](https://en.wikipedia.org/wiki/Quantum_algorithm_for_linear_systems_of_equations)) gave a lecture virtually at the in-person conference. [You can see the entire talk](https://youtu.be/t8BPaY06XCo?t=14), including the speaker on Skype (not Zoom) being introduced by the famous [<NAME>](https://scholar.google.com/citations?user=keJ0TbgAAAAJ&hl=en) who is often credited with coining the term "Adiabatic Quantum Computing" which is the name of the conference. **He gave the talk virtually because he had recently had a baby.**
### It also happened in completely different research fields:
In *chemistry*, for many years before COVID, a Director at a Max Planck Institute named [<NAME>](https://en.wikipedia.org/wiki/Ali_Alavi) gave lectures through Skype at various conferences in the United States while he was in Germany. Also, [<NAME>](https://en.wikipedia.org/wiki/Terry_Rudolph) famously wrote in [this paper](https://arxiv.org/abs/1607.08535):
>
> "This article is the extended text of a talk I planned to give a couple of places in the United States this year, but will not do so now having been denied a visa (apparently no immigration officer likes to hear the words “Iran” and “physics” in the same sentence)"
>
>
>
after restrictions were placed on visiting the US, for anyone who had been to Iran since 2011, like <NAME> had in this case. This was even more severe when <NAME> (who was living and working in California, at University of Southern California) and [<NAME>](https://scholar.google.ca/citations?user=KhCiiawAAAAJ&hl=en) (who was living and working in California, as a senior researcher at Google) could not attend a conference in 2018 **that took place in California** (the state they were living in!) because it was held at NASA which is an "[independent agency](https://en.wikipedia.org/wiki/Independent_agencies_of_the_United_States_government)" but still an agency of the US Federal Government, and therefore didn't allow people born in Iran to attend. Mohseni ended up choosing not to give a talk, [Marvian gave a talk](https://riacs.usra.edu/quantum/aqc2018/quantum-computing-conference--speakers.html) but it wasn't made available on Youtube; but <NAME> who was a Canadian citizen and *came to the conference* from Canada and was at the conference dinner held off-site, had to still give his talk "virtually" because he wasn't allowed inside of the NASA building where the conference took place: he literally gave [***this virtual talk***](https://www.youtube.com/watch?v=d2EcEETZyXY) from just a few meters away from the audience (you can hear at the beginning of the linked Youtube video, the session chair saying "Pooyah will now give a talk from wherever he is right now").
### So it wasn't always mandatory, even before COVID
I've given examples from several conferences since 2016, across more than one research area, where people attended talks virtually because of recently having a baby, or being restricted from attending the conference because of where they were born, or in one case it was a British-born citizen who *unusually* was denied entry into USA essentially because he'd visited Iran since 2011. The reasons why attending conference in-person was always preferred (before COVID) are very well spelled out in the other answers, and I agree that those are completely true, but if you could not attend the conference for some reason, it was often (though probably not always) made possible to attend virtually instead.
Upvotes: 2 <issue_comment>username_7: I am using the present tense, as if we were in pre-pandemic times.
In the field of engineering and computer science, almost all conferences make it compulsory for one of the authors to come to the venue and present their paper. Without this constraint, a conference would make much less sense. The very reason for conferences to exist is that people gather together, do social networking, get to know each other, listen to authors presenting their own work and have a chance to make public and private questions to them, or have conversations and exchange ideas and possibly projects with them. If you cannot do that, going to a conference loses half of its attractiveness.
That's the reason why I think that conferences will strive to keep working like that again, as far and as soon as possible.
Upvotes: 1 <issue_comment>username_8: I work in a company that provides a full-spectrum service for academic conferences, from newsletters, paper submissions, and review up to registrations and scheduling. I am the developer, but my colleagues do the venue planning. I have been to a few conferences in very different fields as a result.
Some observations
a) Often conferences are held in the city where a major research institute in that field is located. This means visits to the labs or other pilot projects in the area are part of the program. This is useful to the attendees, and I know attendees who come just for that reason.
b) Conferences have a big role in exposing up-and-coming researchers to other institutes; a large part of the papers are by PhD students publishing for the first or second time who want to make a name for themselves. Nothing beats standing in front of the room and answering questions from the world leader in the field, letting him or her get to know you. If this were online, these meetings would never take place.
c) Some industry associations have board meetings once a year and co-locate it with conferences for this reason.
d) Some conferences are held in exotic locales for exotic reasons to be honest. But in the end most of them are close to a major research institute.
e) The people who make the money are the venues. They are crazy. Once we had to run a conference at a venue that wanted $100 per week to rent a MOUSE, never mind a presentation laptop. Nowadays this is not a problem: laptops have become much lighter, presenters need less tech support, and PowerPoint has actually improved. In those days it was a problem, but in the end it was cheaper to upgrade all of our company's laptops than to rent them on-site for a week. The institutes and organizers make very little profit.
f) The entertainment and networking are the most expensive part. I have been at a conference in applied physics that involved a dinner for about 500 people that cost more than $100,000, which is a bit ridiculous. All of those conference fees go straight towards the room rent and the food you eat, no matter how bad it is.
But in the end, in-person conferences do have a function, especially if it is located close to an institute. I know of several cases where students got to know their advisors or got post-PhD jobs because they were at the right place at the right time and got to know the right people.
Upvotes: 2
|
2021/08/09
| 990
| 4,243
|
<issue_start>username_0: I am doing my masters in energy systems engineering. Whenever I cite a scientific paper that I found online, I include the link where I got it and date accessed in the citation, as well as the DOI, and publisher.
I was discussing this with a classmate, since he just cites the article without including a link, just the journal name, and volume, as well as providing the DOI.
I was wondering what was the correct way to cite an online scientific paper. We are often asked to cite in IEEE format.<issue_comment>username_1: "Correct" citation style is determined by the journal or publisher. That may or may not include a link.
Whenever it's allowed, I recommend including the link and the last accessed date, as a courtesy to your reader.
Upvotes: 3 <issue_comment>username_2: The doi already doubles as a link to the article, and any good bibliography style will make that clickable. Moreover, the doi comes with a commitment that it will work in perpetuity, while the url on the publisher website could change. As such, there is no real benefit in including a link to the publisher website if you already provide the doi (a link to the arXiv could be a different story).
Giving the date you accessed it seems pointless, too. Since journal articles are supposed to be versions of record, and modified only via errata after publication, the situation is fundamentally different from citing a more fluid source such as Wikipedia.
Upvotes: 5 [selected_answer]<issue_comment>username_3: Echoing other answers and points: yes, good to give a DOI, since it supposedly lasts forever. On another hand, for immediate access, giving a URL (qualified by the access date) is useful and informative.
I am not a fan of official rules about bibliographic references (since it's really not clear to me that everything can be fitted usefully into existing rules), but I can understand the impulse to make some universally interoperable version. My joke-quip about this and other things is "well, let's just wait until things stop changing ..." :)
Upvotes: 2 <issue_comment>username_4: There are lots of references style for writing an article. In many of them, authors mention article link, DOI, PMCID, PMID. These styles are chosen by the scientific journal manager or editors. In medical journals are common to use AMA style for writing and Vancouver style for reference section.
For example in Vancouver style in some journals, you need to write journal abbreviation name instead of full journal name and also mention DOI and article link, at the end of the each reference while another journal need PubMed codes (PMID and PMCID).
>
> In conclusion, the journal's author guidelines are what you should read;
> they say whether the use of links is mandatory or not.
>
>
>
Upvotes: 2 <issue_comment>username_5: Having read and graded thousands of student papers, the biggest citation mistakes I see involve citing the site of download instead of the actual publication details. When students list the Library of Science instead of the journal name, it tells me they don't understand why they're writing it down at all.
I couldn't care less about whether a link is provided or not, so long as the citation is correct. If the citations are not correct, students will feel it clearly in the grade.
Upvotes: 3 <issue_comment>username_6: As I stumble over questions like this a lot: use a bibliography tool and add everything you have as metadata, then let your lecturer decide on the citation style and just use that. Perhaps make an example export and ask if it suffices.
The wonderful thing about bibliography tools is that it is one line of configuration or one selection in a pull down to change to the defined needs.
Of course a DOI is a URI and thus extremely helpful, but what information should be put where (e.g. in-text, in-footnote, in-bibliography) is a highly artificial choice. And a DOI, being unique, could replace all other (then redundant) metadata, like the URL, the date accessed, or others.
There is no, and there never will be, consensus about how to style citations (as you can already see in all the answers and comments). So just let others decide and use a software configuration that circumvents these problems.
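As an illustration of the "store everything, let the style decide" approach, here is what a reference might look like as a BibTeX entry. Every field below (author, title, journal, DOI, URL, dates) is hypothetical; the point is that whether the URL or access date actually appears in the output is decided by the chosen bibliography style, not by the entry itself:

```bibtex
@article{doe2020example,
  author  = {Doe, Jane},
  title   = {An Example Article},
  journal = {Journal of Examples},
  year    = {2020},
  volume  = {12},
  number  = {3},
  pages   = {45--67},
  doi     = {10.1234/example.2020.12345},
  url     = {https://doi.org/10.1234/example.2020.12345},
  urldate = {2021-08-09}
}
```

With an entry like this, switching from a style that prints only the DOI to one that also prints the URL and access date is a one-line configuration change, not a rewrite of the bibliography.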
Upvotes: 1
|
2021/08/09
| 1,193
| 4,583
|
<issue_start>username_0: I find researchers and writers who published very little, yet were influential in their respective fields of study, interesting. A good example of such a researcher was the mathematician <NAME>, whose motto was "Pauca sed matura" ("Few but ripe"). Authors who seemed to have adhered to a similar motto were <NAME>, <NAME>, and <NAME>.
What are some good examples of scientists and other academics who were not prolific, yet were nonetheless able to leave their mark?
Let's set the bar for the maximum number of publications at 20, so as to include more possible scientists as answers to this question.
Upvotes: 3 <issue_comment>username_2: Two mathematicians, both working in geometry/topology and both with <20 papers:
1. <NAME> has been extraordinarily influential, but only has 17 papers on MathScinet. Much of his most important work (e.g. on the Casson invariant of homology 3-spheres) was written up and published by other people.
2. <NAME> is also a very important mathematician. MathScinet lists him as having 20 papers, but one of them is his thesis (which is not a real publication -- the main result was published in the journal Topology), so he has 19 "real" papers.
By the way, I didn't have to remove Casson's thesis since he did not have a PhD. He did attend graduate school at the University of Liverpool, but his dislike for writing began early and he didn't write a thesis.
Upvotes: 3 <issue_comment>username_3: [<NAME>](https://en.wikipedia.org/wiki/Henry_Moseley) had a very brief career and [published only eight papers](https://www.jstor.org/stable/228365) before being killed during the Battle of Gallipoli in 1915. In Isaac Asimov's estimation, "his death might well have been the most costly single death of the War to mankind generally." However, the works Moseley did complete included pioneering applications of X-ray spectroscopy in physics that, among other things, established the concept of atomic number as a non-arbitrary quantity, refined our understanding of atomic structure, and let him predict the existence of several new elements.
Upvotes: 2 <issue_comment>username_4: <NAME> had only a small number of publications (in fact, really only one major publication, "Versuche über Pflanzenhybriden"—"Experiments on Plant Hybridization" in English), but he laid the groundwork for the modern understanding of genetics.
He was not influential in his own time; his work was, in fact, all but ignored when it was published in the 1860s. Most of the leading biologists of the time, including Darwin, were unaware of Mendel's experiments. However, in the early twentieth century, Mendel's work was rediscovered and duplicated, and since that time it has been incredibly influential. Had his work on pea plant genetics gotten the attention it deserved at the time, he might have been able to continue his work; as it was, when he became abbot at St. Thomas's Abbey in Brno in 1867, he no longer had time to pursue his experiments himself.
Upvotes: 3 <issue_comment>username_5: <NAME> had a tremendous influence on set theory but published only 11 papers. (MathSciNet lists 14, but one is his thesis and two are solutions of problems in the American Mathematical Monthly.) His contributions include the theory of indiscernibles for the constructible universe (later named 0^#), the theorem that the generalized continuum hypothesis (GCH) cannot first fail at a singular cardinal of uncountable cofinality, the "countable vs. perfect" dichotomy for co-analytic equivalence relations, the proof that GCH holds in the model of sets constructible from a normal measure, and the proof that analytic sets have the Ramsey property. Each of these has been the foundation for a great deal of continuing research.
Upvotes: 1 <issue_comment>username_6: Though Gauss published little in his lifetime, he *wrote* [quite a lot](https://en.wikipedia.org/wiki/Carl_Friedrich_Gauss#Writings), and many of his writings have been subsequently published and are now publicly available --- his are quite substantial. In any case, an obvious place to look would be major mathematicians who died young. For example, [<NAME>](https://en.wikipedia.org/wiki/%C3%89variste_Galois) made a major contribution to abstract algebra and then died in a duel at the age of twenty, so consequently his published output was small, despite its large impact.
Upvotes: 2
|
2021/08/09
| 2,227
| 9,134
|
<issue_start>username_0: How do university professors earn money external to their salary from the institutions where they teach?
As far as I understand:
1. Obtaining funding from government/companies for doing research.
2. Forming their own consultancy firms or doing freelance consultancy for external entities.
Am I correct?
What other viable sources do they have for earning money?<issue_comment>username_1: At US universities most tenure-track positions are nominally 9 or 10 months, and allow faculty to do certain other work in the summer. In addition to summer research salary paid by grants, some other common sources of summer salary are teaching classes in the summer, organizing summer schools or REUs or similar events, or some kind of summer consulting for industry or government.
Upvotes: 3 <issue_comment>username_2: A select few university professors make big bucks from the royalties on a basic widely-used textbook.
Note I said "select few". Most academic textbooks make only minuscule amounts in royalties.
Upvotes: 2 <issue_comment>username_3: >
> 1. Obtaining funding from government/companies for doing research.
>
>
>
This would not usually give the researcher more money, except in the long-term indirect sense of advancing their academic career and helping to get a promotion. Research grants are paid to the university employing the researcher and they will generally pay some or all of the salary of the researcher. The academic does not get money directly from this --- instead, the university takes the money and uses it to subsidise the salary costs of the researchers on the grant. (There are some exceptions to this, such as direct summer salaries in the US.)
Of course, if a researcher wins a major grant, this can be used to make a case for promotion from the university, and it can therefore lead to an indirect wage raise for the academic in question. For a researcher who is not already a full professor, winning a major research grant will usually lead to a promotion and consequent pay raise, though the pay raise is modest relative to the amount of the grant.
>
> 2. Forming their own consultancy firms or doing freelance consultancy for external entities.
>
>
>
Many academics have strong technical expertise in areas that are useful to external entities, and some make money from external consulting work. Universities sometimes even allow academics to engage in external consulting work as a portion of their work time, which means that some academics can make money from consulting work within their ordinary academic hours.
Though I do not have data on this kind of work, my observation is that only a small proportion of academics do external consulting work outside their regular academic job. Even for those that do this, the income is highly skewed --- most academics who do external consulting earn a modest amount of money from this (substantially less than their academic salary) but a few make big money. In my observation the latter are mostly academics who work in economics/finance and moonlight working for big finance companies, or engineers doing work for big-name industrial firms.
>
> What other viable sources do they have for earning money?
>
>
>
Other avenues relating to academic work are writing textbooks or popular books, creating blogs/websites/YouTube channels, etc. Again, only a relatively small number of academics earn any serious money doing this, but a few manage to make big money. Beyond these items, academics can of course apply for second jobs just like anyone else, but this is rare, since academic work tends to bleed into weekends and holidays already.
Upvotes: 3 <issue_comment>username_4: In addition to what has been stated above (No, you don't get income from government/company grants, yes a few get a modest amount from consulting, you can earn some royalties from books), it may be possible to earn something from patents. Rules vary from university to university, but at my institution licencing income from patents is shared between the named inventors and the university. Again, it is very rare to earn substantial amounts this way, but I do know of one academic who has earned tens of millions from a patent they took out while working for the university. The only other academic I know with a patent earns only a nominal amount from it though.
Upvotes: 0 <issue_comment>username_5: The standard sources I know about from rather direct knowledge:
1. summer income from a grant
2. royalties from writing an advanced monograph
3. Teaching a class beyond one's teaching load
4. Short-term work for certain government agencies
5. Taking on an administrative role for a few years
6. Writing a review of a textbook
I should point out that the royalties thing does not work that well anymore, for reasons you can find on Google. Also, (1), (3), and (5) may not be possible in many locations. In particular, (1) seems to be mainly a US thing.
Upvotes: 2 <issue_comment>username_6: I feel a lot of the existing answers are rather US-based. Here is my experience in Europe:
* **External work** - my university allows external employment for up to 20% on top of my university duties. Some people have small appointments at companies, some do some teaching at other universities, some run their own small company (often offering consultancy services), but most do not use this 20% at all (since 20% more work on top of an already demanding university job tends to not leave a lot of time for family and hobbies).
* **Monetizing your research** - more of a theoretical option for most people, but in principle my university claims no IPR (intellectual property rights) for any research we do. Hence, you are free to, for instance, patent some of your work and licence it to companies. In my field (computer science) this is exceedingly rare (I know nobody who has generated noteworthy money that way), but in other fields this may be a realistic option.
* **Paid service** - there are some "academic services" for which you get compensated. Examples include reviewing for some grants or serving on external PhD or appointment committees. That said, the amount of money you are going to get this way will likely be at best a small bonus, not a real "income stream".
* **Administrative posts** - some higher academic posts (say, department head and upwards) come with an increased salary while you are serving. However, you should not expect getting rich this way either, it's normally a rather small bonus to your standard salary, nothing that will substantially change your life situation.
Notably, since we have 100% contracts the notion of paying yourself from your own grants does not exist. There may or may not be schemes to siphon off money from grants into your own pockets, but these are either downright illegal or at least heavily frowned upon.
All in all, for most people the salary that they get from university is the money they make, with the equivalent of a few hundred USD per year in bonus income from routine external service. Exceptions exist, but these people do something unusual in addition to standard academic work (such as running a successful company on the side).
Upvotes: 4 [selected_answer]<issue_comment>username_7: Some of these answers are only half true. These days, university faculty have to look to more than the university for their revenue stream. Its true that a faculty member can have a grant subsidize your teaching, which means that you aren't teaching and can spend all your time on research, which furthers your career, your reputation, your promotion or your next book. However, I have received grants where I received no time off from teaching and have instead received as much as 20-25% extra salary, which when making over a six figure salary is no small chunk of change. For some academics, book sales can also be quite enriching--think of <NAME>'s How to Be an Antiracist or Stamped from the Beginning. If you write for a more popular audience, and then pursue multi media around your book project, you can make a decent revenue stream. Even if it is just an academic book, I net about $1,500 a year from my latest book and have done so for the past five years. Then, many academics command speaking fees--$2,000-$5,000 for someone of note, and $10-25 K for someone with really big national standing. And that's for every single talk.
Other academics I know have established lucrative consulting firms, and I have seen some do quite well, from economic policy firms to historians advising Hollywood on historical films. My university has no rule on how much one may make on a separate consulting firm. All in all, the days of academics quietly sitting in their office contemplating deep thoughts over theoretical works is largely done and over with. The public largesse has largely abandoned academia and academics, and we have to fend for ourselves. Many academics at top institutions have come to understand that the university is only part of their salary base. The neoliberal university has adopted the business model, and so have a lot of academics.
Upvotes: 2
|
2021/08/09
| 2,032
| 9,137
|
<issue_start>username_0: My co-authors and I sometimes have different views on the organization of our papers (in mathematics). While some of them insist on writing down all the details, I try to keep emphasis on the main lines of thought and avoid diverging from them whenever possible.
This difference of opinions comes up in several aspects: whether to discuss all the results (positive and negative) that we obtained in the project or select only the most important/interesting; how much details should the proofs contain; etc.
My two questions are:
1. if there is a way to make a paper contain all the details and at the same time have a clear structure not obfuscated by these details? Are there any organizational tricks that help to find some balance?
2. reading and writing research papers, what do you value more its comprehensiveness or clarity?
I would like to emphasize that this question is only about what should appear in the final draft after all directions of research have been explored and all proofs have been completely checked and re-checked.<issue_comment>username_1: There is no hard-and-fast rule, because good communication is as much an art as a science. However, in my view, the majority of mathematics papers are written with too little detail. It is possible to give too much detail, but in practice I almost never see it. So I would encourage you and everyone else to err on the side of more detail, rather than less.
If you are worried about details distracting from the main thrust of the work, the best way to mitigate this is to organize the paper very clearly, with lots of signposting. Divide long arguments into lemmas whenever it makes sense to do so. This allows the reader to skip the details they don't want to read.
Upvotes: 3 <issue_comment>username_2: Realistically, none of your readers will read past the introduction. Write a long introduction with your main lines of thought. For a long paper, people frequently now even write an introduction to the introduction, so that the first page or two says what the paper is about, then the next four or five pages lay out the main ideas of the paper, and then the remainder contains all the details.
Particularly technical lemmas whose proofs are long but not enlightening can be proved in sections at the end of the paper. In particular, you do not need to always give the proof of a statement before you use it, though if you defer a proof you should be careful to make it clear that you do not have a circular proof.
As to what level of detail to include, my suggestion is to pick some specific person as your intended reader for the paper, and include all the details that will be necessary for them. Do not pick the leading expert in your field or one of your close collaborators. Frequently it is best to pick a graduate student, and it's okay if you're not more specific than X's hypothetical PhD student who is just starting research.
Upvotes: 3 <issue_comment>username_3: #### Appendices, footnotes/endnotes and supplementary materials are your friends
I am in your camp here, insofar as I prefer to write in a way that "flows" well by sticking to the main lines of thought and avoiding diverging into side issues that distract the reader. The first three rules of good writing are: *clarity*, *clarity*, *clarity*. Consequently, almost every mathematics paper I write shifts substantial amounts of material out of the main body of the paper. I like to put proofs of theorems in appendices (unless the proof method is important to understand the material in the body), side aspects of the material and minor caveats into footnotes, and coding and other materials into supplementary materials. This allows me to write in a way where the main body of the paper flows naturally and gives the reader a clear and intuitive discussion of the research problem at issue.
* **Don't be afraid to give heuristic assistance/discussion:** At the extreme end of brevity, some mathematics papers and notes are essentially theorem-proof-theorem-proof, etc., and it can be difficult for the reader to see how the authors figured out to use the methods under use. In my view, it is useful to break this up by giving at least some heuristic discussion to assist the reader to understand why a particular theorem or method of proof was chosen, and what motivated it. It is okay to speak heuristically here, and a small amount of additional detail on motivations, methods, etc., can make a world of difference in understanding the topic.
* **You can often put proofs in an appendix:** In some cases a mathematical proof can highlight important aspects of the subject under discussion (and in this case you might want it in the body of the paper), but often the proof is there just to ensure that the theorem is proved. Moving proofs to an appendix allows you to just assert your theorems and then discuss their significance, and interested readers can look up the proof if needed.
* **Footnotes/endnotes can be used to prevent interruption of arguments:** Often you want to have a main line of discussion where you speak in general terms, in a way that does not tax the attention of the reader. In some cases there may be some minor technical caveats or other minor points you want to make, but you are afraid that they will interrupt the flow of argument. In such cases, footnotes provide a useful way of dealing with a minor caveat without interrupting the flow of argument for the reader.
* **Supplementary materials are good for, well, supplementary materials:** Some mathematics papers are augmented by computer simulations, reproducible coding, etc., and in such cases these should generally be made available in supplementary materials. It is rarely useful to include large amounts of computer code in a paper (unless part of your task is to explain how to implement something in computational software) so it is usually best to move this out of the body.
Upvotes: 3 <issue_comment>username_4: I would say that clarity should be the king in the sense that if you want the paper to be read beyond the introduction and built upon, the best thing you can do is to present the main idea (or ideas) in *the simplest nontrivial setting* where you can spell everything out with all due details without making it a technical nightmare.
The standard trap people fall into is trying to present the strongest and the most general result they can prove. Nine times out of ten it turns out that the direction of the generalization they chose is *not* the one the other people get interested in later and one has to spend hours or weeks undoing all the sophisticated build-up and reducing the proof to its core from where everything else has to be rebuilt in a completely different way. So clearly presenting this core from the beginning would be very beneficial, IMHO. The word "clearly" means "with all relevant details and with minimal number of outside references"+"in as easy to comprehend way as possible".
By tradition, a mathematical paper has to contain some new result (preferably a solution of some reasonably old unsolved problem) just to prove that the techniques you promote are worth something but, for majority of articles, the results themselves are of much less interest than the new twists in the proofs. Your "central theorem" will, probably, be either forgotten or greatly improved in a few years but some little lemma may easily get an independent life and the simpler that lemma is, the more chances it gets (provided, of course, that it is above a certain threshold in terms of novelty and ingenuity).
The main "organizational trick" is to try to write about one thing a time. There are cases when you can and should set up "explosive fireworks of ideas" but they are rare, few people are capable of that and even fewer know how to do it in a way that illuminates things rather than blinds and confuses the reader. So, the (in)famous KISS principle is a good rule of thumb on most occasions. That one and the classical "divide and conquer" approach can work miracles as far as the clarity of communication is concerned. I heard once that a bad lecturer is always thinking of whether he forgot to include something into the course and a good one is worried if he put in something unnecessary. The same applies to mathematical writing if you want your papers to be read rather than just quoted in long reference lists under the category "and Vasya also was there and did something that nobody has ever really looked at but it would be impolite not to mention him".
Proofs should be complete and fully spelled out. It is often said that two omitted trivialities in a row can create an impenetrable barrier to the reader. Prove a weaker or a less general result than you can, if you feel that the argument becomes too entangled otherwise, but prove it in full and spend some time thinking of how to present it in the most logical and easiest to understand way. One hour spared by the writer often results in a few days wasted by the reader (which should also be multiplied by the number of readers).
Upvotes: 3 [selected_answer]
|
2021/08/10
| 517
| 2,130
|
<issue_start>username_0: In mathematics, it is standard to use "we" instead of "I", even in single author papers.
Is this the case for job and grant applications as well?
For instance, suppose I wish to say something to the effect of:
>
> To study (problem), **I** will extend (technique) from **my** past work [citation].
>
>
>
Here are several considerations, although I'm sure there are many more:
* I want to emphasize that I am the one driving the proposed work forward. Using "we" in place of "I" emphasizes this less. On the other hand, I do intend to collaborate with others on this work.
* The past work is not mine alone, even if I was the "main" contributor (whatever that means). I could omit "my" entirely, but the fact that (technique) is something I've worked on before is important in justifying why I am suited to study (problem), and I don't think people will necessarily take the time to read the reference entry. I could also be more explicit and say something like "past collaborations", but this comes at the cost of wordiness.
TLDR: Is there a convention for pronoun use in job applications and grants? Are there any resources which discuss best practices for writing choices like this in job and grant applications?<issue_comment>username_1: I don't know how universal this is, but I have generally observed people using "I" in job applications, and "the PI" in grant applications, in place of the "we" that is used in papers.
Upvotes: 2 <issue_comment>username_2: In general using "I" in job and grant applications is perfectly fine.
For grant applications one possible alternative is to introduce an acronym or short name for your project (this is often mandatory for EU grant proposals). Many instances of "I will do this/I will do that" can then be replaced by "PROJECT will do this/PROJECT will do that". Besides getting around the sometimes awkward use of "I", this also helps emphasize that funding the project is essential to get these things done.
(The use of "I" on the other hand can help emphasize that you are a crucial ingredient in bring the proposed work to a success.)
Upvotes: 1
|
2021/08/10
| 1,146
| 4,965
|
<issue_start>username_0: I am currently a first-year Ph.D. student in computer science in Europe. I have published some papers in the top-venue conferences.
Recently, we have been looking into a new field that requires a strong mathematical background, but unfortunately, my math skills are not that steady. Whenever I encounter something I don't understand, I am always confused about whether I should have learned it in the basic courses, and I don't know how to catch up on it. I am also worried that my supervisor would think me less qualified if I admit that I am not sure about something.
I am wondering how people manage this. Should I just be straightforward with my supervisor? What would he think of me, and what would he probably do?
--- Edit ---
Thanks for all of your replies and suggestions. I especially appreciate the answers/comments from the perspective of supervisors since I really have no idea how my supervisor would react to it. I am probably overthinking and feeling less confident in this new field. It is also good to know that I am not the only one who faces this problem.
After all, I think I would first try to find some references and teach myself, though it may be very challenging for me, since I am not sure which courses/subjects would cover the specific techniques I need for my research. If it still doesn't work out, I would show what I have tried during the past week and see if my supervisor could point me to any further resources.
> I am wondering how people manage this. Should I just be straightforward with my supervisor? What would he think of me?
>
>
>
Yes, you should be straight-forward with your supervisor. No good will come out of pretending you understand something, or know how to do something, when you really don't. Even if they "think less" of you, they will find out eventually, and this way you have at least done the right thing and alerted them of the problem as soon as you noticed it.
>
> What would he probably do?
>
>
>
Well, that depends on the supervisor, their expectations, and what kind of knowledge we are talking about. My expectation is that your supervisor will recommend you some resources to study, *and/or* teach you the basics themselves, *and/or* ask somebody else in the lab (for example a postdoc) to teach you the basics. Maybe they also decide that you are after all not the right person to explore this new field with, and recommend you to change your research direction to something that's more aligned with your prior knowledge.
Independently of how they react, it's better than stumbling forward without knowing what you are doing. That's how failed PhDs start.
Upvotes: 4 <issue_comment>username_2: I once asked my supervisor "When will I know enough to stop having to look these things up?" and the simple response was "Never!". Everyone (even supervisors!) has to look things up, everyone has to revise things at some point. There is no shame in this. It might be that you ask your supervisor only to find that he/she promptly goes and looks it up.
Your supervisor will probably not think badly of you for some maths naivety. You lack knowledge, but you have enough self-awareness (and knowledge) to identify this fact. You now need to fill the gap in your knowledge. You can either do this by asking your supervisor, as you've suggested, or you can try to do it yourself first.
Don't worry about whether or not you **should** have already learned this. Maybe it was covered briefly but you didn't know you'd need it in future so you've forgotten it, maybe you didn't even know it existed in the first place. No one can blame you for that.
Figure out if you can learn it yourself. What do you already know? What resources do you have? What do you need to know to accomplish the task? Your supervisor is **one** resource but are they the best one for this purpose? It might be that they can point you towards some resources, but maybe you can find these yourself if you try. One way you *might* annoy your supervisor is by using them as your first resource for every single thing. Try to educate yourself first so that you know which questions to ask when/if you do ask your supervisor. Also, consider what kind of resources you need: do you need a personal one-on-one tutor or an online course or just a series of tutorial videos? Does your university offer sessions with a maths tutor or will it have to be your supervisor?
Upvotes: 3 <issue_comment>username_3: Just do it if you cannot figure it out yourself! I'm so much happier when this conversation happens as it firms something that needs to be firmed in the foundations for my student!
The worst thing is that something basic ends up continually being an issue and I don't know how to help the student and something lags for months. Nip it in the bud immediately and directly. That really makes projects and research actually speed up from this issue rather than slow down!
Upvotes: 2
|
2021/08/10
| 821
| 3,477
|
<issue_start>username_0: In the United States, I think it is required by law in several states to make salary information at universities public. In general, in most western universities, it may not be very hard to find salary information, particularly the publicly-funded ones.
Is there any benefit to universities by making this information public?
I am interested in the kind of "active" benefits which universities that don't publish salary information (and are not required by law to do so), *lose out on*. Further, if universities were not required by law to make this information public, would there be any benefit (to them) in continuing to do so?<issue_comment>username_1: In every instance of which I am aware, the publication of salaries in US universities comes from a combination of two factors:
1. Transparency laws that require publication of salaries of all state government employees, and
2. State universities where every employee is technically a state government employee.
Publishing salaries is nice for transparency, but often creates management headaches (e.g., resentment between people with similar experience but different pay levels). As such, I would expect that if publication of salaries was not required, then most universities would either stop doing so or would drastically limit the number of people to whom it applied (e.g., only to higher level administrators).
Upvotes: 5 [selected_answer]<issue_comment>username_2: >
> Is there any benefit to universities by making this information public?
>
>
>
Salary transparency benefits voters by informing them. They find out if politicians are using their tax dollars well. If public universities did not provide salary transparency, they *might* lose the support of voters, and subsequently their tax revenue.
Institutions not accountable to taxpayers would often prefer that salary information not be public. The wealthiest universities might see public salary information as a hiring tool. Employees can disclose their salaries if they wish.
Upvotes: 3 <issue_comment>username_3: As others have said, universities and government agencies generally publish their salaries in order to comply with transparency laws. Nepotism, for example, is harder to get away with if all salaries are publicly posted. But from the university's perspective, this has the potential to:
* Generate bad press, if a reporter feels (rightly or wrongly) that an employee is overpaid, and
* Give employees more leverage during salary negotiations.
So I seriously doubt that most organization would release salary information if they didn't have to.
By the way, it may seem like universities' opposition to transparency laws are entirely selfish, but let me make one counter-example. If <NAME>'s salary is triple the market rate, it may seem self-evident to an outsider that the university is not "using their tax dollars well." But it may be that the university has already concluded that replacing <NAME> with three market-rate deans would be a net detriment to the university; <NAME> is "just that good." In this case, the public outrage at <NAME>'s salary would be entirely misplaced, and the university would be forced to choose between doing the right thing or doing what looks good. This [interesting Ted Talk](https://tedsummaries.com/2015/03/16/dan-pallotta-the-way-we-think-about-charity-is-dead-wrong/) gives a good discussion of this phenomenon as it applies to charities.
Upvotes: 2
|
2021/08/10
| 937
| 3,719
|
<issue_start>username_0: Currently, there are plenty of options to publish an open-access paper. From using purely open-access journals (free of charge or for a fee) to opting-in for an open-access license in the traditional journal ([usually for a fee](https://en.wikipedia.org/wiki/Hybrid_open-access_journal)). However, I am not sure about efforts/options to make already existing papers open-access by means of re-licensing or double-licensing.
Typical use case:
* A researcher has some papers published in a traditional journal which some time ago became hybrid open-access. Thus, new publications have an option to be published in an open-access fashion.
* The researcher has the funds (source is not important) and the will to make some of their already published contributions open-access.
* Currently, the best existing option to make the publication more accessible (not open-access!) is via [self-archiving](https://en.wikipedia.org/wiki/Self-archiving), depending on the policies of the particular journal/publisher. [This is a great option](https://academia.stackexchange.com/q/63961/56594), but certainly not an ideal one.
I am not aware of such options existing for any major journals/publishers that I have papers published in. Do they actually exist? Is there something that prevents publishers from offering *open-accessification of existing articles*? Are there more materials and an established point of view on this topic that I was not able to find?
> Is there something that prevents publishers from offering
> open-accessification of existing articles?
>
>
>
No, there isn't. The current license for the published manuscript is the result of a contract between the authors and the publisher. This means creating a new/additional license is possible if both the authors and the publisher agree. (You need agreement from your co-authors!) Only revoking an open-access license is impossible.
Demand for relicensing is probably low and thus publishers don't mention it. However, I would expect them to comply if you ask them. The fee for Gold Open Access is basically additional revenue.
Upvotes: 4 [selected_answer]<issue_comment>username_2: As username_1 [points out](https://academia.stackexchange.com/a/173249/17254), licensing terms can, at least in principle, be renegotiated. I'm aware of one academic publisher that provides an option for this: Springer-Nature has (what appears to be a [trial run](https://www.springernature.com/gp/open-research/policies/journal-policies) for) a policy allowing [retrospective open access](https://support.springernature.com/en/support/solutions/articles/6000244492-order-retrospective-open-access) in certain case:
>
> From January 2021, authors who published primary research articles in
> Nature or the Nature-branded Research Journals can choose to make
> their articles open access (OA) retrospectively, however only primary
> articles submitted in 2021 will be eligible for the retrospective OA
> option.
>
>
> All non-primary research in these journals (e.g. Reviews, Comments,
> News & Views) is not eligible for Open Access under our current
> policies.
>
>
> After submitting the form below, you will receive an Open Access License to Publish (OA LtP) Form that needs to be signed and returned. If you order open access for your article, you will be charged an article-processing charge (APC). The invoice will be sent to you after the license to publish document has been returned.
>
>
> We will publish a correction article and update the metadata and full text of the original article, which means that copyright, license and open access information in the article will be updated.
>
>
>
Upvotes: 3
|
2021/08/10
| 4,043
| 17,078
|
<issue_start>username_0: I am currently reviewing a theoretical CS paper that is incredibly technical and mathematical. I’ve been reviewing the paper for longer than a year now, and I am finally recommending the paper to be accepted as I believe that it is a significant push in a certain problem in my subfield.
However, there are numerous instances in the paper where the researcher evaluates closed-form formulas for integrals/summations that are incredibly difficult to do by hand (they involve extremely complex substitutions/transformations/algebraic manipulations and, quite frankly, non-"human-like" steps). Just as an experiment, I plugged one of the integrals into Maple and it gave me the exact same steps as their proof. I found three such occurrences where the integral/summation proofs are simply copied from Maple.
I’m inclined to call out the researcher on this, but I have no evidence except what I’ve presented here. What makes me more frustrated is that I wouldn’t have changed my acceptance decision in the slightest had he simply quoted the integral/summation with a note that it was simply evaluated/verified in Maple. Should I just skip this from my review and focus on more important aspects of the paper?<issue_comment>username_1: As @Roland says, there's no harm in stating your observation. You can easily frame your suggestion so that there's no problem if you turn out to be wrong, e.g.
>
> The paper presents complex algebraic solutions without comment (steps x, y, and z) that were likely done with the aid of a computer algebra system (such as Maple). If this is indeed the case, the authors should cite the software used.
>
>
>
As @DaveLRenfro comments, you could also point out that *you* were unable to reproduce/verify the results by hand/without a CAS (but maybe the authors are just way better at integration than you are ...). If the authors want to double down and say "no, we did these by hand", that's their problem.
Upvotes: 5 <issue_comment>username_2: If the proofs are short enough one can verify them by hand, then I view using a computer algebra package similarly to using a calculator or dictionary. I think citing is optional.
On the other hand, if someone says "routine use of the trig formulas and standard calculus" will solve this equation, and it turns about to be 200 pages of computer generated formulas are needed, then the source code should be available and the computer algebra package cited.
You are free to suggest the paper would be better if it mentioned the use of a computer package. I do not think "calling them out" is appropriate.
Also, if there are more natural proofs, then the authors should be encouraged to spend a little time looking for them. There are lots of proofs that are generated by humans that are awkward, unnatural and hard to follow. It is not always worth it to spend a month to find a nicer proof.
Upvotes: 6 <issue_comment>username_3: Maybe it’s a generational thing, and maybe my mind is playing tricks on me, but I get the impression that young people today are very quick to “call out” other people for actions that they perceive as moral transgressions of various sorts, where in fact those actions range a wide spectrum from actually ethically problematic behavior, to mild sloppiness, ignorance or negligence, to “offenses” that exist strictly in the imagination of the person doing the calling out.
This can get a bit tiresome.
My suggestion is that you try to adopt the habit of assuming the best about people's intentions until they leave you no choice. If they do X, don't immediately jump to the worst possible explanation as to what their motivation was for doing it. And don't immediately assume that everyone has (or should have) the same beliefs or moral code as you about every single thing, including such head-scratching, esoteric questions as whether Maple should be cited for the steps of evaluating an integral. (**Edit:** this paragraph is addressing some things said by OP in comments which were later deleted, so it may sound off the mark to some. I'm leaving it here to preserve some of the context of my original answer.)
Is it appropriate to point out that it might be advisable to cite Maple for those calculations, if indeed the authors of the paper used Maple? Yes, sure, that certainly sounds like a way of making yourself helpful as a reviewer. But “calling out”? No, I think that phrase and action are sadly overused these days, and should be reserved for much more problematic circumstances.
**Edit:** I gave some thought to the question of whether it’s actually necessary to cite Maple for the calculations you mentioned. The thing to keep in mind is that there is a well-developed theory of symbolic integration, with standard algorithms that are implemented in the major CAS packages. I don’t know enough about Maple to know whether it also has some additional, especially novel, proprietary algorithms that make it possible for it to evaluate integrals that other packages can’t. But in the current case that doesn’t sound so relevant; the main point is that these days there is no great honor to be had among mathematicians for the “discovery” of how to evaluate a given integral, assuming the steps do not involve some highly non-standard transformation or technique. These days if someone says “I evaluated this integral, here are the steps”, no one will care very much how they found the steps, since it is assumed that they relied on known techniques (and software tools implementing those techniques) unless that’s clearly not the case. The situation is very much analogous to writing out the numerical value of the square root of some number without explaining what brand of calculator you used. No one thinks that calculators deserve special mention for knowing how to calculate a square root.
The bottom line is, if the Maple integral evaluations used standard techniques that are widely known in symbolic integration circles, I don’t think the researcher has a special duty to cite the software. It may be nice of them to do it, and it may even make their paper a bit better if they did, but from an ethical point of view, I don’t see the lack of a citation as a major issue.
Upvotes: 7 <issue_comment>username_4: It seems like you are treating this as a homework assignment where the instructor said "all work must be done by hand", but then someone went and did all of the work with a calculator. While doing the work with a calculator would be wrong in that instance, here I don't see what the issue is.
Coming from a physics/mathematical modeling background, I have seen countless papers write out derivations of equations where huge steps are left out or only briefly described. Never once have I ever seen a paper say "we did all of this work by hand", or, "we used Maple, Mathematica, etc. to make sure we did our math correctly". You just assume the math was done correctly, and you don't really care about how it was explicitly done.
The only reason I can see this mattering is if the method of "evaluating closed form formulas for integrals/summations" was the whole point of the paper, but the results were actually obtained a different way. e.g. the authors claim to use their new method/algorithm, but really they just plugged things into Maple, then that would definitely be dishonest. But if the equations are just being used for other things and the derivation is not the point of the paper, then I honestly don't see how they got the derivation done matters at all.
So I will agree with others; there is no need to call anyone out. From what I can tell nothing bad is going on. The most I would do is just suggest giving credit to any software that was used.
Upvotes: 4 <issue_comment>username_5: I would say it really depends on the circumstances.
I am one of the developers of Maxima, and I also routinely use Maxima in calculations employed in physics papers. Maple, too, and occasionally, Mathematica.
Computer algebra can be used for trivial things. By way of an admittedly trivial example, I may run into a trigonometric expression like `4*sin(x)^3 - 3*sin(x)`, which looks vaguely familiar, so without giving it a second thought, I'd feed it to Maxima, `radcan(trigreduce(4*sin(x)^3-3*sin(x)))`, and presto, I get `-sin(3*x)` which goes into my paper. Would I mention that I used Maxima for this? No more than I'd mention that I used a calculator to calculate, say, 10/7 = 1.4286. Same goes for relatively straightforward integrals and other simplifications, even if the intermediate results are difficult for humans to keep track of.
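The trigonometric reduction in the previous paragraph is easy to verify independently in any CAS. Here is a minimal sketch using SymPy (my choice of package purely for illustration; the answer itself used Maxima, and the identity, of course, does not depend on the tool):

```python
import sympy as sp

x = sp.symbols("x")

# The expression from the example above: 4*sin(x)^3 - 3*sin(x)
expr = 4 * sp.sin(x) ** 3 - 3 * sp.sin(x)

# trigsimp rewrites powers of sin/cos as multiple-angle terms,
# recovering the identity 4*sin(x)^3 - 3*sin(x) = -sin(3*x).
reduced = sp.trigsimp(expr)
print(reduced)

# Independent check: the difference from -sin(3*x) simplifies to zero.
residual = sp.simplify(expr + sp.sin(3 * x))
print(residual)
```

This is exactly the kind of one-line mechanical check the answer describes: quick to run, quick to double-check in a second package, and hardly worthy of a citation on its own.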
The one potential issue with this is that computer algebra systems are programmed by humans (like yours truly) and may not be free of bugs. But that is kind of a user beware thing; it is best to check critical results by recalculating them in more ways than one, perhaps using more than one computer algebra system. In the end, it's no different from making a mistake when doing calculations on paper: always double check your critical results!
In short, in these situations I would not cite the use of computer algebra or expect, as a reviewer, an author to cite its use.
The situation is very different when the use of the computer algebra system is more elaborate. When it's not a single command line but a lengthy, elaborate program in that computer algebra system, designed to set up, solve, and simplify a problem. In that case, yes, I'd very much expect not so much a cited reference to Maple (or whichever CAS is used) but perhaps the actual computer algebra code included in the paper as an appendix with a brief explanation or, if it is too lengthy to be included, then deposited in a public archive and referenced. Otherwise there is no reproducibility: how do we even know that the program (perhaps written by a researcher with limited programming experience as his expertise lies elsewhere) is even correct?
From the wording of the question, however, I get the impression that it is a case of the first variety: the question itself suggests that once the reviewer recognized that the solution was likely obtained using Maple, he was able to obtain the same result with ease.
So no, the simple fact that a result was copied from Maple is not a reason to cite Maple, just as copying a result from the display of a calculator is not a reason to mention that calculator.
Upvotes: 5 <issue_comment>username_6: Common mathematic techniques are not special because of who carries them out. If the steps are valid, that's all that matters. It looks like those are all mechanical transformations that by themselves don't make the paper special. It's just usual mathematical drudgery that has to be done correctly, but otherwise is of no importance to the novelty of approach. That Maple or some other CAS actually "did it" doesn't change anything whatsoever. The steps in a proof stand on their own.
Now, the CAS may be implementing some special algorithm for a tricky class of integrals etc., that may be unique, new enough and citeable, but 1) the help pages for the functions used would probably document that if indeed the algorithm was special enough to warrant that, and 2) figuring out that any particular algorithm actually got used requires way more investigation than is warranted in such a paper - CAS systems won't normally tell you what algorithms they used (here: "algorithms" = "citations to where said algorithms got published first"). You'd have to ask a specialist in CAS algorithms to take a look at what was done by the CAS, and whether there's anything there that's cool/interesting enough that a citation would be warranted. Otherwise you may as well be citing dozens of foundational CAS algorithms. So that's not really feasible.
Citing the CAS as a whole is like "citing" someone who checked your computations. You can thank or acknowledge them - sure, but you won't be citing it.
I think that the idea of calling someone out for using a CAS the way others use a calculator is mostly preposterous because it is absolutely unproductive. At the end of the day, nobody gets any value out of it. The reason for citations is to let others learn more about what the paper is built on, and to give due credit. When it comes to using common technical software, credit is usually given in hard currency... and that's all that's needed unless the software is specialized. CAS isn't really. Entirely mechanical, "garden variety" tools like CAS are not exactly a dime a dozen, but how would citing them help, and moreover how would "calling someone out" over it help anyone? It just sours things up. I fully expect that anyone wanting to reproduce or fully grok the results in the paper will have to know enough that knowing how to use a CAS, and which CAS may be sufficient for the task, will be the least of anyone's concerns.
If the paper built some derivation specifically on the "extraneous" aspects of the CAS system, e.g. if its approach integrated some aspects of, say, Mathematica's language and idioms - that'd be different. But here, that wasn't the case. It doesn't look like the paper's discoveries used scripts/programs written in Mathematica/Maple, leveraging the key aspects of the environment. They just did the integrals and such using Maple. Maple itself doesn't matter here it all.
Upvotes: 3 <issue_comment>username_7: One of the frustrations of academia is that as a reviewer, you often invest large mental and creative effort and time as an anonymous referee to verify and even make understandable important results, for no credit or acknowledgement. The metaphors "panning for gold" and even "pearls before swine" apply.
It's therefore very easy to get frustrated and grumpy at the *n*th instance of the authors making it unnecessarily hard for you and future readers.
From the discussions in comments on the question and other answers, it sounds like once the frustration has dissipated, a comment in your review along the lines of the following might suffice:
>
> Lemma 3.21 [or whatever] is very technical and hard to follow. I was able to verify the proof using a CAS (Maple), which precisely generated equations [...] through [...], leading me to believe this proof was generated with CAS in the first place. It would be easier for the reader to follow -- and provide a more realistic impression of the specific contribution brought by this paper -- to note a CAS was used, and perhaps cite the specific CAS/version.
>
>
>
Upvotes: 3 <issue_comment>username_8: Each person's use of that famous Canadian1 mathematics software named after a tree2 requires agreeing to the company's EULA. While not all provisions of all EULAs are enforceable in all jurisdictions, the text of the EULA does communicate intent and wishes of [REDACTED TREE]Soft3.
[In section 8, GENERAL LICENSE RESTRICTIONS](https://de.maplesoft.com/documentation_center/Maplesoft_EULA.pdf):
>
> The License of all Software hereunder is subject to the express restrictions set forth below in addition to the restrictions
> imposed by the applicable License Option, Installation Type and Order Confirmation. Without the express written
> permission of [REDACTED TREE]Soft, YOU shall not, and shall not permit any Third Party to:
>
>
>
...
>
> (h) use [REDACTED TREE]Soft’s name, trade names, logos, or other trademarks of [REDACTED TREE]Soft, or any of its Affiliates or
> Licensors, whether in written, electronic, or other form, without [REDACTED TREE]Soft’s prior written consent;
>
>
>
So under the rules of [REDACTED TREE]Soft's EULA, nobody using their software should ever mention the name of the software (that famous Canadian mathematics software named after a tree, which has a trademarked name) or the company name without prior written consent of [REDACTED TREE]Soft.
While I am not one to determine if that contract term is enforceable where you live, it does make it really clear that [REDACTED TREE]Soft doesn't want you to use their name. Doing so in a publication would be against their clear request in their EULA, unless you first cleared the use with [REDACTED TREE]Soft itself.
Note that the contents of your workbook:
>
> (a) reproduce, transmit, modify, adapt, translate or create any derivative work of, any part of the Software, in
> whole or in part, except for any content developed by users in Maple Worksheets/Documents that are not
> part of an electronic book Software product or as otherwise expressly permitted in this Agreement;
>
>
>
are explicitly permitted to be shared without the consent of [REDACTED TREE]Soft.
At a moral level, [REDACTED TREE]Soft clearly does not want its name mentioned. So users of that famous Canadian mathematics software named after a tree should respect its wishes and not mention its name.
Unless, of course, you have prior written permission.
It is just being polite.
---
1
2 Not an oak.
3 All uses of [REDACTED TREE]Soft in the quote below are redacted by me, not in the original document. The tree is also not a beech.
Upvotes: 2
|
2021/08/10
| 351
| 1,568
|
<issue_start>username_0: I have conducted two hands-on training programs. I cannot decide whether or not I should mention this training experience in my SOP. I have quite a few publications, some conference papers, and 2 years of research experience. Will the addition of conducting the training programs as a "teaching" experience give my SOP more weight or should I avoid it?
The training courses were one month long each. Please provide any suggestions. Thanks in advance.<issue_comment>username_1: I think you should base sharing this information on whether it would be useful and showcase you as a strong candidate for the program to which you're seeking admittance. For example, if teaching is not a major component of the program, you might not share it. If sharing this information will make you a stronger candidate, then you should.
Upvotes: 1 [selected_answer]<issue_comment>username_2: Don't confuse your CV with your SoP. The statement of purpose should be about the future. What are your goals and how do you propose to achieve them? To some extent, you can mention past work, but only if it points to likely success in meeting those goals. I doubt this will help support your work as a researcher, which is the immediate short term goal.
I assume you don't want to run training classes as a career, so the information, which might be listed *somewhere* is better left out of the SoP so that you can stress more relevant things.
---
In the CV you might have a section for "service" or for "teaching related activities".
Upvotes: 2
|
2021/08/10
| 625
| 2,660
|
<issue_start>username_0: I have just completed my bachelor's degree and wish to pursue a master's in the field of robotics. However, during my bachelor's, I was not able to build a good research profile and do not have any publications to my name.
Therefore, I would like to join a project group in a university and work under the guidance of a professor to build my research profile.
I have a two-part question.
1. Is such a thing even possible?
2. If yes, how should I approach professors about it?
I understand this might not be the best place to ask, but since everyone in this community is related to academia in some capacity, your guidance would really mean a lot<issue_comment>username_1: What you are asking for is very difficult to achieve unless you bring some special expertise or talent that the professor would find valuable in their work. You are, in essence, I fear, asking for an education and use of the professor's time and knowledge without actually being part of their institution. It is all cost and little to no benefit for the professor.
Professors, on the other hand, might be interested in collaborations among people, even students or independents, as long as there is some common skill-set and understanding of the problems to be solved. Your status as independent has no bearing in this case. But what you know and what you can contribute is very important.
Some professors, if they hold a research grant in robotics might be able to hire technicians to help in the work but they need skills and probably won't learn too much on the "research side".
So, it is difficult to encourage you.
However, it might be possible to take a course or two as a non-matriculated student prior to applying for a masters. And, in some places, such as the US, research skills aren't normally required for entry into graduate programs. Building a "good research profile" is difficult in many places.
And, if you have the skills and background to convince a professor to work with you on research, you probably have the skills and background for successful admission to a masters (or even a doctorate in the US).
Upvotes: 2 <issue_comment>username_2: This is possible and called “volunteer research assistant”. Write to professors and ask to get research work experience. You might even get paid.
Keep emails short and send as many as you can. Don’t worry if you don’t get response, professors might miss your letter. When writing, mention specific projects or papers by that professor.
I am currently working with one undergrad and one 1st year PhD student. Previously I've worked with high-school students and undergrads.
Upvotes: 3 [selected_answer]
|
2021/08/10
| 1,052
| 4,596
|
<issue_start>username_0: I am a bachelor student and have been working for a professor for the last two years on a paper, for which I was paid. All the work is mine, but the data belongs to him. We recently had a falling-out and I no longer work for him. The paper is completed and only the submission remains, but at this stage he has stopped communicating with me. I have written to him more than three times and there has been no answer. I am the only one who worked on the paper, and it is a methods paper.
Can I publish without him? Can I put his name and send out for publication even if he does not agree ( or respond to mails)?
Or can I take method paper to a different lab working in the same area and then publish using their data?
What can I do now?
Edit: I don't think I am an overconfident bachelor student aiming for Nature or a Nobel. I checked with multiple people, and all of them said something along these lines: though the work won't make it into a top-tier journal, it still has more than enough new contributions for a paper.
The problem is that my supervisor and I had a falling-out over a slight remark he made about me, which I found offensive, and I was not diplomatic in calling that out in a Skype call with other lab members. Since then he has terminated my contract and won't reply to my mails.
My professor works in economics (data), and the paper is about Bayesian approaches applied to this data (statistics/AI). Other than the data, there has not been any intellectual contribution from my professor's side.<issue_comment>username_1: >
> Can I publish without him? Can I put his name and send out for publication even if he does not agree ( or respond to mails)?
>
>
>
This won't work. Authors bear a certain responsibility regarding the quality of the paper. So you can't publish without consent of all authors.
But note that your statement "All the work is mine but data belongs to him." does not imply that you can simply publish your findings without him. Your professor is likely to have made intellectual contributions such as defining the actual research question, putting you on the right track towards a solution, etc. And substantial intellectual contributions typically imply authorship.
Your best bet is to find a colleague of his who can mediate between you two. If the paper is actually completed, considerable resources have been spent on the side of the professor (time and money), so there is a chance that you can get this published with help.
Taking the paper to a different lab may work if (a) the professor is ready to give up authorship, which sounds unlikely from your question, and (b) the head of the other lab is willing to speak to your current/previous professor about the paper.
Upvotes: 3 <issue_comment>username_2: While there are instances of tyranny on the part of supervisors, there are also examples of overconfident students. Thus, the first step must be to seek some sort of mediation, or at least some outside opinion on the situation. This means finding someone willing to talk to you and your supervisor, hear both sides of the story and hopefully propose a way forward.
If agreement cannot be reached, then you are in a pickle. It would be incorrect on the part of anyone to publish a method paper without your supervisor as a co-author (as you implicitly acknowledge yourself). Moreover, “shopping” your method is unlikely to succeed: the cold reality is that serious experimental groups are unlikely to listen to outsiders - they have their own trusted tools and methods - much less an undergraduate involved in a dispute with a professor.
Upvotes: 1 <issue_comment>username_3: As students, we often think our results are the best. We always want some publication from work done during the bachelor's degree, ideally submitted to a top-tier conference, but your professor may not see it the same way.
The problem is that, from the professor's point of view, either the paper is not ready or it is not a top priority for him.
So, in the given scenario, you can politely ask your professor to allow you to publish the paper and to check the manuscript. The benefit would be that the paper's quality will improve, its chances of acceptance will rise, and the registration fees may be covered.
In another scenario, if you don't get a reply from him after sending 2-3 reminders, inform him that you will submit it to conference XXX. The side effect is that you will have to pay the registration fees from your own pocket.
Lastly, if you work in a university lab, the data never belongs to you; the university has rights to it.
Upvotes: 1
|
2021/08/11
| 377
| 1,584
|
<issue_start>username_0: A professor had agreed to an internship and asked me to remind her again before the start of August.
She has not responded to the reminder email I sent 10 days ago. Should I email her again? Would it be impolite to call her at her office?
I was very eagerly looking forward to working with her, so yes, I'm slightly desperate.<issue_comment>username_1: It is very likely that she simply overlooked the mail. You should definitely send her another one. It's entirely reasonable to keep asking until you get a response, even if she has suddenly changed her mind about hiring you.
By the way, it's summer now, and many professors go on vacation at this time. If this is the case, I'm afraid you'll have to wait until she gets back to work.
Upvotes: 2 <issue_comment>username_2: Sorry that this comes so late, but others may benefit.
In such situations it is advisable to contact the professor's department, usually via a public phone number or email address. A staff member in the department might be able to put you in contact if you explain the situation. Or, they might be able to explain to you why communication is impossible at the moment.
Don't make it a complaint against the professor, just an inquiry about how best to communicate. The staff member might have a lot of options for such things, including contacting the professor on your behalf or asking the department head for advice.
But this is for time-sensitive inquiries, such as the one in the question here. If waiting is reasonable, then wait until it is closer to deadlines that can't be missed.
Upvotes: 0
|
2021/08/11
| 453
| 1,971
|
<issue_start>username_0: I am going to apply for PhD in Cancer Cell Biology at West Virginia University. But before starting my application, I have come across a confusing problem. At WVU, the PhD in Cancer Cell Biology is offered under the umbrella of the [Biomedical Sciences Graduate Program along with other six departments](https://www.hsc.wvu.edu/resoff/graduate-education/phd-programs/biomedical-sciences/).
So, in my Statement of Purpose, which one of the following statements should be written:
1. I am applying to the PhD program in Cancer Cell Biology offered by the Biomedical Sciences Graduate Program at West Virginia University.............,
OR,
2. I am applying to the PhD program in Biomedical Sciences at West Virginia University.............
Need some valuable suggestions. Thanks in advance.<issue_comment>username_1: Either will do. The second is shorter, so the reader gets to the important content sooner.
Upvotes: 2 <issue_comment>username_2: Your best resource for all such questions is an internet search. Find the website of the university you are applying to and locate their admissions pages. They will often have a variety of useful resources; they might even have online forms you can copy, or example application letters.
Another possible thing is, search out the admin assistant to the head of grad studies in the place you want to attend. This person will be handling most of the correspondence for the Great Person, who is unlikely to be reading their own mail unless it is put in front of them. The admin assistant will be able to answer questions, and will be used to getting a lot of nervous questions from applying students who don't know such details. They will be quite patient with such things, especially in cases where people may be applying from other cultures or other languages, so mundane details will be confusing.
Be very nice to all admin staff in the uni. They can save your life.
Upvotes: 3 [selected_answer]
|
2021/08/11
| 7,913
| 34,314
|
<issue_start>username_0: I will speak from my experience as an integrated-Masters graduate in Physics in UK, but I think this is applicable to any discipline.
Universities got it right when it comes to laboratory experiments or research projects during the degree: they make students work in **pairs**. I loved my final year research project because being able to brainstorm and bounce ideas back-and-forth with my MPhys partner took away most of the daunting nature of research. Having someone right next to you working on exactly the same project can prevent students from feeling deeply disoriented and aimless. You support each other when you're feeling lost, and given you're both on the same boat, it takes away the natural response of procrastinating and losing motivation that comes with being alone in such situations. Thinking about things together as loosely as needed can give you a natural sense of motivation somehow and allows you to be a lot more productive and effective.
* More **productive** because together you are able to come up with a lot more ideas, and you motivate each other to keep thinking for longer periods of time without giving up.
* More **effective** because the ideas coming from you and from your partner build on top of each other, and gain strength to penetrate into the unknown. When you are on your own, it's more likely that your ideas will stack up at the bottom, not having the same penetration potential.
A [study](https://www.theguardian.com/higher-education-network/blog/2014/may/08/work-pressure-fuels-academic-mental-illness-guardian-study-health) in the UK reported that 46% of researchers feel lonely at work, which increases to 64% for PhD students. I believe the loneliness of independent research is the real root of the problem with Academia, the reason behind mental-health issues such as impostor syndrome, anxiety and depression.
I looked up on Google 'Why don't PhD students work in pairs' and there are no results at all. Why have we normalised that research should be done alone? Yes, you have your supervisor to discuss things with, maybe more than once a week if you are lucky, and yes, you have other PhD students around you to socialise and discuss ideas with, but this is not the type of loneliness I am referring to. It is the loneliness of independent research, where you get to your 3rd or 4th year of a PhD, and being so advanced in your project means **nobody understands the nitty-gritty of it like you do** and hence no one can really help you anymore.
---
Now, I can think of a couple of issues with pairing PhDs up:
1. **Money**. Universities don't want to pay twice the money for a single project. Say you have the same number of PhD students, but they're all paired up. The pyramidal scheme of Academia wouldn't want to halve the number of projects for the same amount of money. That is less publication potential so *nah*. They are increasingly putting up more Mental Health support services, but I think they are not tackling the root of the cause, just the symptoms.
However, the fact that putting students in pairs could make them a lot happier, more productive and effective means this could be turned around and could actually end up being beneficial for the research output.
2. How do you **match** pairs up? Pairing students up randomly could go wrong if they don't end up getting along well. And some students might even prefer working on their PhD on their own.
Just **make it a normal part of a PhD offer to let the student decide if they want to undertake their PhD alone or with a partner**. And put some formal procedure in place in case a pair of students really did not manage to get along. The procedure could just involve splitting the directions of research of the pair so they can carry on alone, or join someone else's project (as long as it's sufficiently related) who might have been split from their previous partner but would still want to work in pairs.
3. A PhD is preparation for **independent research**. PhDs who manage to finish the programme on their own show they are prepared to be successful in climbing up in Academia, in case they choose to apply for a postdoctoral research assistant job.
If PhDs could hugely benefit from working in pairs, the same idea could be extended to postdocs. I understand that once you get to positions higher up in Academia, especially after tenure, one gets so many responsibilities (meetings, lectures, tutorials, supervisions, etc.) that having a *pair* to do your research with becomes unfeasible. But still, given you have a fixed office in a department, nothing prevents you from having a collaborator with whom you work as closely as you want.
---
Are there any other reasons I have missed which make this unfeasible? Am I thinking about it wrong? If universities know pairing students up during their undergraduate studies is beneficial, why discontinue that afterwards?<issue_comment>username_1: This is an interesting thought experiment.
I think your last point is the main issue - a PhD is a training course in becoming an independent researcher. Sure, you could move the point at which you become independent up a few notches, but that just means you'll have to learn things you would otherwise have learned as a PhD student later.
You compare this to a final year project in an integrated Master's course (effectively an extended Bachelor's degree and taught in a similar way). That's fundamentally different from the way that a PhD or research Master's works - you aren't being taught about existing research by replicating it in controlled conditions, you're *conducting wholly new research*. That involves demonstrating that you - yes, *you*, not your research partner - can come up with, conduct and defend well-conducted research. How would you disentangle each partner's contributions to a thesis?
As you identify, money is important too - when you get to postdoc level and above you're spending large sums on doubling up each post. Staff costs are (at least in my field) the vast majority of grant costs so you'd have to drastically increase your grant income (this is more the problem than a university being cheap - you have to show that this is value for money for a funder).
Upvotes: 3 <issue_comment>username_2: This is from the standpoint of someone who has extensive experience and writings with "Agile Processes" including pairing in both computing and management. I've supervised a few doctoral students and would have been open to considering a "joint" project by a pair of students. I've thought about the issue, but it never actually happened. I've also done quite a lot of "pair teaching" in graduate programs.
However, *pairing* is a step beyond collaboration. In particular, it doesn't mean dividing up a problem into parts on which people work independently, even if they try to bring it together at the end. That "dividing" happens now in large scientific labs where an overall goal can be attacked with smaller, independent, goals, and doctoral students taking responsibility for one of them. The [answer of username_4](https://academia.stackexchange.com/a/173293/75368) explains this collaboration process very well.
In pair teaching, for example, both professors are present for all lectures and participate in all teaching activities. One of us would have the "floor" at a time while the other observed and made comments as necessary, perhaps also responding to some student questions independent of the "lecturer" of the day - though we didn't actually do a lot of what you can call "lecturing". But that is another matter. In any case, collaboration (team teaching) is much easier to arrange than pairing.
* Why it doesn't happen
One reason that it doesn't happen is that there is no tradition for it and, hence, some fear of the unknown. There is a rational basis for that, actually. As an advisor, I take on some responsibility for the future career of my students and I don't really have evidence that if I stray too far from traditional practice that it won't reflect badly on the students. I can't put them at risk.
Moreover, the two students in a pair are unlikely to be hired together to continue their work. I think that would be so rare as to be impossible to measure.
First, for it to truly be called "pairing" both students would need to be actively engaged in all practices all the time. Dividing up a project into two projects isn't pairing, though it can be collaboration. I'll stick with that stricter concept. A "dissertation" with two authors would need to pass a quite strict filter. First, the university would need to agree that it is acceptable, and award two degrees for one dissertation. More important, there would need to be some evidence, hard to obtain, that the participants don't suffer in the academic marketplace.
I think the above reasons are far more important obstacles than money or the "matching" problem. The latter is trivial. Permit it only when two students request it themselves. If they come to me with a proposal, then I'll consider it (with suitable caveats and warnings, of course).
* How it might happen
If two students came to me (past tense, as I'm retired), with a proposal for a paired project toward their dissertation, I would consider both the students and the nature of the project. This wouldn't have been a big surprise, since I integrated this sort of thing into teaching and much of their other doctoral studies was done in a team environment. They were used to working with one another and sharing things.
I wouldn't worry too much about "equal work" since the environment let me keep a close eye on what was going on, and peer evaluation was used as appropriate (not peer grading). We had very good 24/7 communication facilities.
But the nature of the project would be key in my view. It isn't that the project would have "parts" but that it would have "aspects" that might develop into distinct dissertations based on their paired work. Sorry, but I don't have a good explanation of "aspects" since I don't have an example in mind. But technical vs managerial aspects might have been able to pass the test. Or, perhaps, theoretical vs applied.
And note, that I'd still be wanting to see two dissertations, probably cross cited in many places, but enough independence that the university doesn't need to be involved and potential employers wouldn't worry about hiring either of the students.
All of this suggests that it might be *possible* to manage this, but I think it would be rare in the short term. If some evidence of success would arise from it, then it might expand.
Upvotes: 3 <issue_comment>username_3: In the labs I have worked in, graduate students discuss their projects with everyone else in the lab: other grad students, post docs, technicians and other specialist staff, undergraduate students, their advisor, other collaborating professors and people from their labs. They do this at every stage of their project: not just years 1 and 2.
The lab I am in right now currently has just one graduate student; she works closely with our lab manager who has over a decade of experience doing the sorts of experiments required for her thesis project, and she co-supervises three undergraduate students. She also has a co-supervisor who has two other graduate students and the three of them work in tandem, though they all have their own personal niches and projects. If you had her working in a "pair" instead of this whole research community, she'd be *losing* a ton of interactions with others, not gaining any.
Not all labs are like this of course, and I'm sure things vary a bit by field. Biology is a pretty communal research field compared to, say, pure mathematics. Working collaboratively isn't for everyone, and causes all sorts of problems: who does what, who gets the "credit", who makes decisions when there is disagreement. That isn't to say that collaboration isn't a good thing, but it seems like projects work best when there is someone who has ultimate responsibility for that project. In industry you will find people with a [project manager](https://en.wikipedia.org/wiki/Project_manager) role; in academia, there are no such project managers, so everyone needs to manage their own project to some extent. It's a skill that will be required eventually if a student is going to stay on the academic track; I don't see why they should wait until later to learn it rather than to start as a PhD student.
Upvotes: 5 <issue_comment>username_4: This is quite field specific. In biology it is very common for research to be a team undertaking. For example, the new project we are just starting has a team of 2 postdocs and 2 PhD students. Now, in general this isn't complete pairing, in that a given experiment will probably only be done by one person, and certainly one person at a time, but if things go well, this team will discuss things on at least a daily basis, if not more frequently.
Indeed, such is the strength of this team attitude that one of the main roles of a thesis committee is to make sure a student has sufficient material they can call their own to write a thesis based on their own work - in the end, in order to certify that someone is a capable researcher, you need to know what they did and what others did, and which ideas were theirs and which were someone else's. One cannot keep working as a pair forever - there is a reason that companies have only one CEO, or even one team leader.
There is also the credit issue - if things are done in a team, who comes first on the authorship list (as even where contributions are marked as "equal" or "co-first", people often distinguish between first-first and second-first).
I had assumed that this was also the case in physics, where paper authorships can run into the hundreds.
One distorted version of this is when two people put on the same project end up competing, rather than collaborating. This could be due to their personalities, but there are also stories of supervisors setting two people the same problem and saying "the first to solve it get the publication/grant". Giving people related projects, or specified parts of a whole, avoids this.
I have to say that far from loneliness, the team spirit and "lab families" I have encountered in science is one of the things I find most appealing about the job.
Upvotes: 2 <issue_comment>username_5: One particular issue of the proposal is with the matching up part. It can be hard enough to recruit one capable PhD student. Finding two capable PhD students with similar backgrounds and interests that can work together sounds like a nightmare. Sure you might succeed sometimes, but doing so consistently over all PhD hires sounds near impossible.
And even if you have managed to match a pair at the start of their PhD, that does not mean it will work out for four years. Since PhD projects pursue novel research, they evolve over time. A large part of this evolution is based on the candidate's interests and strengths (which should also be evolving over time). Since people are different, the evolution of a pair of PhD candidates is bound to diverge, leading to problems. (Or, more likely, the evolution is going to be dominated by the needs of one of the pair, impairing the potential for growth in their partner.)
This leads me to the interpersonal problems this model is likely to cause. Four years is a long time for two people to put up with each other (many marriages don't manage to last this long!). If you are essentially binding two people at the hip, many pairings will develop issues over time. You will probably end up having to send many to couples counseling.
One particular issue that might come up: What happens if one of a pair decides they want to quit after a year? Is their partner going to be left as a single? Is this person going to be guilted into staying in a position in which they are unhappy?
In short, I am not convinced that this proposal would actually lead to happier, more productive PhD students in general. It might work for some, but it will create great misery for others. This is not to say that there are not a lot of bad practices in supervising PhD students; I'm just not convinced that this proposal would be a solution.
Upvotes: 5 <issue_comment>username_6: While it might be viable as an option, if you can figure out how to deal with problems such as one member doing the lion's share of the work, it wouldn't work for everyone.
First, and fairly obviously, is your university, department, and research lab large enough that there are likely to be two people at the same level who want to work on the same idea?
Second, perhaps not so obviously, is that not all people are alike. Even the statistics you quote show that: there are plenty of people who DON'T experience those symptoms of loneliness, anxiety, & depression, or at least aren't bothered by them. By the same token, there are people who very much dislike working closely with others. Force those people to do so, and odds are they'll soon be the ones reporting psychological problems.
Upvotes: 2 <issue_comment>username_7: There is another aspect to this: Lack of a functioning research group...
As a result you find a topic "dumped" on you as some funding was available and you get to pick up the pieces trying to figure out what to do... Then once you finish your PhD, you leave and with you all the knowledge you had. The person who comes along a few years later once again has to start from scratch...
I think pairing people up can also be difficult with respect to making sure that your PhD is your own work, as well as with respect to "ownership" of publications, even more so given how some people seem intent on profiting for themselves rather than truly sharing and collaborating...
However this can easily be resolved by having multiple people work on different aspects of the same niche as then these people can bounce ideas off each other while not being in direct competition and in fact benefiting even more as they bring different backgrounds and thus ideas to the topic.
(This is possibly more a comment than a real answer, but it is, let us say, another experience...)
Upvotes: 2 <issue_comment>username_8: In addition to what others have said about collaboration within the broader research group being more important:
* Ad-hoc pairing on a sub-project can be really useful and get results. I've got several co-authorships that way, including my most highly cited paper, and another that actually involved *3* postgrads from 2 groups working rather closely together.
* Forming these ad-hoc pairings (whether with other postgrads or postdocs) is a skill to learn in itself.
* Two postgrads, even if they can be recruited at the same time, are rarely working on such similar material that they can both work as a pair and get sufficient results for two PhD theses.
* Funding: even with a plentiful supply of willing students, getting PhD positions funded is hard to very hard for the PI/head of group.
This is largely from a physical sciences point of view, but informed by conversations I've had with other early career researchers across fields.
Upvotes: 2 <issue_comment>username_9: I don't think that you get what a PhD really is - or at least what a PhD *should* be.
It is the consummation of independent, original and substantive research work that significantly adds to existing knowledge in that field and is deemed worthy of publication. It is also a qualification (though hopefully not the only one acquired) to teach in the discipline involved at university level.
Were universities to allow pairings to do a PhD programme - albeit where each individual looked at somewhat different aspects of a research topic - then how could examiners (or still less employers) fairly choose between those with paired doctorates and those with individual ones? It's an everyday observation that operational compatibility between partnered workers is worth far more than the sum of their individual capabilities. A luckily paired individual could vastly overachieve and end up appointed to a position far beyond his/her individual capabilities.
Universities have always been individualistic research environments. This may not suit some of us who may like a real sense of combined effort in our workplace rather than the more abstract (and yes, at times, lonely) idea of contributing an individual effort to some indeterminate final whole. It may not suit efficiency maniacs in the Department of Education. But that is the nature of learning - we can really only see things coherently when looking at them our own way or from what has evolved from looking at them our own way. Cogging insights from other perspectives and jamming them in amongst our own will never lead to the clear coherent understanding needed to teach a class. It's not for nothing that a doctoral programme graduate is deemed a Doctor of Philosophy, rather than Doctor of Arts, Doctor of Science, etc. A programme forcing us to dig deep into our own thinking on any topic, questioning it quite often, squaring it with our experiments and those of others, arguing our hypotheses at seminars and walking the lonely walk between work and home every day - all this cannot but deepen our personal philosophy.
As things stand one could sometimes say that 'pairings' of a sort exist already unofficially: many PhD students just do the experiments suggested by their supervisor; many pick the brains of other students/staff and shamelessly use these ideas as their own; and many are inclined to misuse others to prioritise their claim on limited resources/equipment time - or even in getting personal attention. But this is down to lax attention to people management and poor personal example by lead academics.
The socio-professional benefits of pairing you mention - PhD candidates occasionally using each other as sounding boards, helping younger research students with techniques, analysis, etc - these are all things that any decent research group should foster. Departments where academics themselves cooperate tend to produce research groups like this; those where they do not will not have it. I think the happy medium is where the latter cooperation is freely given but every PhD student accepts his/her own obligation to do their own individual investigations. Yet please bear in mind that no one in any job can expect **personal support** from those around them at work: that's our own business and our own responsibility to find outside of work.
Your assertion of faster progress with research by pairings seems to me more like faster development/rationalisation of existing ideas rather than faster progress towards new fundamental insights. If a pairing does produce a fundamentally new insight, I'd be inclined to see it as the work of one mind in the pairing aided by the human support of the other one.
Upvotes: 2 <issue_comment>username_10: I think there is no proper reason and the scientific community loses many opportunities by sticking to this concept of letting PhD students mainly do their research on their own.
Of course, there is no doubt, that working in a team can be hard and challenging. And sometimes you have to make compromises and even admit that you were wrong and your collaborator had just better ideas.
But for personal development this can - in my opinion - only be beneficial, as PhD students learn how to collaborate, how to support each other, and how to benefit from the specific skills of each team member.
---
Some of the previous answers argue that it will be difficult to decide in the end who is more responsible for the results, or who really made the work.
I think, as long as you do a good job, the people in your lab will notice; they will see that you are dedicated to your work and that you are interested in doing research.
Moreover, the underlying assumption at this point is that science is rather competitive than collaborative.
As there is always a tension between competition and collaboration, the former usually goes with separation while the latter leads to community. And as long-term separation is a breeding ground for anxiety and depression, a strong community leads to security and general well-being. (Admittedly, this is an assumption, but in my opinion a well-thought-out one.)
---
I, personally, was during my post-graduate studies in a working group with ~10 team members, where almost none of the PhD students could really talk with any of their colleagues, as their topics were simply too far apart. Needless to say, the atmosphere was kind of depressive. And most of them actually advised me against doing a PhD for exactly that reason.
In essence, I think we only grow faster, personally and mentally, when we collaborate and encourage PhD students to do so.
Upvotes: 1 <issue_comment>username_11: There seem to have been many good answers already and (sorry!) I haven't gone through all of them.
In my mind, you hit the nail on the head right at the beginning when you said, "they make students work in pairs. I loved my final year research project". In undergraduate work, you work in pairs, but usually in your third or final year... Paired work usually comes at the *end* of your undergraduate studies.
PhD is at the beginning of independent research. You're looking at it as if it comes after your final year of undergraduate studies. So if you did paired work then, then surely the next step should be more paired work or even group work. But PhD is (IMHO) the first step of another journey and not the continuation of your previous journey.
After you obtain your PhD, you'll have ample opportunities as a post-doctoral fellow and beyond to work in pairs or groups. Quite literally, if you choose to remain in academia then for the rest of your life, you'll work with others so much you'll get sick of it... The PhD is the beginning of this journey and it's this brief part when yes, you work alone.
(Of course, not every PhD program is like this. It varies from university-to-university and country-to-country. From personal experience, once you open up "group work" right at the beginning of a PhD, many people will abuse it. Sure -- perhaps you're a good person and you wouldn't dream of it...but many, sadly, would. It would be like allowing paired work and open book exams in first year of your undergraduate studies.)
Lastly, we should separate the PhD degree from the ability to do research. Anyone can do research without a PhD degree. So, two people (i.e., A and B) who have been brought into a lab to work together can be hired as research assistants (well, the term varies from country to country...but basically, the only qualification needed is an undergraduate degree) and produce a unit of research as a research publication with joint co-authorship. So, no university (AFAIK) is stopping joint, collaborative work...
Upvotes: 0 <issue_comment>username_12: TL;DR: people are different and you cannot apply one mold to everyone.
>
> Universities got it right when it comes to laboratory experiments or research projects during the degree: they make students work in pairs.
>
>
>
I studied physics and we worked in pairs. The real reason for that is that when you have X students and X/2 experimental setups, you have to pair people together.
I paired with a friend and it was fantastic: one week he would do the whole work, and the other week it would be me. (The TA was fine with that.)
Why? Because **we hated to actually work in pairs**. It was much more productive to do the whole work yourself because it was easy for us and we did not need to discuss the outcome (the experimental part was extraordinarily easy compared to the theoretical one where we often discussed the topics to try to understand).
I simply cannot imagine doing my PhD with someone else on the same topic. We would literally stomp on each other's feet, or constantly be pulled away from our own thoughts by the other person. Discussing the progress is fantastic, once every few weeks or so.
Please note that this has little to do with being good or not. It is rather a personality trait.
I will also add that one dinner with a friend during my PhD completely changed its track, because she suggested something I had not thought of, and it was revolutionary (for my thesis and a tiny little bit for the niche field I was working in). I praised her in the thesis even before my supervisor.
---
Fast forward to yesterday, 20-30 years later. I am now in industry and was working with one of my directors to remodel his team. One of the questions I individually asked the three best people in that team was *"do you want to work together (as in "at the same time") on a problem, or do you prefer to work on it by yourself?"*
The answer from each of them was a clear and sound "by myself". I asked why and they said that they genuinely like the other two (which is true), that they know that they are excellent (they are) but that they would be more productive concentrating on the problem by themselves and discussing/being challenged afterward.
Upvotes: 2 <issue_comment>username_13: I'll summarize my answer with: **A PhD is not the moment for that**.
Groupwork in undergraduate is extremely important because for those who do not follow academic careers it might be much more important to have social rather than technical skills. This is often said about many professions that are not highly technical, and at high-level management positions, technical knowledge is often of lesser importance compared to other skills, with great attention to managing and communicating with people.
During a Master's degree I'd expect someone to interact a lot with his/her advisor, co-advisor, and lab colleagues, maybe naturally find someone akin to what you describe as a research partner. Because at that moment the goal is to learn well all the stuff required to reach the edge of knowledge, to understand the state of the art, to learn how to stand on the shoulders of giants. So getting help and giving help within this context is good for everyone, as long as your master thesis remains, well yours. Some of this should still take place during a PhD, but some level of independence should also surface at this point.
I understand that a PhD is also a gateway that separates out people who find research interesting in principle but are simply not fit for it (I find bodybuilding very interesting but have absolutely no place being a professional bodybuilder). It is a one-shot opportunity to demonstrate your capability to master the state of the art in a scientific subject, develop a rather complex project, execute it, make a verified contribution to the state of the art, and last but not least take credit for it. You may disagree with this, but a lot of regulations in my country seem centered around those principles, such as: you cannot ask for funding from a public agency unless you have a PhD (because the government does not trust you to conduct research with their money unless you have a PhD).
Once you've done it, society recognizes that you've shown yourself capable of:
* Understanding the state of the art in a scientific field
* Reviewing the scientific literature of your field
* Advising other students
* Managing research
* Reviewing proposed research articles
Now, you are eligible for several jobs in academia and in the industry where those skills are required, and guess what? You are in general not allowed to have a partner in all of those activities. The same way that the CEO position is very rarely shared between two or more people, any pure research or R&D project with grants, liabilities, and staff to manage require that one person to be ultimately responsible for some decision making, even if there is a group of people one can ask for guidance.
If your university forced or strongly incentivized students to pair up for PhD programs, can you really be sure that each member of every pair has demonstrated all such skills? I mean, if you feel lost when you're not sure what to do with your research, you'll feel the same when you are no longer able to pay a researcher's stipend because your project lost its grant, or if you have to fire an employee because the R&D project got canceled or the budget was cut. The feeling is natural and not a problem, but handling the situation is the required skill.
To be clear, I'm not saying a PhD degree is the ultimate absolute proof of any of that, but it should be distinguished as a very strong indication of such.
Thinking back, I've seen many group projects in grad school where, out of five students, one would do absolutely nothing. I've seen students who would often get help from one of the class's top students and literally claim that they owed their (undergrad) degrees to that student. I've also worked with a guy in a company who was basically unable to do anything meaningful on his own. That guy enrolled in a Master's program and failed to deliver a thesis. Which was no surprise, as he always needed someone else to bridge gaps in his knowledge, plan things for him, or review his ideas (preferably also correcting mistakes) and give him suggestions. When his advisor refused to hold his hand and act as a partner such as what you described, he just could not deliver. Do these descriptions paint him as someone who is really knowledgeable in his field, and thus able to manage research, advise students and review other people's work? Maybe you'd like to have a guy like this as a partner, but surely I'd prefer not to have him as an advisor or as a research manager. Let alone as a researcher at your university.
All that being said, once the PhD is finished, and this minimal demonstration of individual capacity is performed I'd see no problem in pairing up with another researcher for future work, no matter if for post-grads or tenured professors. Whatever is better than the sum of parts seems welcome if you ask me. However, once again, if either member of the duo decides to quit for a better position, gets hit by a bus or if the relationship sours, the remaining person is still expected in real life to be able to perform as an individual.
Upvotes: 1
|
2021/08/11
| 956
| 4,217
|
<issue_start>username_0: I submitted a paper to a mathematics journal last year and got a "revise and resubmit" type of response. I edited the paper following some of the referees' comments and suggestions, but the paper has changed quite substantially since then. I brought on a coauthor, reworked some of the basics (not at the suggestion of the referee), added a lot of material, and ended up more than doubling the length of the paper. However, certain core aspects remain from the original paper.
I would like to submit to a different journal because I feel the paper is now somewhat different and of a higher quality. I am worried though that there is some ethical obligation to stick with the original journal because of the labor that the editor and referees have already put in with the first version of my paper; otherwise it seems that labor is wasted with respect to their publishing interests.
**Should I stick with the original journal, or is OK to switch?**<issue_comment>username_1: There is no obligation. The paper is yours. Make your own decisions. They have no obligation to publish the revised paper and you have none to resubmit to them either.
They have spent some limited resources in helping you, but that is within their business model. The referees have done you a service, but many review for more than one journal. But that service is just one that we do for one another in the pursuit of a greater goal.
You haven't signed a contract. Make your own best choice.
If the option is open to you, formally withdraw the paper or inform the editor that you won't be resubmitting so that they can arrange their systems appropriately. That might also be an opportunity to thank them for their consideration.
Upvotes: 6 <issue_comment>username_2: I think it would be wrong to *intend ahead of time* to submit to one journal just to see what the reviews say, and then to submit elsewhere. Kind of like interviewing for a job you know you won't ever take just for interview experience.
Your circumstance is different: the paper has changed substantially and you don't see it as fitting this journal any more. You don't have a contract with them, the referees are taken from the academic community you belong to and aren't belonging to the journal. The editor's time is worth considering but overall that's a minor bit of effort for just one paper.
Your revised paper needs a thorough peer review from start to finish now. It's not merely a modification of the old paper if you've doubled the length and added a bunch of content. I think you can feel free to submit it where you feel it is most appropriate. It could possibly be a faster process to stick with the same journal, but that may not be a strong concern of yours, and may not even be true given the extent of changes made.
Upvotes: 4 <issue_comment>username_3: I would feel some responsibility to the original journal, as the editor's ability to get the manuscript to the right referees was a factor in improving the paper. That said, to do as you described isn't all that bad.
Upvotes: 3 <issue_comment>username_4: Referees and editors usually aren't paid; your withdrawing the paper doesn't take anything that's theirs. And the typically parasitic publishing companies generally deserve no loyalty. Your loyalty to your profession - to get your paper in the most appropriate journal - should come first. If they see your paper in another journal, and see that their feedback has helped, they should smile. Of course the specifics for this journal could be different.
Upvotes: 2 <issue_comment>username_5: Your only obligation is to take the reviewers comments seriously and act on them before submitting the revised manuscript, which you have done (kudos - there are many authors that don't and just submit somewhere else).
Upvotes: 2 <issue_comment>username_6: Since you reworked some of the basics, independent of what the referees said: I think it's morally acceptable to submit elsewhere, and likely legal, though check the applicable laws involved. *NOTE:*
I am US-based. We submitted several papers that were somewhat similar several places, but they vary enough that it was valid to do so. This was decades ago.
Upvotes: 1
|
2021/08/11
| 3,427
| 14,794
|
<issue_start>username_0: I have many PDFs (articles, reports and other literature) in various folders, and most of them are also in Endnote, which I have always used as a reference manager. However, the references in Endnote are not linked to the PDFs (most of the references were downloaded from journals' pages, but Endnote can find the PDFs for only ~30% of the references I have in there) and I do not have any 'tag' for the papers. My plan to fix and have a more organized library is the following:
* Re-download references directly from journals (hoping that Endnote this time might find the PDFs for these)
* Link remaining PDFs manually, putting all PDFs (including the ones from the point above) in one folder
* Link PDFs to NVivo, so to add different tags to the same article and to search more easily through the literature
However, this manual process seems to me very slow and easily prone to mistakes. Therefore, a few questions:
* Can this process be automated somehow? Looking into the future, how do you go with downloading individual PDF and references and keeping the literature tidy?
* Are there other options for having all PDFs in one folder (possibly with file names corresponding to "author year title"), linked to a reference management software (Mendeley, Endnote, or Zotero?), linked to NVivo?
* More in general, **how do you manage your literature**?
Thank you very much in advance!<issue_comment>username_1: To answer your general question "How do you organise your literature?": I don't. I let websites such as web of science, scopus, medline and google scholar do that for me. It may not work for everyone, but I think it is worth seriously considering this option.
I only manage a "private library" of references while writing a review or article, and this library contains only the papers relevant for the publication. I found this to be sufficient: the useful references will be listed in the bibliography of the publication.
Other reasons not to manage your own library:
* libraries of literature already exist on the internet
* search engines are good if you know how to use them
* your library will quickly become too large to manage (unless you want to become a full-time librarian)
* both your interests and the relevancy of the articles in your library will probably change over time, leaving you with a big pile of articles you will never read again
* being well organised often means knowing what NOT to keep: hoarding papers is the opposite of organising a well-curated library
To put it bluntly: don't waste your time trying to get the perfect library. Not only is this nearly impossible, but there is probably no need.
I realise that there are exceptions, and that certain fields need extensive literature libraries, and that certain universities may not have easy access to online publications. This answer is not intended for those exceptions.
Upvotes: 3 <issue_comment>username_2: I have approximately 15K PDF documents organized roughly according to [Universal Decimal Classification](https://en.wikipedia.org/wiki/Universal_Decimal_Classification) (UDC) in about 2.8K folders. I synchronize these files across a couple different computers with rsync. I use UDC to get some basic organization and have many improvements and additions to the classification as I find UDC to have gaps and be illogical in many parts.
I also use [Zotero](https://www.zotero.org/) to manage citation data for the documents, though the folders and Zotero are not directly linked. The files are named consistent with the citation keys in Zotero, so, for instance, if I know an article has the citation key citation\_key in Zotero, I know to look for citation\_key.pdf in the folders. I have some helper scripts to automatically open a PDF file or its folder given its citation key.
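The lookup described above (citation key in Zotero → `citation_key.pdf` somewhere in the folder hierarchy) could be sketched as follows. This is a hypothetical helper, not the author's actual script; the function name and library layout are assumptions.

```python
"""Resolve a citation key to its PDF file inside a folder hierarchy.

Files are assumed to be named <citation_key>.pdf and may sit anywhere
under the library root (e.g. in UDC-numbered subfolders)."""
from pathlib import Path
from typing import Optional


def find_pdf(root: Path, citation_key: str) -> Optional[Path]:
    """Return the first file named <citation_key>.pdf under root, or None."""
    # rglob walks the whole hierarchy, so the UDC folder need not be known
    return next(root.rglob(f"{citation_key}.pdf"), None)
```

The returned path could then be handed to the platform's default opener (e.g. `xdg-open` on Linux or `open` on macOS); that wrapper is left out here.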
The folder approach has many advantages:
* Much faster than citation managers or search engines.
* Future-proof. Presumably computers will have some sort of hierarchical file system in the future. (Also see below about losing journal access being a reason to keep PDFs.)
* Uses the existing file system, which means no additional software is required. This also means there is no vendor lock-in. And you can use a wide variety of existing software to do local search, for example, using the native search tools, [Everything](https://en.wikipedia.org/wiki/Everything_(software)), or [Recoll](https://en.wikipedia.org/wiki/Recoll).
* I can get hyperspecific on the topic. To give an example, [dimensional analysis](https://en.wikipedia.org/wiki/Dimensional_analysis) is an interest of mine. Most classifications might have only one subdivision for the subject. I have about 150.
* I personally find that organizing documents helps improve my understanding of a subject. It often takes a fair amount of knowledge to know how to categorize something.
* When compared against tagging, using a folder hierarchy is basically "tags with inheritance". This makes getting very precise easy, as you can simply place a file deep in the hierarchy. To do the same with tags, you'd have to apply more tags, which usually takes longer in my experience with software that uses tags.
A common argument against a hierarchy is many documents or parts of the hierarchy should appear in multiple parts of the hierarchy. I get around that with shortcuts/symlinks/whatever your computer calls them. I have about 3.3K symlinks at the moment. Yes, this is not ideal, but in practice it works fine.
My approach works for me, and I know from [r/datacurator](https://www.reddit.com/r/datacurator/) that I'm not alone. Many people disagree with this approach, and that's fine. Different approaches work for different people. With that being said, I want to respond to username_1's answer:
>
> libraries of literature already exist on the internet
>
>
>
Why should one be limited by what's online? There are gaps and flaws in any database, and there are many documents that have never appeared online. My own personal collection fills in these gaps and corrects the flaws in my field and is a competitive advantage I have over others.
Also, your journal access may change in the future. Keeping copies of papers ensures that you don't have to figure out how to get them later if your journal access changes.
>
> search engines are good if you know how to use them
>
>
>
As I've said, all databases have gaps.
Also, don't overestimate human search abilities and memory. There have been many times where I remembered that an article existed on a particular topic, but I couldn't find the article easily or at all.
And academic search engines today focus heavily on text search. I know from my experience searching patents that classification search is essential to get around the problem of unknown synonyms in text search. For more esoteric topics like those studied by academics, I haven't found semantic search engines like Google Scholar to consistently understand what's a synonym, so that's not a replacement. One way to do classification search is to make the classification yourself, as I do. (Another disadvantage of text search is information that doesn't appear in the text or is difficult to find with text search. This includes information in figures and anything that requires some sort of analysis to determine.)
>
> your library will quickly become too large to manage (unless you want to become a full-time librarian)
>
>
>
This is merely an assumption. In my experience, 15K documents are manageable with a reasonable organization scheme. (The organization scheme doesn't need to be folders. I think tags could work great too!) I believe I could handle an order-of-magnitude more documents without much trouble. I see many others put everything in one giant folder or search online to get papers again, but to me either approach is unmanageable. I couldn't manage a couple hundred documents in one big folder.
The time spent organizing the documents is not that large and pays off in the long run, I think. I work a full-time non-research job and I completely reorganized my document collection late last year on my own time. (That was when I switched to UDC.)
>
> both your interests and the relevancy of the articles in your library will probably change over time, leaving you with a big pile of articles you will never read again
>
>
>
Yes, there are many documents I have saved that I may never look at again. It doesn't cost me much to keep those documents. As stated, one's interests might change, but that's not necessarily an argument for not saving documents. If I've already done the work to carefully classify the papers, I might as well keep them in case they end up being valuable in the future. My interests don't change randomly, and I am far more likely to become more interested in something I have experience with than some random topic unrelated to anything I've done. Quite a few times I had a small amount of interest in a topic at one point, but this interest grew a lot, and I valued the documents I had saved already on that topic. Many of those documents I might not have found again, or I might not have made a connection that would have led me to classify the document how I did the first time.
>
> being well organised often means knowing what NOT to keep: hoarding papers is the opposite of organising a well-curated library
>
>
>
I don't save documents indiscriminately and don't recommend doing so. Some documents are more important than others, yes, and I highlight those with README files.
---
I wrote [more details about my system at r/datacurator](https://www.reddit.com/r/datacurator/comments/p75xlu/how_i_organize_about_15000_research_papers/).
Upvotes: 3 <issue_comment>username_3: What you are trying to do makes a lot of sense, but in practice, it might be quite difficult to achieve. I will try to give a high-level explanation as to why I think so.
The key issue is that any software that manages PDFs must maintain control over the exact file location, including the file name. This does not mean that they must dictate in which folder you place their files (on the contrary, most will let you specify a root folder of your choice), but they must be able to control the precise names of the files. Suppose, for example, that NVivo is trying to track the same exact PDF files as EndNote. Well, if NVivo changes any file name, then EndNote completely loses these files, resulting in broken file links in EndNote. And vice versa: if EndNote changes any file names, then NVivo loses these files. Thus, unless one software is integrally synchronized with another such that it is aware when the other changes any file name and can then automatically update its own records, two software systems cannot share the exact same PDF files. This is a simple restriction, but it is rather definitive--because of this, shared PDF folders are not very feasible.
Although I doubt what you would like could work, I can suggest an alternative workflow as a second-best option. You could consider EndNote as the primary manager of all your documents, so it would be the default and definitive database of all your files. (You would need to manually attach many PDFs as you described in your question.) Presumably, you do not work on all your documents during a literature review project. Perhaps you have 1000 documents total, but work on up to 100 or 200 in one literature review project. Then you would need to import (copy) those 100 to 200 documents into NVivo, and then take advantage of its features to do the in-depth literature analysis.
I know that this solution involves duplicating many articles (in my example, 100 to 200), but I do not see a way around it. I use different software from you (I use Zotero instead of EndNote and Excel instead of NVivo), but when I have such needs, I duplicate my PDFs in this kind of way.
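The duplication step described above could look something like the sketch below, assuming the master library stores one `<key>.pdf` per reference. The function and folder names are hypothetical, not part of EndNote or NVivo.

```python
"""Copy the PDFs for a project's citation keys from a master library
folder into a per-project folder, skipping keys with no attached PDF."""
import shutil
from pathlib import Path


def copy_subset(library: Path, project: Path, keys: list) -> list:
    """Copy <key>.pdf for each key into the project folder; return the copies."""
    project.mkdir(parents=True, exist_ok=True)
    copied = []
    for key in keys:
        src = library / f"{key}.pdf"
        if src.exists():  # silently skip references without an attached PDF
            copied.append(Path(shutil.copy2(src, project / src.name)))
    return copied
```

The copies can then be imported into NVivo (or any second tool) without either program disturbing the master library's file names.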
Upvotes: 1 <issue_comment>username_4: I have been using the [Mendeley](https://www.mendeley.com/) computer app for a long time now (which you have mentioned, so I am not sure this is a valid answer, but it seems to me that this might be an under-used tool in your case). It allows you to organize your PDF files in a customized folder (i.e. you import a file and it is then copied to a custom folder under a custom name, such as author\_year, just as you mentioned -- you must configure these features of course). When you do a search it shows not only the titles, but also the body content matching the search you performed.
Additionally, it has a bunch of features that are suitable (for me at least): you can configure it to generate bib files with your references (in a single huge bib or individual bibs -- which is important for LaTeX users); you can confirm the article's information using its doi for instance (I find this extremely helpful, especially with older pdfs that are not automatically recognized because they have been scanned instead of created digitally); you can synchronize it with all the computers that you use Mendeley and even access your account online if you don't have it installed.
I highly recommend it! This has been an amazing tool for me.
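As a side note, the bib files mentioned above are plain-text BibTeX entries, so they remain usable even outside Mendeley. A generated entry looks roughly like this (the key and all field values here are invented purely for illustration):

```bibtex
% A single BibTeX entry as a reference manager might export it
@article{doe2020example,
  author  = {Doe, Jane and Roe, Richard},
  title   = {An Example Article Title},
  journal = {Journal of Placeholder Studies},
  volume  = {12},
  pages   = {34--56},
  year    = {2020},
  doi     = {10.0000/example}
}
```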
Upvotes: 0 <issue_comment>username_5: This may be overkill for the case, but after >15 years in academia and many organization systems, ranging from fully manual folder structures to mediocre automation systems based on citation managers, as well as committing to software that stopped being developed (I’m looking at you, Sente), I’ve found that the most productive and useful system for me is to use DevonThink Pro as my main database of PDFs. DT allows tagging, notes, custom metadata (like proper citation info), full internal PDF search, and operates like a GUI file explorer. I then use Bookends (though you could use any reference/citation software for this part) for specific project/paper bibliography lists, so that I can scan my working document for citation bangs and generate properly formatted reference lists and in-line citations, depending on the required citation format.
I find that, at least for me, this works WAY better than having a meta-library (in whatever reference software - and I’ve tried them all) of every reference (with or without attached PDFs) I’ve ever come across and found interesting or relevant, with sub-libraries for individual projects.
However, the caveat is that I use MacOS for these tools and it is hyper-specific to my needs, which involve being able to search within >30,000 PDFs (some academic, some archival) for relevant information on the one hand, and then having a very specific list of actually cited documents, with correct citation data, for unique/project or paper-specific reference lists, on the other. Just my two cents based on experience.
Upvotes: 2
|
2021/08/11
| 1,610
| 6,590
|
<issue_start>username_0: I just finished my second year in a social science PhD program and have only one semester left to finish my coursework. Last semester my advisor A told me she was gonna take a job offer from another university and could not bring me because of lack of funding. I was her only PhD student, and she was the only person whose research aligns with mine, so I was quite frustrated. This summer I asked another professor B whose research is kinda close to mine, and she took me over as her advisee, which I really appreciate. They are both very nice to me, but I realized that prof B does not really know a lot about my field, she is very kind, but the problem is we do not work in the same field.
So I am considering whether it is possible to apply to another program. Both A and B are very nice people, especially B, so I really don't want to hurt their feelings or bother them too much. My department faculty members are generally very nice, so I feel guilty about planning to betray them. FYI I'm an international student, so I couldn't really take a break to work or do something else. So here are my questions:
1. I know applying to another program while being enrolled in the current phd program does not look good. Is there any way to make it look not that bad? Will this piss my current professors off?
2. Shall I tell prof B about my thoughts? She is the dept chair, if I tell her about my plan, is it inappropriate to ask for her recommendation letter?
3. When shall I inform my former advisor A and current advisor B? We will have a committee meeting with two other committee members in September, shall I tell them before or after the meeting? I saw people suggesting that you should never reveal it to anyone unless you get an acceptance letter, but would that be a dishonesty issue?
4. Shall I specify in my personal statement that I'm in another program and the reason I want to leave?
5. If A and B are pissed off, what would happen? How bad could it be?<issue_comment>username_1: I think you are overly pessimistic. Yes, you can apply elsewhere and you have perfectly good reasons. Yes, your current chair (and advisor) should support you in this. Yes, you can be honest about your reasons. The "program" left you, it isn't your doing.
Under the circumstances I would hope (and expect) that you get good recommendations for a move.
I doubt that 5 is going to happen. I would consider it student abuse if they don't support you.
But, also consider all options, including changing your sub-field slightly if there is someone else who can support you. The issue is the time to completion. Changing schools might require more time (or not). The other school will have its own requirements, such as for qualifying exams, that you need to meet.
Give up all thoughts of "betraying" anyone. Your career should be high on their list of priorities under these sorts of circumstances. Good luck.
I'll also note that a lot of students are only now (end of coursework) getting around to choosing an advisor anyway. Perhaps you can get a MA for your efforts on your way out the door, but now, rather than later is the time to move if that is your decision.
Upvotes: 3 <issue_comment>username_2: >
> I know applying to another program while being enrolled in the current phd program does not look good. Is there any way to make it look not that bad?
>
>
>
On the contrary, this only looks bad if you were unsuccessful in your first grad school. But in your case, the only professor who works in your subfield left; this is a perfectly good reason to transfer. Especially if your coursework performance is acceptable. You should definitely see if you can get a master's degree before you leave.
>
> Will this piss my current professors off?
>
>
>
You say they are nice, reasonable people, so I seriously doubt it would be an issue. B might be disappointed if they were excited about working with you. But advising a student (especially one in a different subfield) is a lot of work, so there's also a chance that B will be relieved. And certainly it's better (from their perspective) to make this decision now rather than three years from now.
>
> Shall I tell prof B about my thoughts? [When?]
>
>
>
I would recommend approaching this as a conversation. Rather than telling B that you are going to leave, diplomatically explain your concerns, float the possibility of applying elsewhere, and ask for her thoughts. This approach leaves you some ability to back-track if she responds poorly, and it gives her the opportunity to find an alternative solution (e.g., maybe she can bring in an external co-advisor who works in your subfield).
>
> She is the dept chair, if I tell her about my plan, is it inappropriate to ask for her recommendation letter?
>
>
>
Absolutely appropriate.
>
> Shall I specify in my personal statement that I'm in another program and the reason I want to leave?
>
>
>
Yes, and your letter writers should do this too. The new school will certainly see from your transcripts that you are currently in another program. Your reason for wanting to leave is a perfectly good one, but if you don't provide it, they may fill in the blanks with something less good.
>
> If A and B are pissed off, what would happen? How bad could it be?
>
>
>
Theoretically, I guess they could withhold letters of recommendation and B could stop advising you. But realistically, I think the main risk is that if you say you want to leave and request letters but then don't get a good offer somewhere else, you'll be in a somewhat awkward, uncertain place.
Upvotes: 2 <issue_comment>username_3: At this stage of a PhD you have your own ideas about your research and are learning to fly on your own. You (probably) still need advice and encouragement from time to time, but you don't need day-to-day supervision like you did when you arrived and it was all new.
It looks to me like there is a sensible way forward, with B as your formal supervisor for the purposes of university paperwork, while you continue collaborating with A as a colleague, working and publishing together through emails and Zoom, much as you would have done anyway. A is then getting some free help in her research field, so everybody is happy.
Your department owes you. You've given them two years; they have a duty to take you the rest of the way, so they shouldn't object to your working with someone from another institution. It is their problem, and you should absolutely not be forced into transferring/restarting or bailing out with an MA.
Upvotes: 1
|
2021/08/12
| 456
| 1,996
|
<issue_start>username_0: I work in a lab on a variety of projects using a variety of techniques. For a recent project I was thinking we are probably going to use fancy machine learning method X which I am vaguely, but not intimately familiar with.
I have no doubt that there is a plenty of information about X online and that there are no shortage of free resources via a variety of means. However, there is also a new text which covers X among other topics that I was going to buy anyway. Is it appropriate to bring up the new relevant text with the professor with the hope of getting a copy from the research budget seeing that it is relevant to our recent project? Or is it better for me to just buy my personal copy out of pocket?<issue_comment>username_1: Yes, it is appropriate to ask to buy books relevant to research with research funds. Professors often do so for their own use. It's certainly possible your professor would prefer to save the funds and rely on other resources available freely or through the university's library, but that's not a reason to avoid asking.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I would say yes. However, you may want to check your (university) library as well. They regularly buy books that researchers need, and they may have a procedure for that.
Upvotes: 2 <issue_comment>username_3: I would check with the library first, and also check how long you can use the book if they do buy it for you. The book will become the property of the library, so you will need to hand it in afterward. If that doesn't pan out, just ask the professor what the procedure is for books you use in your research. The answer will let you know if it is customary to buy it for you, or if you have to buy it yourself. Also ask if you have to return the book after using it, just to be sure. Before all that, find out if this is interesting for others in your team. You might be able to study it together, to stay motivated and to inspire each other.
Upvotes: 1
|
2021/08/12
| 2,741
| 11,057
|
<issue_start>username_0: The kind of full-face veil I'm thinking of is the [Niqāb](https://en.wikipedia.org/wiki/Niq%C4%81b) - it covers the entire face except the eyes.
I don't have any ideological objections to people who choose to wear the full-face veil. However, by its very nature, the veil makes it hard to hear what the student is saying since it muffles the voice.
If an instance arises where such students are hard to hear, is it appropriate to ask them to speak louder? I notice students who wear full-face veils already tend to be rather silent, and if doing so makes them even less likely to speak up, it might be counterproductive.
**Edit**: the alternative is to walk closer, which could be intimidating.<issue_comment>username_1: Maybe you could talk to the student at the end of the class and tell her the problem. Maybe you could start with discussing the question/idea that she proposed during the class and then lead on to tell her the problem.
I think it would be perfectly fine to tell her, what you wrote above, that you don't have any ideological/religious objection, that you perfectly respect her religious beliefs or norms, but it just muffles the voice, so you have trouble hearing what she is trying to say.
Also, I think if she speaks up now, that means she is confident about herself and feels comfortable in your class and with other colleagues or classmates that are present in your class. So, if she doesn't mind speaking up now then after you tell her to be a bit louder, I don't think it would make her less likely to speak up next time.
Even with people who don't wear a niqab/full-face veil, we sometimes have trouble understanding or hearing what they said (for whatever reason), and we ask them to speak louder or repeat themselves; that seems like a very casual and natural response from a person who is trying to understand them. It also means you are paying attention to the speaker and not ignoring them. When you ask someone to speak louder or repeat themselves, nobody judges you or questions why you asked. I think the same thing would happen in this situation as well, unless you ask her to be louder every time she speaks. In the latter case, I would discuss the problem with the student directly.
Upvotes: 2 <issue_comment>username_2: It is appropriate to ask a student to speak louder if, and only if, that is necessary for you to hear and understand what they are saying. Of course, I assume you will make the request politely and in a friendly tone of voice.
This applies to all students regardless of the specific reason why you are having trouble hearing them.
Don’t bring up religion or religious garb; they have nothing to do with the issue. Don’t say you “believe in tolerance”, “don’t have an ideological objection” to anyone’s choice of what to wear, that you “respect everyone’s religious beliefs”, or anything of that sort that conflates the practical issue with irrelevant underlying causal effects. Such assurances may be well-intended and motivated by a desire to make the student feel at ease with your request, but they could easily have the opposite effect and be perceived as treating the student differently based on her religious beliefs. The best way to make the student feel respected is to treat the matter as the purely practical issue that it is (and to treat the student’s religion as the irrelevant factor that *it* is, by not referring to it).
Now, if this approach I’m recommending still has the effect of causing the student discomfort and making her avoid asking questions, that is not your fault. It is too bad, but even the responsibility of a teacher to be as supportive as possible has its limits. Teaching is a two-way street, and if a student can’t speak loudly enough so you can understand what they are saying, for whatever reason, your ability to offer them effective instruction is unfortunately going to be somewhat limited.
Upvotes: 8 [selected_answer]<issue_comment>username_3: >
> I notice students who wear full-face veils already tend to be rather silent, and if doing so makes them even less likely to speak up, it might be counterproductive.
>
>
>
Perhaps, but I also can't imagine it's productive if they speak up but you can't hear them (and therefore presumably can't respond properly to their question/comment). If you can't hear what this student is saying, for whatever reason, then I see no reason that it would be impolite to ask her to speak louder. In my case, I have quite bad hearing anyway, so I sometimes ask students to speak up, and I often accompany this by telling them that my hearing is not so good. Sometimes I have to do this a few times if a student is speaking softly. If you would like to "soften" the request to speak up then you could put the blame on your bad hearing (as I do).
As a secondary matter, I see the duties of a good academic as going beyond the teaching of their narrow specialty area, also encompassing the teaching of the kinds of adult soft-skills that separate professionals from young adult students at university. If I encounter a student who speaks too softly to be heard (or in this case too softly to be heard through an impediment) then I consider it incumbent on me to encourage this person to remedy this soft-skill deficiency (i.e., they need to speak louder). If we were in a theatre class then the student would be encouraged to "project their voice" and so I would encourage the same. I know that when the student gets into the professional environments she may have to speak to crowds, and this will require a confident audible speaking voice. By encouraging this in the university learning environment, it will not be a stretch for the student when she gets into a professional situation where she really needs it.
Upvotes: 4 <issue_comment>username_4: For a long time, this kind of problem would have concerned only the few fully veiled persons, but in times of COVID, with everyone wearing masks in a lot of conversational situations, the issue that you cannot (fully) understand what people are saying because their mouth is covered has become more ubiquitous. A cloth (of whatever nature, worn for whatever reason) in front of the mouth impedes understandability not only because there is a barrier for the sound waves, but also because lip movements (which might enhance understandability) are obscured. If you could not understand a mask-wearing person, you probably would not think twice about asking them to speak louder, and I don't see this situation as any different. It is completely appropriate to ask someone to speak louder because their mouth is covered.
Upvotes: 4 <issue_comment>username_5: Let me mention a related issue and a possible solution to this one. I'm very deaf. The nature of it is that, even with a hearing aid, conversation, even in ideal circumstances, is nearly impossible. The level of sound is increased, but not the clarity needed to understand. But I partially lip-read. I've had to ask doctors to remove their masks in consultations, though we step well away from one another. Only then can I understand their instructions.
For many people and in many situations, mask or other face coverings can't or won't be removed. And some people just have learned to speak very softly, for whatever reason.
But, written communication can help. If electronic mediation is possible, students can type questions and responses, perhaps even anonymously.
An instructor in a face-to-face situation can carry a few index cards, perhaps, that can be given to students to write out questions, or whatever.
But there are also microphones that can be employed. There is one kind that feeds directly (bluetooth) into a hearing aid. It can be handed to a student that has a question (face-to-face).
So, old (paper) and new (electronic) tech can be used to mitigate issues, including the one raised by the OP, but also beyond that.
---
I don't know enough about cultural norms to know whether the request to speak up would be interpreted as insulting. I can't answer the question here directly, but suggest that you need to explore those norms if you really want to keep "connected" to your students.
Upvotes: 3 <issue_comment>username_6: ### Just say, and don't sweat it
Ironically enough, this is a problem which Covid has made a non-issue. ***Everyone*** has been wearing face masks, and many of us still are in enclosed spaces. We all know how masks make your voice less clear, and we're all pretty used to saying when someone's mask is affecting you hearing them clearly.
Sure, her reason for wearing a face covering is different from the rest of us - but the effect is the same. Do the same as you would for anyone else wearing a facemask, and don't worry. You're not discriminating against her, because you're applying the same criteria which you've been applying to everyone else for the last 18 months.
Upvotes: 4 <issue_comment>username_7: You could try asking in a way that does not single out any particular person, and that puts any "blame" in places that won't make students feel defensive. For example:
>
> Hi students. I'm an old person with old ears. And when
> I'm up at the front of the class there are air circulation fans up here in the room
> and in the overhead projector and so on. (Or in my desktop computer in
> these times of the COOF and zoom meetings and such.) So please try to
> speak loud enough for me to hear you. If I say "sorry, please say again"
> I am not trying to be unpleasant. It's really that I cannot
> hear you. If it happens, please try speaking a little louder.
>
>
>
This is a thing that can cause friction between many cultures. And I'm not sure there will be a one-size-fits-all solution. But blaming yourself and the room is a good strategy.
Upvotes: 3 <issue_comment>username_8: We don't hear your student, but it may be an issue *regardless* of any mask or veil, one that is independent of the loudness of a voice: sloppy accentuation and lazy articulation. In my observation, recent confinements and extended periods with fewer in-person discussions have only worsened the situation for many of us, even with the «mask off».
So, purchase a bunch of corks e.g., from the arts-and-crafts section of your DIY shop where you get clean ones, which never touched any beverage, nor alcohol:
[](https://i.stack.imgur.com/OrxZN.png)
([picture reference](https://rads.stackoverflow.com/amzn/click/com/B07YCZGXMN))
Then get *all the students* to speak with them (e.g., [French demonstration](https://www.youtube.com/watch?v=6g3JqnEMrYY), [English example](https://www.youtube.com/watch?v=C0g9oXR0GPI)) since we all benefit from proper articulation. It is a training seen e.g., among students of music and theology. Just think about preachers who need to be both audible and intelligible in houses of worship till the rear benches and can not rely on the presence of microphone and loudspeaker.
Upvotes: -1
|
2021/08/12
| 926
| 3,324
|
<issue_start>username_0: I found a paper in a top journal in my area [here](https://academic.oup.com/rfs/article/32/7/2587/5079300?login=true). I have a question regarding an apparent inconsistency in how citations are displayed. For example, in some places they use **(author year)** and in others **author (year)**, within a single paper, as cited below:
>
> The breakdown of collusive activities that involve higher prices and
> restricted output is likely to result in the expansion of production.
> It might also lead to technological change: colluding firms have fewer
> incentives to innovate, especially when they face little threat of
> external competition **(Vives 2008)**
>
>
> Indeed, **Xu (2012)** examines the effect of higher import penetration
> (instrumented by tariff cuts and exchange rate changes) on leverage,
> and finds that leverage drops even controlling for current
> profitability.
>
>
>
I am using Mendeley (Mendeley-Desktop-1.19.8-win32). When I cite a paper, it will display as **(author year)** automatically as below:
[](https://i.stack.imgur.com/eTH87.png)
If what the author did above is right, how should I modify my Mendeley to make a similar display? It is possible to manually adjust the reference display for every single citation, but this doesn't seem to be an efficient approach.<issue_comment>username_1: In the first instance, the authors of the paper in question simply cite the source. In the second instance, they actively name the source with the author in question (Xu in your example) being the subject of the sentence, which is why their name is not in parentheses. To avoid this seemingly inconsistent citation (although it isn't that inconsistent after all), you could also write it like this:
>
> Indeed, **Xu** examines the effect of higher import penetration (instrumented by tariff cuts and exchange rate changes) on leverage, and finds that leverage drops even controlling for current profitability **(Xu 2012)**.
>
>
>
Like that, you would not have to change anything in Mendeley either.
Upvotes: 2 <issue_comment>username_2: Their citation style is correct. The rule is that if you are just giving a citation (as in their first paragraph) and the names of the authors do not form part of the sentence, then the whole citation appears in parentheses.
However, if the names of the authors are to be read as part of the sentence (as in their second paragraph), they appear outside the parentheses and only the remainder of the citation is inside the parentheses. This is to avoid redundancy in e.g. "Indeed Xu (Xu 2012)..." which would be the other option.
See, for example, [the Mendeley Harvard referencing guide](https://www.mendeley.com/guides/harvard-citation-guide), which gives examples:
>
> When citing a source with two or three authors, state all surnames like so:
>
>
> Mitchell, Smith and Thomson (2017, p. 189) states... Or
>
>
> (Mitchell, Coyne and Thomson, 2017, p. 189)
>
>
>
What you have in "apart from that (Dong, Massa and Zaldokas 2019) prove..." is incorrect. The sentence should make sense even when what is inside the parentheses is removed; even ignoring that rule, (Dong, Massa and Zaldokas 2019) is a paper, not a group of people.
Upvotes: 4 [selected_answer]
|
2021/08/12
| 641
| 2,355
|
<issue_start>username_0: A professor has invited me for an interview for an internship he is offering. He has asked me to present a paper (that he chose) for 30 minutes.
The paper he gave me is an object detection paper, which is not exactly easy for me.
I am quite nervous about what his expectations of an intern would be.
In general, am I expected to know everything from the paper and be well versed in the field?
|
2021/08/12
| 801
| 3,451
|
<issue_start>username_0: I am a little more than a year away from my funding running out (not taking into account any Covid-related extensions which may or may not be granted and thus can't be relied on). Hence, I would consider this date the target end date for my PhD after which I would like to move on from my current institution.
I have identified a handful of potential advisors for postdocs (in Europe, mostly). How early should/can I get in touch with them? Obviously, I would like to do it as early as possible (as there is a chance that means I can plan better). However, I don't want to do it too early to the point where it would seem weird / be pointless.
I am in science, doing my PhD in Europe, if that helps.<issue_comment>username_1: One year in advance is definitely a good point in time to reach out to potential post-doc supervisors.
* Some of them might in the near future have funding available that they could use to fund your position, or know colleagues that do. Then it's useful if you're already on their radar.
* Some might be interested in supporting your own project proposal which would lead to a self-funded position - note that it can easily take a year from the start of the proposal writing process to the decision.
Upvotes: 2 <issue_comment>username_2: In the fields and countries I am aware of in Europe, openings for post-doc positions will generally be advertised. There is little to be gained by cold calling potential advisors to look for one. The more likely outcome is annoying them (especially if they don't know you).
Of course, there is nothing wrong with letting your network know that you will be on the job market, and asking them if they know of any upcoming openings. Also make sure you are on any relevant mailing lists through which positions in your field may be announced (e.g. COST networks if they exist).
More productive may be to look for any open calls for fellowship applications that you may be eligible for. For example, the Marie Curie program of the EU, but also look at what the national funding agencies in the countries you want to go to may have to offer. You can approach potential advisor with the question if they want to sponsor your application. This shows initiative from your end. And even if the application(s) end up failing, it will put you on the radar of the potential advisor for any openings they may have in the future.
Upvotes: 0 <issue_comment>username_3: I would advise you to start as early as possible once you know in which direction you want to go.
Even if, for now, that means expanding your network more than actually talking about postdocs.
I am a few months from my dissertation (with no date yet, so it might be a year from now given the Covid situation) and I have already started contacting potential post-doc advisors. So far, I have not received any negative feedback.
It is a good way to show your interest in their research and your interest in working with them. Some will tell you they will retire soon and they might redirect you to someone else. Some will inform you that they are starting funding applications soon.
It is a good way to see who works with whom and to build that network little by little.
If you want to plan strategically, I'd say you could start as soon as you know when you will have your dissertation, and a few months before the funding calls so that if they want to work with you you could already work on funding applications.
Upvotes: 2
|
2021/08/12
| 321
| 1,226
|
<issue_start>username_0: I got used to using 'i.e.' and 'e.g.' to give more details or give examples, but I have been told that I overuse them. When is it appropriate to remove them? When should I absolutely use them?
I feel I should use it all the time if I use one of those at least once in a manuscript.<issue_comment>username_1: If you are told by an editor or a reviewer that it is too much, then make a change to satisfy them.
Otherwise don't worry too much about it, though it is a good idea to keep such things as overuse of any sort of technique or phrasing in mind.
The key is that the language should flow. If it gets stilted in any way, make an adjustment. We all probably need to improve.
But note that taking them all out and replacing them with "that is" and "for example" might seem just as stilted.
Upvotes: 3 <issue_comment>username_2: You are writing in English, not Latin. You should avoid using e.g or i.e. That's what my high school English teachers told me.
If replacing them with "for example" and "that is" makes your prose too stilted, then you need to reword your sentences.
Writing is not easy. Unfortunately, artful writing is not particularly rewarded, so few academics bother.
Upvotes: -1
|
2021/08/12
| 3,720
| 15,800
|
<issue_start>username_0: Last term I was supervising a very good student for his B.Sc. graduation project. Upon his request, I suggested the topic and gave him a clear reading list, tasks, and a roadmap for the entire project. I know that had he followed it, we would have ended up with a paper (or at least achieved some preliminary results that could become a paper later on), and he would have acquired many new skills (I have supervised other students the same way). But this one was very disorganized and rebellious. For example, he would repeatedly read something totally outside the scope of the project and want me to discuss/explain it to him (he's interested in the field of research I'm working on), and he would skip some of the weekly meetings and not write his reports, with the excuse that he wanted to read more, etc.
He's a smart student and can learn on his own, but he does not have any experience in research and he does not understand why I am managing the project like that, even though I explained it several times. Eventually, he dropped out of the term for personal reasons. Now he's contacting me to see if he can work with me again.
The student is smart as I said and I can see he is keen to do post-graduate studies in the field I work in, but I find his personality very unstable and my experience with him is very unpleasant.
How do you support such a student? Or should I do it? Or do you think it's better to avoid him?<issue_comment>username_1: If your experience with the student has been very unpleasant in the past, then maybe you should discuss the problems and expectations directly with him this time.
The student is smart, and that's a good thing, but I think personality, manners, discipline, etc. are also very important. Being smart alone won't take you very far in academia if you can't respect others and their opinions, don't know how to work or collaborate with others, or can't get on the same page as other people in your team (with regard to research).
If the student is unstable, then some counseling may help, but there's no guarantee that he won't continue to defy your "orders" or instructions in the future.
Despite your unpleasant experience in the past, if you're still willing to work with him, then maybe you should talk to him first before taking him on as your student, clearly get to know his intentions and future plans, and establish clear protocols for how the two of you will work together. Being a little strict this time might work. Also, if you're not really comfortable with the student, then chances are it's going to make your life difficult and build up some unnecessary pressure on you.
By your description of the student, it seems to me that he gets distracted by other things easily and that might be a bit problematic in completing the project and meeting the deadline.
Upvotes: 3 <issue_comment>username_2: **My answer:** Set clear expectations for the working relationship, and move on if the student's expectations do not match what you can or want to provide.
I would recommend meeting with this student to discuss their goals and expectations. Why do they want to work with you? What do they expect to get out of this experience? In their opinion, what can you do to help them achieve their goals?
During this conversation, you should also express your expectations. What do you expect the student to learn? How can you help the student achieve the goals they have expressed? What are **you** expecting to get out of this working relationship?
If it becomes apparent that the student's expectations do not match up with yours, you can try to work towards a compromise. However, if there is no suitable compromise that you can both agree on, it is okay to move on from the relationship (provided you are not contractually obligated to keep mentoring the student for some reason).
It is important to be flexible with your mentees, because every individual has their own goals and working methods. On the other hand, mentorship is not a one-way street, and you do not have to bend to the whims of your student if they're asking for something you cannot provide.
Upvotes: 4 <issue_comment>username_3: The style you offer is a style that works for certain types of methodical students. You impose a quite rigid learning style on the student. For creative, fast, curious students, this may be quite a tedious process and demotivating. They need something where they can roam freely, at least at the beginning - the tricky part is to get them to converge on a concrete topic and outcome towards the end.
My recommendation is to develop a topic area with the student, brainstorm, direct them to resources, and let them define (with guidance) possible outcomes of the work. Let them run with it and be available for feedback and guidance, but do not force it. Only if the project does not go well after, say a third or half the time, you start constraining the search and impose a more constrained approach.
I would also state this upfront so the student understands that the freedom to roam freely comes at the cost that it needs to show results, the alternative being a guided project, so that the conditions of the collaboration (this is how I would treat it) are known upfront and they can decide whether or not they wish to work this way.
Smart students going off on their own can spectacularly fail (that's why you need to catch it mid-project in case you need to invoke a Plan B) or they can spectacularly succeed.
Upvotes: 7 [selected_answer]<issue_comment>username_4: I'm going to interpret the unpleasantness you are experiencing as something that comes only out of the factors you list, so I will assume that there is no unstated cause of unpleasantness in regard to the personality of the student. If that is correct, I agree with other commentators who suggest that the style of instruction you are using might not suit this student, and you might therefore have a more pleasant experience (and get better results in the long-term) if you use an alternative style.
I speak here as someone who recognises myself in this student. I was a bright student, but ever since high school I was *always* reading/learning something different than what I was supposed to be reading/learning, and merely treading water in my formal classes and projects. I was definitely also "rebellious" in the sense you describe, and I was not the easiest student to supervise. The self-directed learning style I used had pros and cons: it came at the expense of difficulties in formal courses and projects (and lower grades than peers with similar ability), but it also meant that over time I found that I had developed much broader knowledge and skills than most of my student peers, even the ones who got better grades than me.
I recall that in the early days of my PhD candidature I got a bit frustrated with my supervisor giving me lots of regular homework on a topic which I had only mild interest in, which was making it difficult for me to find time to read and learn things of interest to me. He was a good supervisor and was doing roughly what you are doing --- trying to drill me with work that he saw would lead to a particular outcome. We had a nice chat and he agreed to drop down the homework and give me "more rope" in my explorations. As to whether this was good for me, well, I led myself down plenty of blind alleys (see e.g., [here](https://academia.stackexchange.com/questions/159450/159452#159452)), but I probably couldn't have survived my candidature any other way.
In this case, I would recommend you consider whether you can "work with" a student like this in a way where you do not feel a personal stake in the outcome. Your obligation is to be clear about the course requirements (i.e., what the student needs to produce in order to pass the course and graduate), to assist the student, and to apprise the student of their progress relative to expectations. If a student wants to pursue work with you, and be largely self-driven in his direction and progress, you can give him "enough rope" to do this, but warn him if he is falling behind on the material that is necessary for the course. Intelligent but rebellious students are usually intelligent enough to knuckle down with non-preferred work when it becomes necessary to jump a formal hurdle, and if they fail to do this, the resulting failure is also a valuable learning experience.
Upvotes: 4 <issue_comment>username_5: At the end of the day, it is the student's project, rather than yours. It is their opportunity to show what they can do, so being overly prescriptive in what they are supposed to do each week limits their ability to demonstrate their capabilities. If a student seems able to direct their own study, then it is often worth letting them have more control. I tend to view myself more as a project advisor than a project supervisor/manager. I think it is fine to give students advice and guidance, but it is their choice whether they follow it or not (especially if the marking criteria have been clearly communicated). However, the real problem is that students often are not good judges of their own capabilities. Unfortunately, some people are only able to learn by trial and error, and they do need the opportunity to fail every now and again so they can learn the lessons they need to learn, but with the safety net of an adviser who can drag them back if they are heading for *complete* failure.
On the other hand, the student does need to show that they are capable of engaging properly and allowing the people they work with to work with them. That means that it is not O.K. for them to skip meetings, he does have to write the reports that are required of him, and his communications with you need to be polite and professional.
Upvotes: 3 <issue_comment>username_6: The fact that this student left mid-term for personal reasons suggests to me that he is under strain. I don't know the details, obviously; it could be depression, anxiety, insecurity, family troubles... But suffering of this nature can have a dramatic effect on focus, concentration, enjoyment and motivation, mental stamina, and other cognitive attributes. Perhaps the fact that he's back means he's recovered somewhat, but these issues generally do not disappear overnight, and you can expect a resurgence if he finds himself under stress again.
I wish more people understood that the academic mindset — the way of thinking that we cultivate in ourselves and others — is unavoidably cold, clinical, ambition-oriented, and alienating. We tend to objectify everything, and process everything either as a teleological system to be worked through or an analytical problem to be dissected and resolved. That mindset is something separate from intelligence; it's more a matter of socialization than aptitude. Some students take to it like ducks to water; others (often the more sensitive, intuitive students) can find it brutal and hostile. The project you reference was a fairly typical move in the inculcation of that mindset. You gave him a topic, a structure, a set of short-term goals to meet, and a potential reward in the possibility of a publication, and then sent him off to meet them on his own: to 'prove his mettle', as it were. He couldn't rise to the task (unfortunately), so you've written him off as 'rebellious' and are wondering publicly whether you should bother investing any more effort in him.
As I said: a cold, clinical, ambition-oriented, and alienating way of looking at it.
If you decide to put more effort into this student, you should recognize that what he needs at this stage of his career is to build confidence and a sense of intellectual security. If he feels like a task is strictly performative — something he feels he has to do merely because it's expected of him — he'll likely feel judged/evaluated and lose self-confidence. The fact that he's asking questions and doing reading outside the assigned work means that he's looking for a way to connect to the project on an *emotional* level — to make it meaningful — and that urge needs to be accommodated and encouraged as much as it needs to be reined into the task at hand. I'm not suggesting you should be his counselor or best friend, but he needs a bit more personal, human guidance and interaction than most students, at least until he internalizes the academic worldview.
Not every professor is inclined to do this. Some are too busy, some don't like that kind of personal interaction with their students, and some are such consummate academics that they've lost touch with that supportive, personal, non-analytical way of being. No worries... If you decide not to work further with this student, the best course *for him* would be if you explained directly that you and he are not a good fit emotionally, and recommended some other professor in your department who has a knack for mentoring or supportive guidance. Maybe even set up an introduction; that would be a kindness. Ultimately this student is going to have to sink or swim on his own, sure. But making the water a little warmer in the short term might help him out.
Upvotes: 3 <issue_comment>username_7: It sounds like the issue may be his own personal discipline, not necessarily a bad attitude. Here's the most useful advice I ever got from a professor in college:
>
> You're bright. You're capable. You have great ideas. But *right now* you aren't the one who decides how it works. *I* would like to see you get to that point. I would like to *help* you get to that point, but here's what you need to understand:
>
>
> *You've gotta join the union before you can strike.*
>
>
>
Apparently it is a quote by someone famous-ish. I didn't recognize the quote. But I thought about it a lot: I had to "pay my dues," "earn my chops," or whatever other euphemism.
So, I did. I didn't really need help having a good attitude, but I needed help seeing the value in conformity and submission. Sometimes it felt stifling--even oppressive. But I saw that I needed it and just did my best to get my wilder inward bits under control.
So, my advice has two parts:
1. If you're willing, welcome him back. Make it clear that you want to help him, but it's conditional. "*I want to help you succeed in this field. I'm older and wiser than you are about it-- **I've done it**. So you need to make a choice: either take me on as your academic advocate, which comes with doing what I say without quarrel, or go find a different advocate, if you can.*"
2. Resolve yourself to either choice.
* If he wants to work *under* you, take him on. Let him know he has to get his work done. Give him three strikes per semester or something (miss a third report and he's out of the group/class/program). Set the boundary and hold it. He'll be the better for it. *Don't give him a fourth chance.* If he makes it through, give him *two* strikes the next semester.
* If he isn't up for it, let it go. Don't think about him again. Be willing to give him a new opportunity if he asks again in a couple of semesters.
Upvotes: 2 <issue_comment>username_8: Although it's a tricky area to explore, be alert to the possibility that the student has undiagnosed issues and this isn't just "acting out".
What you describe is very close to a student friend of mine: very smart, but he constantly asked unconnected questions, missed meetings, work, and reading while making excuses, was extremely disorganised, and appeared to be a rebel...
Six years later he was diagnosed with severe ADHD and put on appropriate medication, and he now says that if he'd been aware of it at the time, it would have made a hell of a difference.
So just be aware that not every acting-out student is doing so because of a problem they can control.
Upvotes: 2
|
2021/08/12
| 833
| 3,469
|
<issue_start>username_0: I am a grad student at one of the University of California campuses. In my offer letter, it stated:
>
> Congratulations! On behalf of the Committee on Admissions and Awards
> and the Graduate Group in (Department name), I am pleased to offer you
> the following financial support for the upcoming 4 years. The funding
> will be in the form of a Graduate Student Researcher, Fellowship and
> Teaching Assistant...The
> details of your award are as follows:
>
>
> Summer Support with Professor X (3 months at $5000.00 per month)
> $15,000.00
>
>
>
I just got my first summer paycheck and the amount is only around $4,000 (before tax). I sent some emails to the grad coordinator and some other parts of the university, but they didn't answer. I also sent an email to the professor, and he answered, "The pay check amount is true. Sorry, there is no more fund available for you at the moment!"
How is this possible? Can a professor behave like this? Why does the university's offer letter, signed by the department chairman, specify one amount while the paycheck is for a different amount? How should I proceed?<issue_comment>username_1: I'm at the University of California and know a few things about how things work there. Your story is a bit too strange to be fully believable. The overwhelmingly most likely explanation is that you are not in fact getting paid less than what the offer letter said, but there is some miscommunication or misconception about what the offer letter and/or your payroll says. This may involve taxes, or some difficult-to-parse legalese or accounting language in the offer letter and/or the payroll printout.
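For what it's worth, a quick back-of-the-envelope check shows how withholding alone could turn a $5,000 gross into roughly a $4,000 paycheck. The deduction rates below are purely hypothetical (actual UC payroll deductions differ); the point is only that a roughly 20% total deduction matches the observed gap, so it is worth double-checking whether the $4,000 figure really is pre-tax.

```python
# Illustration: how payroll deductions can shrink a promised gross
# salary into a noticeably smaller paycheck.
# All rates below are hypothetical, not actual UC payroll figures.

def net_pay(gross, deduction_rates):
    """Return take-home pay after applying each deduction rate to gross."""
    total_deducted = sum(gross * rate for rate in deduction_rates)
    return gross - total_deducted

promised_monthly = 5000.00
# Hypothetical deductions: federal withholding, state withholding,
# and a FICA-like payroll tax, totaling 20%.
hypothetical_rates = [0.10, 0.0235, 0.0765]

paycheck = net_pay(promised_monthly, hypothetical_rates)
print(f"Promised gross:   ${promised_monthly:,.2f}")
print(f"Illustrative net: ${paycheck:,.2f}")
```

If the payroll printout shows a gross of $5,000 with itemized deductions bringing it down to ~$4,000, the offer was honored and only the "before tax" assumption was wrong; if the gross itself is $4,000, that points to a clerical error instead.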
A slightly less probable, but possible, explanation, is that there was a clerical error in the entry of your salary details into the system, that led to you in fact getting paid a different amount than you are supposed to. This actually happened to me when I started a postdoc (also at a UC school), and was easily corrected when I pointed it out.
The least likely explanation is that they are intentionally paying you less than they promised. In the UC system this would be essentially impossible to get away with, and would lead to quite severe consequences for anyone who was complicit, such as disciplinary action. Whatever the consequences to the people behind such a decision, I believe the university would not allow this to happen, and would move mountains to ensure that its commitment to you was honored.
Please talk to your department contacts to get this sorted out. The graduate program coordinator, graduate program chair, department vice chairs and chair, and department business manager would all be good places to start. Good luck! (And if it’s not too difficult, come back to update us on what happened… :-))
Upvotes: 4 <issue_comment>username_2: Before doing anything else, I would suggest checking with HR. They may simply have made a mistake...
Upvotes: 1 <issue_comment>username_3: Your letter of offer is a binding contract between you and the University. They must pay you the money you were offered for the work described.
Pursue your options inside the University, i.e., the Department, Personnel, Payroll, Dean's Office, etc.
If you do not get a correction from the University you can pursue legal action. You will need to consult a lawyer for that. The cost of a lawyer should be small or nonexistent since they will ask the University to pay those fees.
Upvotes: 0
|