2019/04/28
<issue_start>username_0: The [University of the Sunshine Coast's website](https://libguides.usc.edu.au/openaccess/vanity) gives the difference as:
>
> Vanity publishers are publishers that will charge the author a fee for publishing a book ...
>
>
> While similar to vanity publishing, predatory Open Access publishers seek to take advantage of the Gold Open Access model, whereby the author pays to have an article available as Open Access on the journal website.
>
>
>
This sounds pretty trivial: vanity presses are presumably willing to publish anything, including journal articles, if paid.
If there is a difference, what is it? If there is no difference, why did <NAME> create a new term?<issue_comment>username_1: In principle, both are going in the same direction, but the aims are slightly different:
* **Vanity publishers** publish (nearly) everything, and the authors usually want to get something out, in an academic-looking style, that they would otherwise not get published - e.g. homeopathy, or personal views that are not supported by data and/or contradict current knowledge. They serve the personal "vanity" of the authors. Sometimes money is not the primary focus of such a publisher (though often it is); getting a certain view across matters more. In a department where I worked before, one old professor (close to forced retirement) used such a publisher to get out a (quite weird) book about how to reconcile the existence of God with evolution. But yes, as stated in the comments below, they can also be used for publishing harmless books/papers for family use, e.g. a family tree.
* **Predatory publishers** aim to make as much money as possible from academics by accepting everything and charging open-access fees. This may include legitimate science, but much of it will not meet the necessary standards. Here the primary focus is the money, rather than the authors' vanity.
Upvotes: 2 <issue_comment>username_2: I've always seen the terms as somewhat distinct.
A predatory journal seeks to deceive and create the impression of publishing a peer-reviewed article. I take the predatory journals to always be engaged in a conspiracy to defraud the academic process (where authors can range from stooges to willing co-conspirators). Here, the reason why authors pay *can be* imitation of Gold-standard open-access **or** it *can be* pay-to-play to defraud their employers.
In contrast, vanity presses are sometimes just really low-tier publishers without malice. I don't think they always are trying to create the impression that what they produce is high quality academic work. The failing from an academic perspective of a vanity press is that they don't have robust (or any?) standards for what they will publish in book form and they don't engage in academically sufficient (or any?) editing practices of what they publish. The product on offer is you can say "I published a book". I think the business model here is that vanity publications don't make publishers money, so they instead charge the authors to break even (or make a profit?).
Regarding your point about a vanity press's willingness to publish an article: sure, they can print it up for you -- as a book -- but I severely doubt they are going to make it seem like it's volume 4 of the international journal of basketweaving and wickerwork.
I'm sure it varies by field, but in my field as I understand it, journal articles in top-tier journals tell us far more about the quality of someone's work than even books in top-tier presses, because presses publish things of sufficient quality that make them money. As you go down the scale on each side, incentives and motives shift, but the article vs. book distinction remains meaningful.
Upvotes: 4 [selected_answer]
2019/04/29
<issue_start>username_0: I am studying for a master's in management. It is the first time that I will write a thesis.
I am finding the literature review difficult: I have found several articles, but I do not know what to do next.
Should I read the articles and highlight what seems interesting to me and then re-read them and start writing, or should I read each article and start writing in parallel?<issue_comment>username_1: General advice is below. In your question specifically, it sounds like you need to specify the particular area(s) of interest and specific research questions. Those will constrain what papers/sources to find and how far to look. The purpose is to connect what you've done with what was done before. This integration or synthesis depends on connecting the terms and methods of your work with the existing literature. So, the main task is to search and organise the previous work.
--
LITERATURE SEARCH
The purpose is a broad sweep to identify research that is relevant to a specific area or research question. When getting the scope of a literature, don't read closely, just take notes. Start by reading about the research question. If the area is self-esteem, read the article on Wikipedia to get oriented to the concepts and terms.
Also read about Boolean search logic, truncation, and wildcards. It's especially important to use quotation marks ("") to form phrases. Notice how different your results are in Google when you search social memory vs. "social memory".
If using a University library website, go there and choose the best database. PsycInfo is the central database for psychology. We also commonly use Google and Google Scholar. For medical papers, use PubMed. For economics, JSTOR. For education, ERIC.
Search! If you get too few results (usually <20), broaden your keywords. If you get too many (usually >200), use quotes, different terms, or more terms. The “right” number of results is a tricky issue to nail down. It depends on the research question. Communicate clearly with me about your search terms and database and what you’re finding and I’ll be able to direct you.
Good progress! Now search again using different keywords. For example, a project about randomness might include these different search terms, searched separately or together in various combinations, e.g.: fate, fatalism, causation, causal, cause, randomness, meaning, control, "personal need for structure", "need for cognition".
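If you want to be systematic about trying those terms in combination, the pairings can be generated programmatically rather than typed out by hand. A minimal sketch, using a hypothetical term list that mirrors the randomness example above:

```python
from itertools import combinations

# Hypothetical term list mirroring the randomness example above
terms = ["fate", "fatalism", "randomness", '"personal need for structure"']

# Build OR-joined queries for every pair of terms, one query per line,
# ready to paste into a database search box
queries = [" OR ".join(pair) for pair in combinations(terms, 2)]
for q in queries:
    print(q)
```

Pairs are used here only for brevity; changing the second argument of `combinations(terms, k)` produces larger groupings.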
Document your process. Include the database, search terms, and notes about the search process. Was it easy to answer the question? Of the citations you found, were there many more, or were you scraping the bottom?
When you find a good, relevant article, use Web of Science to check which articles have cited that article since it was published, and look for new material.
Document your findings. Using a Google document or spreadsheet, probably one that I’ve shared with you already, include the citation, abstract, and your summary of why you included this article, the central finding of the article, and any questions you have for me and our team about the citation.
Upvotes: 2 <issue_comment>username_2: As a professor in a French grande école de commerce, I am pretty familiar with the requirements of a literature review for a master's thesis.
I can recommend to you [an article I wrote on the subject](http://chitu.okoli.org/media/pro/research/pubs/Okoli2015CAIS.pdf). In summary, there are eight general steps:
1. Identify the purpose of your literature review. In your case, it is to understand what scholars have said related to the topic of your thesis.
2. Plan the review. Outline the steps that you plan to take to carry out the literature review.
3. Apply the practical screen. That is, clearly decide what kinds of topics you will search for and which you will not search for because they are beyond the scope of your focus.
4. Search for the literature. Try to arrange a meeting with your school librarians to help you understand how to use the scholarly databases and get started searching for articles.
5. Extract the data. Read the articles and use a table to record the most important elements from each article.
6. Appraise the quality of the studies. This step is probably not necessary for a master's thesis.
7. Synthesize studies. Do not just summarize the articles one by one; rather, identify and discuss the most important themes that run through multiple articles.
8. Write the review. Compose and put everything together.
The article linked above has details on each of these steps.
**Reference**
<NAME>, 2015. [A Guide to Conducting a Standalone Systematic Literature Review](http://chitu.okoli.org/media/pro/research/pubs/Okoli2015CAIS.pdf). Communications of the Association for Information Systems, 37(43), pp.879–910. Available at: <http://aisel.aisnet.org/cais/vol37/iss1/43>
Upvotes: 2
2019/04/29
<issue_start>username_0: A friend of mine started a PhD program after finishing his diploma with the best possible grade.
Unfortunately, the institute does not have a lot of resources, so he is being paid from a project fund for three years for a part-time job.
He needs the full time to work on his PhD thesis, however, so he works half the time for free.
This, and his friends starting to work in industry for considerably more money, has made him feel left behind, question his choices, and feel that he is being taken advantage of.
The project that is funding him is capped at three years, so he doesn't think he could work part time to supplement his income.
Are there any options, scholarships for example, for these kinds of situations, or does my friend simply need to reconsider his choices and quit his PhD to go into industry as well?
The field is aerospace engineering.<issue_comment>username_1: 1. Finish the PhD quickly. People with PhDs are paid more than people without PhDs. Finishing the PhD faster will get you to higher pay faster. You will also keep that pay for longer, because you will work more years of your life if you finish your education sooner. (Note: the correlation between education and pay does not always imply causation; also, in certain disciplines the pay is terrible both with and without a PhD.)
2. Quitting the PhD to go into industry is a good choice if making money in the short term is the goal.
3. Switch to a university which provides decent funding. This works best if you are early in your PhD.
4. Apply for outside fellowships. However, keep in mind none of these pay that great.
5. Working a job unrelated to the PhD while enrolled will make the PhD take longer, delaying the time when you are paid more. Do not do this if you can avoid it.
Upvotes: 3 <issue_comment>username_2: <NAME> had great ideas (especially 1).
I will add one tactical $$ idea: military reserve service (try for some quick officer program). It is a pretty nominal time commitment. The extra income will not replace a stipend but when added onto a stipend makes a big difference in disposable income. It also interfaces pretty well with grad school in terms of time and flexibility (one weekend a month and 2 week stint some time during the year, usually summer).
In the US (and most countries), schools/employers "have to let you do it", so there is less possible conflict versus moonlighting. It can also be a nice mind break (especially the 2 weeks) versus your school, lab group petty politics, etc. And probably some self esteem.
Upvotes: 0 <issue_comment>username_3: Your friend could find freelance work. Here's [an example](http://www.cactusglobal.com/careers/freelance#freelance). Cactus is a company that provides scholarly services (such as editing papers) and they promise clients that their editors have advanced degrees. Your friend will probably qualify. Warning: this kind of work isn't going to be trivial, and your friend is not going to get paid unless he puts in the work.
If your friend is questioning his life choices and envious of his peers who are earning more money in industry, quitting the PhD and joining them is the obvious option.
Upvotes: 1 <issue_comment>username_4: If your friend is in an industry oriented research lab, why not take summer internships? I paid my way through PhD by taking summer internships in industries related to my field, paid way more than academia (and had a host who tried to get me to question my academic life and join him on the dark side of industry...). Does the university offer teaching positions? This is a relatively easy way of making extra income while bolstering your academic CV.
Upvotes: 1
2019/04/29
<issue_start>username_0: I have been teaching for three years now. This year, for the first time, I caught multiple students in a class (out of 50) either handing in the work of a colleague or copying the majority of someone's work.
Now I've started to question myself whether this could be on me in any way. Could I have caused this behaviour by something I said? Am I getting paranoid, or was this just a coincidence or the usual abnormal behaviour of some students?
For more context: All students in this class are going to be teachers. In the first lecture I mention some e-learning resources, including the online library of all master's theses of the faculty.
*Additional information to answer some comments (some comments are now in chat): This is a course at university level. In this University cheating is -- on paper -- taken as a serious offense. Meaning, if you get caught cheating it will be noted in your examination record and you have to repeat the whole class. Apparently, not all students know this.*
*I mention the online library in this course because we are discussing electronic materials in this course. One topic of the course is using "digital tools" for teaching. For me this includes using existing resources for preparing classes.*
*I checked my slides from the introduction. I did not have a "do not plagiarise" note in them, although I believe that this should not be necessary.*
*I have to admit that I might be inviting this, since I do not change the tasks between semesters. The reason for this is that I quite like the tasks, and I have the feeling that their output will be useful to the students in the future.*<issue_comment>username_1: There is no excuse for plagiarism.
They certainly know that they cannot just copy someone else's work (not even in parts) - especially if they want to become teachers themselves!
I like to give a few slides at the beginning of the term that make this very clear:
>
> If you copy, you will get a score of 0 for this assignment and I will watch
> all your following ones very closely (in big, bold red letters).
>
>
>
Upvotes: 3 <issue_comment>username_2: I suspect that in a group of 50 students there will be a few who want to cut corners. It isn't your fault, exactly, but there are some things you can do to make it less likely. If the number is small, you can deal with it individually in your office, of course.
But you should consider why people feel that cheating of any kind is a viable option for them.
If the tasks you set are all very high risk then people will sometimes act badly out of fear. This goes for both assignments and for exams. If you permit re-work to improve grading on assignments you lower the risk and improve the chances of proper behavior.
If the risk of cheating is extremely low, some will do it out of laziness. I once had a group whose experience previously was that no one actually looked at their work, so it didn't matter much what was turned in. I had to convince them (and the Dean) that I was willing to fail everyone if they kept up that behavior, and also convince them that I would look at and comment on their work. But if they don't get individual feedback, their work actually has little value to them for learning. With 50 it may be difficult to give this feedback, of course, though it is (IMO) essential.
Some students have gotten the idea that the reason that you set a task is to get the "proper" answer, rather than to help them learn. So their focus becomes getting that answer, even if no learning occurs. You need to find ways to educate them about the nature of education - especially as they will be teachers. It sometimes surprised a few of my students that I didn't ask them to do things because I needed the answers. I could provide my own answers. It was the production of the answer that was important, not the answer, and the answer could actually be wrong if I could use it to educate (re-work, feedback, ...)
There are other tricks that can be used to lessen the likelihood of plagiarism. Don't use old exercises if you find students turning in old solutions, for example. But permitting, even requiring, teamwork can, in many cases, lower the risk of plagiarism, as well as make education more of a social process. It also helps solve the 50-student problem if students work in groups, or even pairs.
In case of paired or group work you need a way to do peer evaluation, of course, but it need not be meaningless or threatening.
There are also strictly punitive measures, though I try to avoid them. Giving zero credit for all parties when plagiarism is encountered can be effective. So can expulsion for repeat offenders.
Upvotes: 6 <issue_comment>username_3: *Could* you have caused it? Quite possibly: one can certainly imagine something you did that made it more likely for your students to plagiarize, and proving that you could not have caused it would be very hard.
*Should you feel responsible* though is a different question. I will say **no, you should not**. Your students are (presumably) adults, in which case they are responsible for their own actions. Even if you specifically told them that they should plagiarize, this [does not necessarily absolve them of responsibility](https://en.wikipedia.org/wiki/Superior_orders).
Having said that, it's easiest to sidestep the issue by telling students not to plagiarize in the first class. Mind you, this doesn't mean that [some students won't plagiarize anyway](https://www.cbsnews.com/news/cheating-in-the-heartland/) (some parents in this case even defended their children), but at least now they cannot plead ignorance.
Upvotes: 2 <issue_comment>username_4: Also consider whether you are perhaps 'forcing' them to do the tasks - e.g. by grading whether all the tasks are done instead of their skill (which admittedly is hard) when they don't have enough time on their hands. As a CS student I usually had more tasks to do per week than could be done in the time of a week, so having to prioritize was a given.
One way to reduce the stress was to make sure all the "you must hand this in" were handed in in the quickest way possible, so I had time remaining to actually study instead of doing exercises that would have been nice to have completed, but were not efficient regarding how much time they took.
Disclaimer: I did not plagiarize and I do not defend plagiarizing. This answer is simply focusing on one aspect of how you could possibly have caused or at least enhanced the likelihood of students cheating.
Upvotes: 2 <issue_comment>username_5: As a tertiary educator it is absolutely not your fault. All your students would have been taught not to plagiarise when they were in high school, perhaps even in primary or middle school. Your university or college would have a student handbook which would say that plagiarism is a serious academic offence.
I personally wouldn't think it is even necessary to say once in your lectures that your students are not to plagiarise. Of course it's not a problem to say that if you do want to. If you're not sure, you could consult with other staff or your head of school.
Upvotes: 2
2019/04/29
<issue_start>username_0: I have mailed to a professor and got the following reply in half an hour:
>
> Thank you very much for your interest in the position and your application. We will start evaluating applications today and will let you know the result of the first step of the evaluation process in due course.
>
>
>
Do I need to reply to this mail, if yes then what could be decent reply?<issue_comment>username_1: I doubt that any reply is expected. It seems to be a general response and may not have actually come from the professor, but from his/her office, instead.
I wouldn't expect much of any response until the deadline for application has passed. I suspect that viable candidates will then get further information about what else might be needed.
But if you haven't completed your application by submitting required materials, it would probably be good to do that soon.
Upvotes: 6 [selected_answer]<issue_comment>username_2: Do not reply
------------
That email can be categorized as a non-actionable notification email.
If you respond then you might paint yourself as a desperate, rude, or oblivious person.
* Desperate: You seek unnecessary affirmation
* Rude: You do not trust the "due course" which they mentioned
* Oblivious: You fail to understand that they are busy and would rather not receive inane follow-up emails
If you do not have a specific goal in your reply such as including crucial information which your application lacked then it's simply unneeded.
Upvotes: 5 <issue_comment>username_3: The letter clearly states "don't call us, we'll call you".
Upvotes: 0
2019/04/29
<issue_start>username_0: I submitted a paper for publication. The editor accepted it but asked that I remove some results to make the paper shorter. He suggested that I put the full paper with the full results in arXiv, and refer to it from the published paper.
My question: if I do this, can I later send the trimmed results to a different (lower level) journal? I.e, are they considered "published" since they appear in a preprint that is referred from a published article, or "unpublished" since they do not appear in the published article itself?
EDIT: in my field, in general, it is allowed to upload preprints and then submit them for journal publication, so the preprint itself is not an issue; the issue is that the preprint is referred to from a published paper, so it might be considered an "appendix" of that paper.<issue_comment>username_1: This depends a lot on what you put in the other paper, on your field, and on the journal. If the other journal permits pre-publication and your paper holds together for that journal's reviewers, you should be fine.
It isn't especially uncommon to mine several papers from the same research, covering different aspects.
But don't neglect that the full paper might be acceptable to another high level journal with less strict page limits.
Upvotes: 0 <issue_comment>username_2: I suppose you could. When submitting the paper with the "leftover" results, you'll probably want to explain in your cover letter what's going on, so that the editor doesn't think you're double-publishing those results.
However, I don't think it's a very good idea. People usually expect a one-to-one correspondence between preprints and published papers. Having results from a single preprint going in two separate papers will confuse people and make it harder for them to find and properly cite your work. So if you are going to have two papers, I think you should have two preprints.
So first, decide how you are going to split up the results among papers. Update the preprint of your current paper so that it matches the content (and title) that will be in its published version. Then submit a new preprint with a new title containing the leftover results, corresponding to your second paper.
In your current paper, you can then cite the second preprint.
Upvotes: 3 <issue_comment>username_3: Instead of publishing the "trimmed results" in a different journal, you may want to consider presenting the "trimmed results" in a conference, one that has published proceedings.
Upvotes: 0 <issue_comment>username_4: The important distinction here is that the results, while made public, were *not peer-reviewed*. From IEEE guidelines:
>
> (...) Authors should only submit original work that has neither appeared elsewhere for publication, **nor which is under review for another refereed publication**.
>
>
> (emphasis mine)
>
>
>
While they don't make it unambiguously clear, the second part of this sentence is a strong suggestion that *published* only applies to papers that have been published as a refereed (i.e. peer-reviewed) publication. Guidelines from other publishers are similar.
So, if, as others suggest, it is not possible to add an appendix or supplementary material to your accepted publication, I would suggest following the Editor's advice with one small (and not very tangible) change. **Publish the full results**, not as a pre-print, but **as a technical report** *under a different title*. Remove the old pre-print and have only a matching pre-print sharing the title of your accepted paper. Refer to your technical report as you would any other work, *including the URL* in the citation (as this is a "web resource", and not peer-reviewed).
Since you may not resubmit the work only if it has been published as a refereed publication, which your technical report will not be, you will still be free to re-use those results when trying to publish in a refereed journal or conference. You should, in that case, *still cite your technical report*. This not only encourages consistency, but additionally, publishing all the results as a technical report also *"time-stamps your name"* on it, removing the possibility (or at least, giving you a well-documented trail) of your work or results being scooped.
This practice is something I've occasionally seen in my field. The last example I've seen was of a group that developed a new theory, and then also applied that theory to a number of specific problems that were still of fairly wide interest to the community. The theory, with selected examples that validate and demonstrate it, was published in a high-ranking journal. The application of the theory to several specific problems was "published" only as a technical report (on their university/institute web pages), which was cited in the journal paper. This would also allow the authors to reuse any of those results (from the technical report) in their subsequent publications; e.g., they might want to more deeply explore the implications of applying the technique to one of the specific problems covered in the technical report, in which case it would be perfectly fine to include those results (previously published in a technical report) in such a new paper.
Upvotes: 1
2019/04/29
<issue_start>username_0: I'm doing research for my PhD thesis based on a sensitive subject related to biomedical applications. In fact, our workflow is:
1. Build a computational model
2. Verify the developed computational model based on tests and data available in the literature
3. Apply this developed and verified model to some other data to measure an important parameter for clinicians, one that people's lives will depend on.
4. Investigate the result of this application on that data as well as its outcome and relevance for clinicians.
My problem is the fourth stage in this workflow. First of all, nobody has ever done the fourth stage in this workflow for this particular application before. There are some similar models in the literature that tried to investigate the outcome of applying a similar model to that application, but their conclusions are so general and vague that a definite conclusion cannot be drawn. When we apply the developed and verified model to that data, it produces some results which may look counter-intuitive at first, but there are a few papers in the literature that actually confirm similar observations. These results are not bad, but they kind of look like a negative result. We are confident in our results because this model has been validated and verified on several independent cases.
Unfortunately, in my PhD adviser's eyes, these results are worthless because they are not desirable, and he thinks nobody will buy these results if our conclusion is counter-intuitive (well, counter-intuitive based on his thoughts, at least...). Every week in our group meetings, he reminds me that these results are worthless and that I should change the developed model in a certain way. He doesn't give me direction regarding how I should change the model, but it is important to him that we get intuitive results right now.
I feel like he is forcing me to search for his desired results. It is possible for me to do that, but I believe that would be cheating, or could be called hiding the truth. My question: should I change my model to get the results he desires? If not, what is the proper way to convince him that these counter-intuitive results may be the truth and we should live with them?<issue_comment>username_1: I guess I'm going to have to support your advisor here. Your characterization seems a bit odd for a situation that wants to make clinical recommendations, where people's health or safety may be endangered by bad advice.
You seem to want to imply that your model is a substitute for reality, and even is superior to reality since it has been "validated". But in reality, no model is perfect. No model perfectly captures reality. It is an abstraction from reality of course and makes some simplifying assumptions. Models are simple and reality is extremely messy.
A model that "seems" to predict reality is useful, of course, though it may not do a good job in some cases - edge cases in particular. But perhaps "his desirable" results, simply means "sufficiently matches reality".
The validation of your model is *not* proof that it gives good advice in all necessary situations. If you have *any* evidence that it sometimes fails, then it would be extremely unwise, even unethical, to apply it as is in clinical situations - at least without other evidence for the suggested regimen.
A model with little consequence for human health and safety can be somewhat flawed and still useful. But I'm worried about this situation. Perhaps all your advisor is saying is that you can, and should, do better. It seems to me, at least, to be wise advice.
Again, a model that shows *any* evidence of failure in some situations is suspect. If the application of the model is critical, then even minor evidence can be dangerous.
Upvotes: 0 <issue_comment>username_2: Any computational model which tries to model a disease scenario has flaws, since all models try to reduce an extremely complex problem to a simple one as username_1 pointed out in their answer.
Furthermore, your question makes me think that you are working in/with a computational/bioinformatics group. If the results that you present are counter-intuitive, I must side with your advisor on the statement that a study which presents counter-intuitive results will not be well received. Any counter-intuitive results derived via computational models will need to undergo rigorous hypothesis testing via experimental methods to be well accepted by the community.
If you still want to present such findings, you can
1. Avoid any mention of causal links.
2. You can present the results as a secondary finding while comparing your model to other such models described in literature.
3. You can also break up the bigger finding into smaller parts which may be well received by themselves, but not together (Present them independently).
Coming to the part about
>
> measure an important parameter for clinicians that people's life will depend on that
>
>
> Investigate the result of this application on that data as well as its
> outcome and relevance for clinicians.
>
>
>
Results from single academic studies are rarely used as a backdrop for larger clinical applications. Any academic finding, however grand it may be, will undergo control analysis in multiple rounds of replication studies, and then it will be presented as part of a larger landmark review article. Results presented in such a context may end up reaching the desk of a clinician. Even then, they will think twice before applying those results to their patients.
Although it is great to think about the ethical context of studies in basic research, I would strongly advise you to think about your place in the grander scheme of academic research before you attach too much weight to such concerns.
Upvotes: 2 <issue_comment>username_3: **The meaning of unexpected results**
It is important to be skeptical about your computational approach. However, at the same time, **computational approaches are (almost) completely worthless if we just ignore them when results are unexpected** (unless you already have dominant evidence that the result is not only *unexpected* but also simply *wrong*). An exception would be if your approach is in the area of generative models, where a parsimonious model is suggestive of an underlying mechanism, which is not the area of your model: you are trying to do prediction for an unknown case (extrapolation).
The art is in determining whether your *initial model of the world* (i.e., expectation) is wrong or if your computational model is wrong.
In a long discussion in chat, I think we came to a conclusion that in your specific case, it may be that this is an issue of extrapolation to a condition where you do not have truly comparable training data.
**How to stop worrying and learn to love the model**
If you want to convince your advisor, colleagues, peer reviewers, or yourself that your model should be trusted, your next steps are to test the conditions that lead to your result.
Do all the appropriate tests for model convergence in the original training. Check for input parameters that are outside the range of the training set. Use graphical representations of your model to show how inputs map to outputs. Remove or scale variables to test the sensitivity of your model to those changes. And additionally, as your advisor suggests, figure out what it takes to make your model fit the expected result. All of these approaches will help you find whether something is wrong in the model, or support you if something is wrong in the prior expectations.
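For illustration, here is a minimal Python sketch of one such check (the feature names and toy numbers are made up, not from your project): flag any prediction inputs that fall outside the range seen during training, since predictions for those inputs are extrapolations and deserve extra suspicion.

```python
import numpy as np

def extrapolation_report(X_train, X_new, feature_names):
    """Count, per feature, how many new samples fall outside the training range."""
    lo = X_train.min(axis=0)
    hi = X_train.max(axis=0)
    flags = {}
    for j, name in enumerate(feature_names):
        out_of_range = (X_new[:, j] < lo[j]) | (X_new[:, j] > hi[j])
        if out_of_range.any():
            flags[name] = int(out_of_range.sum())
    return flags

# Toy example: "age" of the first new sample lies outside the training range.
X_train = np.array([[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]])
X_new = np.array([[1.5, 5.0], [0.5, 2.5]])
print(extrapolation_report(X_train, X_new, ["dose", "age"]))  # {'age': 1}
```

If this report is non-empty for the inputs that produce your surprising result, that is evidence for the extrapolation explanation rather than a genuine discovery.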
Upvotes: 4 [selected_answer]<issue_comment>username_4: There are many good answers that discuss the topic from "What is the right thing to do in research?" standpoints. Let me give a bad one from practical standpoints.
> My question: Should I change my model to get his desirable results?
Yes, so you can finish your degree on time. Since you are talking about a thesis, I assume you are in the very late stages of your Ph.D. It's too risky not to finish it. If you were at an earlier stage, I'd recommend finding another advisor quickly.
I am not in your field so I cannot judge. You might be the one who is right, but that's irrelevant. From all the stories I've heard and experienced myself, very rarely do I see a graduate student successfully change an advisor's mind. More often than not, these arguments go badly and things fall apart, and the only one who gets hurt is the graduate student.
I was at an R1 university doing computational chemistry and faced a very similar, but worse, dilemma than the one you described. People in that field regularly publish overfitted computational models that have no use other than fitting a few well-known numbers from experimental data. I argued that these models cannot produce useful predictions and provided my own simulation evidence.
Then a few professors from that department, including my advisor at the time, decided to kick me out and called me "not suitable for doing a Ph.D." (There is a lot more detail here, but it's not important for this answer; if you are really curious, search my other posts.)
That was 5 years ago, and now I am in the late stages of another Ph.D. program in a different field. People I know who used to do much worse than I did are mostly postdocs/professors/in senior positions in industry now.
Now, go back and change your model to get his desired results so you don't have to follow in my footsteps. If you feel this is wrong, just quit and do something else you are happy about. Yes, the real world, even in academia, is this unfair.
Upvotes: 1
|
2019/04/29
| 1,169
| 4,807
|
<issue_start>username_0: Weeks ago I attended a lecture/seminar at NYU Shanghai, though I am not a student or an alumnus of it. After the lecture I asked whether I could have a copy of the slides, because the content was so interesting. The lecturer said I'd better email her later and she would send it to me. I searched Google for her name, found her email address, and shot her an email (using my Hotmail address), but I still have not received the slides.
I thought that since she doesn't know me, she might not know that I am not affiliated with NYU Shanghai. But it seems that she doesn't have any reason to share the slides with me.
When I was a graduate student I attended a lecture on open source, and what the lecturer said about it is still whirling in my head: slides that are not allowed to be shared are not valuable enough to be shared. I love such ideas, albeit they are sometimes too idealistic.
But when is it appropriate to ask for a copy of the slides and when it is not?
EDIT after one day of waiting and two additional emails:
Following the suggestion by @WolfgangBangerth, I sent the lecturer one more email and she redirected me to the host of the seminar. I then cold-emailed the host with some explanation of who I am, how I relate to the university, and why the slides are important to me. I have just received the slides from the host. Thanks.<issue_comment>username_1: In my opinion it is always OK to ask for something. One thing to keep in mind is whether the lecture was public. In Austria most lectures at universities are public, so anyone can join and listen/participate (only lectures, not seminars).
A reason for you not getting the slides may be that the professor was just busy and forgot to reply to your mail.
Another point to keep in mind: Do you need all the slides? If there is something that interests you particularly and you know the slide number, I'd suggest asking for that slide rather than the full set. I've seen situations where whole lectures were simply copied by another lecturer, so some lecturers might be cautious about sending out their slides.
Upvotes: 2 <issue_comment>username_2: It is always appropriate to ask. But some people are unwilling to share their slides for a variety of reasons (none of which I think are particularly good, for basically the reasons you mention in your quote).
So you may or may not get the slides, but it is certainly ok to request them!
Upvotes: 5 [selected_answer]<issue_comment>username_3: The answer depends on what you mean by "OK". If by that you mean "not inappropriate", then sure. It isn't some kind of taboo or insult. But if you mean when is asking even a bit too much, well if they didn't respond apparently it was.
Every email request that requires the other person to perform some new task runs a high risk of crossing that line and not getting a response. I.e. requests where they have to go look something up, or dig out some document to attach, etc. Sometimes people mean to but then forget, but often they just think "nah" and move to the next email.
This is magnified greatly when the person receiving the request is in a position that gets many such requests every day. And moreso in a academic jobs where they are always fighting to get out from their todo list to make time for "real work" like writing and doing research.
Finally, if the person requesting the information has absolutely no relationship with the other person, the odds go down another order of magnitude. In such cases the person is basically a saint if they consistently respond to everyone. Yes I know such people too.
Conversely, I'd note that when you put someone on the spot in person and ask if they will do you some favor such as this, it makes them uncomfortable to refuse directly even if they want to. So they may give you an agreement they didn't mean. Indeed in some cultures it is supposedly impolite to decline a request directly, which is often seen as too blunt. So they will agree (or seem to agree) in a face-saving not-really-agreeing way that is hard for outsiders to read as a refusal.
Upvotes: 3 <issue_comment>username_4: Saying "e-mail me later and I'll send you the slides" may be a way of telling "no, I won't give you the slides" without starting a discussion as to why not. Same as saying "we will keep you informed" at the end of an interview to a candidate which didn't pass.
It's not impolite to ask, but unless you're entitled to the slides, there's no guarantee that you will get them.
Upvotes: 2 <issue_comment>username_5: Asking is fine; there's no reason for it not to be.
The top hypothesis that an academic doesn't get back to you about any particular thing must always be, "Too busy; not caught up on the last thousand emails."
Upvotes: 2
|
2019/04/29
| 561
| 2,340
|
<issue_start>username_0: I have been asked by my current employer to attend a conference and present, since this conference is highly focused on my research area. My ex-supervisor and his collaborator will be present at this conference, and one of them will give a lecture. My current employer doesn't know about my history with them, since they never asked. I am afraid and confused about whether to avoid the places they will be, as they have tried to destroy me before and it was very harsh, or to face my fears and face them without speaking. I am also afraid that they will badmouth me to my current employer.
What would be the best thing to do? I am highly interested in attending, but I am afraid of them, and if I refuse to attend, I don't know what excuse I can give my current employer (the project I am working on is a cooperation between different universities, and each university is going to send its students to this conference, which focuses only on my research field).
Some suggestions.
Certainly do not approach either one. If they attempt to talk with you casually, you can immediately excuse yourself by pleading previous business (meeting someone, etc.). If one attends your talk and tries to derail it or asks questions that seem inappropriate, shut them down, perhaps by one of the techniques listed in [this answer](https://academia.stackexchange.com/questions/16319/how-do-you-deal-with-trolling-students/16340#16340).
If you are not seeing a mental health professional, you may want to do so. The right one will be able to help process what has happened and give you tools for dealing with it, and with other situations that might happen in the future.
Upvotes: 3 <issue_comment>username_2: First of all, I’m sorry to hear that you’re going through this.
I would ignore them in the conference. I don’t know if this motivates you, but going to a conference, doing an amazing job and still shining in my field despite their sabotage attempts would be really satisfying to me.
On a professional level, if your job requires conference attendance and you categorically refuse for whatever reason, it will not help your career. Don’t let them sabotage you further!
Upvotes: 4
|
2019/04/30
| 340
| 1,524
|
<issue_start>username_0: I am currently taking a math class and saw that a professor from a different university had some ideas/open questions for final course research projects on his page. I began working on one of these project ideas but I ran into a problem that I couldn't overcome (basically a certain type of equation doesn't seem to exist for the object proposed in his question).
I asked my math professor about it but they weren't sure. Would it be inappropriate to email the professor from the other university and ask whether this is the correct direction to work in for the problem? It would just be a one-time email, not an expectation that he answer all my questions, but I understand how he could take it as such.<issue_comment>username_1: I wouldn't say it's inappropriate (that mostly depends on the professor), but I would not expect an answer. Professors teach classes at their home university and are not obligated to attend to students outside their classes. If you found a mistake, or are contributing to the class in some way, that's another story.
Upvotes: 2 <issue_comment>username_2: I would say go ahead. Just make sure that you ask your question clearly: describe the approaches you tried and what you have stumbled upon. Professors are usually delighted to see the interest in their subject, and moreover, such questions from students help to improve their courses. The professor might not necessarily reply to your email due to lack of time, but you lose nothing by trying.
Upvotes: 3
|
2019/04/30
| 646
| 2,728
|
<issue_start>username_0: I am trying to choose between these different generalist repositories:
1. [Zenodo](https://zenodo.org/)
2. [Open Science Framework](https://osf.io/dashboard)
3. [Figshare](https://figshare.com/) (or Dryad?)
4. [Harvard dataverse](https://dataverse.harvard.edu/)
5. General cloud services
6. ...
Figshare has a nice comparison of them (<https://doi.org/10.6084/m9.figshare.751546.v1>), but I was looking for guidelines I might have missed to help me choose the best one. For instance, many European projects require the use of Zenodo, so maybe it is better to get acquainted with it; however, the Open Science Framework seems to offer a better way to organize projects, though there seems to be no DOI for uploaded files. Are there any guidelines, or anything I should be aware of, before choosing to put my dataset in a generalist repository?
Content I would like to share
-----------------------------
I would like to use a generalist repository because I would like to share all my outputs. These might consist of code and datasets, but also figures, drawings, and sometimes pieces of hardware design (e.g. STEP files for 3D-printing adapters, sensor nets, some PCB designs).<issue_comment>username_1: I haven't used any of these resources myself, but hopefully the following points can help you in your decision:
This kind of repository and the open science trend in general are pretty recent. This is why it's a bit difficult to evaluate and compare the different options, there's not enough hindsight yet. This is also why, in my opinion, the main risk is that the platform you choose would become obsolete or disappear in the future. So to be safe you might want to choose a platform which allows you to export your stuff easily.
Another important point you should consider is the policy of your institution: the outcomes produced as part of your academic job can be subject to IP restrictions. So it might be a good idea to check with your local IP or legal office what kind of license they allow/recommend and whether there is any IP disclosure process in place.
It's also worth mentioning that there are a few more generic options for sharing outputs: for instance GitHub, Google Drive, and of course the good old personal webpage (most universities let you create one under their website).
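Whatever platform you pick, here is a minimal Python sketch of a platform-independent safeguard (file names are hypothetical): keep a SHA-256 manifest next to your dataset so that, after any future export or migration, you can verify that every file survived intact.

```python
import hashlib
from pathlib import Path

def checksum_manifest(root):
    """Return {relative_path: sha256_hexdigest} for every file under root."""
    manifest = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root))] = digest
    return manifest

# Typical use: write the manifest alongside the dataset before uploading,
# then recompute it on the exported copy later and compare the two dicts.
```

Comparing the stored and recomputed manifests tells you immediately whether an export from a dying platform is complete and uncorrupted.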
Upvotes: 1 <issue_comment>username_2: You can find repositories that assign DOIs to datasets in the [Repository Finder Tool](https://repositoryfinder.datacite.org/).
You can filter for those that actually meet the Enabling FAIR Data criteria.
edit: Full disclosure, I work at DataCite as a developer, and participated in the development of the repository finder.
Upvotes: 0
|
2019/04/30
| 362
| 1,583
|
<issue_start>username_0: I am involved in an EU H2020 project in which we are developing educational and support material to help companies implement energy-efficiency measures. Factsheets will be made available as PowerPoint or Word documents on a platform accessible to companies that have signed up. The documents will be made available under a Creative Commons license, so the companies will be able to adapt them (PowerPoint, Word) to their specific needs if necessary (e.g. add their logo, simplify or reformulate some parts).
I would like to know how best to proceed in order to use images (graphs, diagrams, or other) from scientific publications in these factsheets. Can we simply use them in our documents by citing the reference, without asking permission? Or should we ask permission from the author, the university, or the publisher (e.g. Elsevier)?<issue_comment>username_1: I suggest, given how you want to use those materials, that you get permission; this will avoid any possible difficulties later.
The image owners may well say "that's fine, just add a reference", etc., but if they say "no" you can find an alternative without further issues.
Upvotes: -1 <issue_comment>username_2: Depends on who's holding the copyright. Whoever has it is the one who gives permission.
Check the first page, in particular the imprint at the top. If it says (c) the authors, then the copyright is held by the authors; if it says (c) the publisher then you have to ask the publisher.
Upvotes: 1 [selected_answer]
|
2019/04/30
| 1,302
| 5,760
|
<issue_start>username_0: Suppose, in my research (mathematical) I have a critical step that I need to solve, and, due to various reasons I cannot find decent answer to that step by myself or by contacting friends or colleagues, so that I decide to post a similar but simpler problem to the relevant StackExchange (Math). Now, if someone comes up with a good answer that I can use to further my research, will it be enough to acknowledge the contribution of the author of the answer and citing that particular thread of StackExchange, or do I need to include that person as a coauthor of the paper that I might prepare based on that answer?
I emphasize here that the paper is not supposed to be all about that particular result; there are many other things, and that particular answer is useful, albeit critically, at an intermediate stage of proving some of the results in the paper. I personally feel that citing and acknowledging the contribution is respectful enough. However, I want to know what the standard norm is.<issue_comment>username_1: I have been in this precise situation. I wrote the manuscript to a reasonably finished state and then contacted the helpful SE person, asking if they wanted to be a coauthor and whether the attribution was sufficient. The attribution consisted of citing the MO answer and additionally naming them in the acknowledgements.
This approach has the benefit of maintaining or building a good relationship with the helpful person.
Remember that the typical norms of co-authorship include intellectual contribution, as well as writing or critical review of the manuscript, and in any case accepting the final paper. If you see their contribution as relatively minor, you might want to offer them the possibility to extend or otherwise improve the paper if they are interested in co-authorship.
If the contribution does not merit co-authorship, you can still cite it or acknowledge the (possibly anonymous) contributor on SE, if that is warranted. It is polite, but not required, to ask the person how they should be referred to, in any case.
Upvotes: 6 <issue_comment>username_2: I'm confused. You seem to be saying, actually, that your paper wouldn't exist if it wasn't for that contribution: "critical" twice.
If that is the case, then certainly they are "worthy" of co-authorship. If it is "critical" then you couldn't have done it without them.
If you suggest it to them they might agree or not. If they don't then citation would be appropriate.
---
Let me turn it around, just as a thought experiment. This other person produced a result to a question you posed. It was their work. Suppose they decide to publish that result. Suppose, indeed, that they think about it for a bit and decide to publish a generalization of that result. Are they justified in doing so? Are they justified in doing so as sole author? Worth thinking about, I suppose. I doubt that they should publish as sole author without attribution, of course, but their situation seems to be a direct reflection of yours.
Upvotes: 2 <issue_comment>username_3: It is always a good idea to be generous with coauthorship. Consider the risk/reward:
Offering coauthorship has many benefits: good relations with the recipient, reputation of generosity, increased network of coauthors. The only drawback is that if the offer is accepted, then in some situations some people might have an incorrectly diminished view of your contributions to the joint paper. This is rare and minor -- people know that authors sometimes have differing contributions.
Failing to offer coauthorship has many risks: alienating the recipient, reputation of stinginess (which can lead to fewer future collaborators), accusations of academic misconduct.
Upvotes: 0 <issue_comment>username_4: There is the question of what you are ethically obligated to do, and what I would advise you to do. (I've had a somewhat similar situation.)
I do not think you are ethically obligated to offer coauthorship. The situation is similar to if the person posted a short note to the arxiv. They can expect to be cited, and they can expect that your paper will not take credit for their work, but they can't claim coauthorship of the next work to use the result. Once posted, answers should be free for all to cite, not attached to strings.
Now, if you don't coauthor, you MUST cite the result properly (stackexchange sites often have a link below answers for this) and attribute credit for the result in your paper. (I would include a full proof for completeness, though.) If you find that your paper still stands on its own when this result is reduced to citation of an outside source, this may be a sign that coauthorship would not be very necessary. If you find the paper is weak because all the hard parts have been reduced to citations, you still aren't ethically obliged to coauthor, you just have a weak paper.
If you do coauthor, then I would suggest still citing the stackexchange post, but it is, in my opinion, correct and ethical to present the work as an original contribution of the research paper, since SE is not archival and the author of the post is an author of your paper. In this case, you may find that the paper becomes stronger and its contribution more impressive, which is a sign that coauthorship is a smart move.
But aside from this, for many reasons, I agree with others that regardless, the best advice is probably to pursue coauthorship first. It is a nicer and more polite approach, it makes your paper stronger, it may help that person receive more recognition for their contributions, they are probably an expert in the area and can improve the whole paper and bring recognition, it may improve your personal network even if they say no, etc.
Upvotes: 4
|
2019/04/30
| 1,812
| 8,140
|
<issue_start>username_0: An undergraduate course that I teach has roughly 50 first-year students, and without trying I noticed several egregious examples of cheating, both on assignments and on a recent midterm exam. Given that the students have only very recently entered university -- they occasionally behave as if they were still in high school -- I'd like to have a discussion with them about academic integrity and cheating.
Has anyone else here had such a conversation with a class of theirs, and would be willing to share some advice? If the above-referenced incidents had been referred to a university committee on the matter, the students would almost certainly fail the course, or perhaps suspended. The department, however, gives the faculty some flexibility in deciding whether to refer particular cases to this committee.<issue_comment>username_1: The most important thing is that you need to convince them that you care about their learning and not about the answer you get to any question. Remind them that you can do any task you set them, so the purpose is for them to do the task and learn, not to produce a "correct" answer. Too many students don't understand that and are a bit surprised when you remind them.
Too few people, students and others, understand what it takes to learn something. Many student practices are counterproductive. To learn something takes a physical change in the brain to move the knowledge from short to long term memory. That change requires reinforcement - practice. It also requires feedback so that you don't "learn" the wrong things. Even athletics at a high level requires practice and feedback. It isn't any different for learning.
One way to try to get the conversation going, is to ask them their opinions about why you set them these tasks. The answers you get will probably include, or even be dominated by, grading. Convince them that you are a teacher, not a grader. That grading is only a necessary by-product of the process, not the goal.
But convince them, also, that you will take whatever means are necessary to reinforce that view, including holding them to a strict honesty standard. And convince them that you do notice when they cheat. It isn't hidden at all. Convince them that you are willing to fail everyone if they don't take the tasks and the rules seriously.
However, if your grading scheme is high-risk then they will still be likely to transgress. Too much grading based on too few measures is high risk. No opportunity to correct errors is high risk. Any scheme that induces panic in the students is high risk. People don't always behave rationally in high risk situations. Find ways to reduce student risk.
Upvotes: 2 <issue_comment>username_2: Please carefully read your school’s policies concerning academic dishonesty before you do anything. Follow those rules carefully so that the students won’t be able to get away with their cheating on appeal. For example, at my school you’re expected to talk directly with the student to give them a chance to defend themself before you decide on a punishment, and you’re required to submit an official report which allows the school to decide on any larger sanctions.
I’d also recommend having a third party (e.g. course coordinator, director of undergrad studies, or TA) present for the conversation. This makes it clear that it’s serious, makes me feel safer, and prevents the student from lying about the meeting.
Upvotes: 3 <issue_comment>username_3: **Talk with the "caught" students individually.** If you only give a talk to a class, the cheaters will rationalize their actions and conclude that you weren't addressing them. Instead, you can tell the cheaters to come to your office (you may want a witness, particularly if there is a gender difference between you and the student), lay out your evidence, and explain that they could be suspended over this. That should get their attention. Then, you can say that you will be addressing academic integrity in class and they should be sure to attend and ask you any questions, as there will be no leniency in future.
**Your university should have an option to report the incident without pressing charges.** This ensures that the students have a record so that they can't have a new "first offense" in every class. When I did this, the students received a formal letter of reprimand from the dean, which scares them but has no practical effect.
**Finally, you should address the class.** I recommend saying only that you are aware that some cheating has occurred and it is being dealt with. Then you can discuss academic integrity in the abstract (might have been good to do this earlier...).
By the way, I hope you are giving some penalty to the cheaters besides a talk. Suspending them from college might be too harsh, but a zero on the assignment is likely appropriate (and, at my institutions, is required by university policy...as others mentioned, you should be sure to double check these policies before proceeding).
Upvotes: 2 <issue_comment>username_4: In my place they have policies concerning cheating ("academic honesty code" or something like that) readily available online and they suggest that we include the reference to them in syllabi, which I find quite a reasonable idea for lower level courses. Usually it is enough to say that you would merely follow the rules to the letter if you catch somebody cheating (and to keep your word at least once) to scare the students enough.
If you want to be on the nice side, try to explain to them that the only people cheaters can really cheat in the long run are themselves. That is not too hard if they are taking a course in a subject that will later become their bread and butter (like ordinary differential equations for engineers), and next to impossible if you teach something like business calculus for future administrators and politicians.
Explaining that learning is not just about memorizing the stuff that
will appear on the next test with the intent to forget it right after is a great idea, but, to be honest, I have never succeeded in such explanations. Maybe I'm doing it from a wrong end, but IMHO, one needs to slow down quite a bit on many undergraduate courses and to stretch them over twice longer time periods doing plenty of recitation sections to convince anyone that learning is what you really care about. Also one should think carefully
about what and how to grade because once the grade is involved, it immediately becomes the primary objective function to maximize, and you can then sing like a nightingale all day and night long about why grades are not the most important things without being listened to by anyone.
One more thing to keep in mind is that cheating is just a form of cooperation and you may try to encourage other, healthier forms of cooperation like study groups, class group work, etc. to reduce it.
If you want to hold a conversation about academic honesty, it makes sense to remind the students that cooperating and helping each other are excellent things in general, only they should happen not on the timed test in the class and not in the form of mindless copying other person's work.
At last, one has to accept that cheating on the courses a person is forced to take pretty much against his will has always taken place and will, probably, always take place. The hardest attendance, academic honesty, etc. enforcers in the former Soviet Union were the teachers of the history of the communist party and scientific communism and we cheated like crazy on both subjects. Our history of the communist party professor was quite a reasonable guy and just told us that we should know what he tells us well enough to pass because if we could not memorize it, we would, probably, have no chance to absorb mathematics either. That was a relatively fair game. I don't know if you really want to be that blatant but it is always worth keeping in mind that in reality you cannot offer any better explanation to some of your students as to why they should take your course, and this fact has some bearing at least on my own attitudes.
Upvotes: 2
|
2019/04/30
| 2,020
| 8,439
|
<issue_start>username_0: I started my PhD 7 months ago, and as I generate more and more data and do more and more analysis, my folder structure is getting out of hand. I wanted to ask for best practices and opinions on how to organize my files so as not to lose track of everything and to quickly find what I need. I am a geoscientist and have lots of analytical data as well as programming scripts. Along with that I also have `README.md` files for some analyses, and snippets of MS Word or plain text where I write something down to remember for a possible paper.
I've made it a habit not to edit raw data at all and to backup regularly for obvious reasons.
Right now my simplified general structure looks a bit like this:
```
├── data
│ ├── analysis
│ │ ├── isotope_temperature_reconstruction
│ │ │ ├── report.ipynb
│ │ │ └── script523.py
│ │ └── light_micro_growth-line-analysis
│ │ ├── img1.svg
│ │ └── regression.py
│ └── raw
│ ├── isotopes
│ │ ├── run2019_02_19
│ │ └── run2019_02_24
│ ├── light_microscope
│ │ ├── sample_xy123123
│ │ └── sample_xy123124
│ └── sem
│ ├── sample_xy123123
│ └── sample_xy123124
└── documents
└── paper1
```
I know the system my files are organized in doesn't really matter as long as it is consistent. However, I am facing some struggles:
* The data usually varies in "quality" and "ripeness". I have data that resulted from:
+ Some trivial test -> won't be used ever again
+ Is for calibration -> Doesn't belong to any project, but matters in many cases
+ Is directly and only needed for a certain publication
My problems with this are:
* I find myself often linking and copy-pasting my data all over the place, because it is not where it's currently needed. As a consequence I also make edits only in certain places and not everywhere, and lose track of what's the most recent file. I also lose track of which files are handled programmatically and where I edited something "by hand".
* I have a lot of duplicate Python/R/whatever scripts that I copy-paste wherever needed. I think this is the easier part to resolve by modularizing the code and putting it into version-controlled, system-wide libraries.
* I sometimes have snippets of word or plain text that contain relevant research insights, but are scattered all over the place, because they are not directly related to a paper / data.
So I am looking for suggestions to address these problems, as well as suggestions for general file and data management and general organization on a researcher's main desktop machine.
The only problem that I feel is adequately solved is my literature management, because I just use Zotero and let it organize all my papers in a coherent folder structure. (It also makes it easy to search via tags, which would be super cool for data files.)<issue_comment>username_1: In general I would distinguish between two approaches, or a combination of both, before ending up with a very time-consuming categorization of all your files:
* **top-down**
This means you use intelligent software to find the files you are looking for again and don't worry too much about where they are stored — for example, a reference manager (Mendeley, Zotero, ...) or a desktop search engine (Copernic). In the desktop search engine, or with Windows search (indexing turned on), you can see which file was changed most recently. Most of your ideas and sketches you save in OneNote or something else that can link to files and web links.
* **bottom-up**
This means you come up with a distinct structure yourself, like you did with a rough categorization (raw data, devices, projects). But don't over-complicate this: when most of the files can be identified by their file type/extension, it may be more convenient to order them by project and keep all file types within such subfolders.
Additionally, one trick I use is **tagging files** in their filename like "#phdthesis" or "#interesting" or "#cite" or "#collaboration".
**Sample files** (data, images of samples) always get a date *ProjectnameDayMonthYear* independent of their file type.
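Such filename tags are also easy to query programmatically. As a minimal sketch (the `find_tagged` helper name is made up for illustration; assumes Python 3.9+ for the `list[Path]` annotation):

```python
from pathlib import Path

def find_tagged(root: str, tag: str) -> list[Path]:
    """Return all files under root whose filename contains the given #tag."""
    return sorted(p for p in Path(root).rglob("*")
                  if p.is_file() and tag in p.name)
```

For example, `find_tagged("data", "#phdthesis")` would list every file below `data` that was tagged for the thesis, regardless of which subfolder it ended up in.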
Don't make your system too complex or too time consuming. For instance, most of the PDFs I read and download end up in one folder. No point in categorizing them; this will not work/help over years and decades. Defining content-related filenames takes too much time; most of them I find again via my desktop search engine months or years later if they were memorable with keywords and search operators or self-created #tags.
**For files which the PI or team members must be able to retrace**, you have to agree on a common plan for structuring and saving files that everyone understands and can contribute to. I would suggest either cloud storage or a local wiki here (MoinMoin wiki, for instance).
The most productive system is probably [the one "invented" by <NAME>](https://blog.stephenwolfram.com/2019/02/seeking-the-productive-life-some-details-of-my-personal-infrastructure/), but it probably took even more time to set up than to write his article. You get a lot of ideas from it, though I'm not sure it is future-proof to organize yourself with proprietary software.
A personal open-source, free wiki like MoinMoin or TiddlyWiki is also an alternative that combines bottom-up, top-down, and the file system. In my experience, a wiki is only worth the additional time and effort with two or more team members; otherwise your personal choice and mixture of bottom-up and top-down is more time-efficient. Single researchers in the humanities, experimental research, or programming will end up with very different management systems depending on available time, number of files, workflow, and the necessity to document and retrace everything.
File systems and desktop search engines will never die out, so my general tip is not to rely too much on fancy software like OneNote, Mathematica, or alternatives if you want a future-proof system.
Upvotes: 4 <issue_comment>username_2: One of the more important goals when organizing your scientific data is to make it reproducible. At some point someone will have to look at the data and analysis again and try to understand it. This will often be you, e.g. when writing papers or your thesis, but also other scientists later that continue to work and expand on your projects. If you create a figure and your PI or a reviewer asks how exactly it was created and which datasets it's based on, you should be able to figure that out from your data.
This requirement means that copying and pasting is not necessarily a bad thing. You want to be able to reproduce your analysis with exactly the flawed scripts you used the first time, not the updated version that might fix bugs there or might have introduced new ones. But you also want to be able to update your analyses if you find a problem in one of your common scripts, and check if your results changed due to this.
If your data isn't very large, I'd simply duplicate it in every analysis folder. That way you have both the code and the data in the same place when you have to revisit it. If it is large, you should take care to create an immutable raw data storage that you link to, and make sure those links are never broken by renaming or removing data.
For your scripts, it seems like you are experiencing issues because you haven't centralized your common code. Creating a modular version of your common code like you suggested is certainly a good idea. But I would run this in a way that copies your common code *as it is at that time* to the current analysis folder. So that you can just rerun the analysis with the old code easily, but still have a way to also run it with updated common code, if desired.
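The "copy the common code as it is at that time" idea can be sketched in a few lines of Python. This is only an illustrative sketch — the function name `snapshot_common_code` and the folder names are made up, not part of any established tool (assumes Python 3.8+ for `dirs_exist_ok`):

```python
import shutil
from pathlib import Path

def snapshot_common_code(common_dir: str, analysis_dir: str) -> Path:
    """Copy the shared script library as-is into an analysis folder,
    so the analysis can later be rerun with exactly the code it used."""
    dest = Path(analysis_dir) / "common_code_snapshot"
    shutil.copytree(common_dir, dest, dirs_exist_ok=True)
    return dest
```

Calling this at the start of each analysis freezes the helpers alongside the results; rerunning with updated common code is then a deliberate choice, not an accident.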
If you have problems distinguishing where you have edited stuff by hand, or where you copied stuff, it seems like you're doing this in multiple directions. It's always easier to follow this if your changes only flow in one direction. It's also a good idea to put as much of the data and code as reasonable into a real version control system like git. So you can just use e.g. "git blame" to figure out if you edited some parts of a script manually.
For finding snippets of text, I'd probably just use some kind of full text search. Maybe use a naming convention for this kind of files, so that you can distinguish them from your data and code files, if you want to search in them.
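A naive full-text search over such note files is only a few lines in Python. Again a sketch, not a recommendation of a specific tool — the `search_snippets` name and the `.md` naming convention are assumptions:

```python
from pathlib import Path

def search_snippets(root: str, phrase: str, suffix: str = ".md") -> list[Path]:
    """Case-insensitive full-text search over note files under root."""
    return sorted(
        p for p in Path(root).rglob(f"*{suffix}")
        if phrase.lower() in p.read_text(errors="ignore").lower()
    )
```

Real desktop search engines or `grep -ri` do the same job, but a small script like this is easy to extend, e.g. to restrict the search to a naming convention for research-insight files.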
Upvotes: 3
|
2019/05/01
| 634
| 2,652
|
<issue_start>username_0: I recently completed a mathematics BA and have started a 4-year ROTC commitment in the US military. The military has offered to let me get my PhD in Operations Research (OR) while being on Active Duty (full pay and benefits) starting in August. I had planned on becoming an academic after the military, and my research interests are in Theory of Computation and Logic. Originally my career plan was getting a PhD in Math (logic or theory of computation), CS (theory of computation), or even Philosophy (logic), but getting into a PhD program right now fully paid for is very tempting. I am interested in OR, but I will eventually focus on Logic or Theory of Computation. If I do not pursue an OR graduate degree I will be doing mostly managerial functions for the next four years. I think getting the PhD in OR and doing technical work might be a more efficient use of my 20's than doing a traditional military leadership role. I have noticed some very well-respected professors do not have their PhD in the exact field they are currently doing research in.
To my questions:
Would getting a PhD in operational research immediately preclude me from tenure track position in Math, CS and Philosophy Departments? Even if I have a solid amount of publications?
If so, would already having a PhD hurt me if I applied for admission to a PhD program in Math, CS or philosophy?<issue_comment>username_1: What type of research will you be doing in OR? Unfortunately, you will probably not be performing the type of research required in a CS PhD, for instance. Is there any reason why you applied for a PhD in OR instead of one of the topics you're interested in?
Upvotes: 1 <issue_comment>username_2: >
> Would getting a PhD in operational research immediately preclude me from tenure track position in Math, CS and Philosophy Departments? Even if I have a solid amount of publications?
>
>
>
It would not preclude you from applying, but your success would depend on whether your research profile matches the position as you would be competing with many candidates who might be a better match. The number of publications is important but it's not the most important criterion.
>
> If so, would already having a PhD hurt me if I applied for admission to a PhD program in Math, CS or philosophy?
>
>
>
I can't see any reason why already having a PhD in OR would decrease your chances of getting admitted to another PhD program later. But will you still be motivated for that after getting your first PhD?
An alternative option would be to progressively switch topics after the PhD through one or two postdoc contracts.
Upvotes: 3 [selected_answer]
|
2019/05/01
| 351
| 1,552
|
<issue_start>username_0: In a conference registration form I found a slot to be filled having the header "Justification for travel support". I don't quite understand what they are asking precisely. What is the appropriate way to fill this in?
I remark that they ask for resume and research statement in different sections of the registration form for the purpose of attributing funding.<issue_comment>username_1: Do you need assistance from the conference organizers to pay for your travel to the conference and/or accommodation? If so, give your reasons here. If not, ignore the box.
Upvotes: 4 <issue_comment>username_2: With some conferences/courses, free or subsidised travel (and even lodging) is provided by the conference organisers and/or sponsors for a certain subset of participants. The eligibility criteria vary - and may include students (undergraduate or postgraduate) or participants from listed developing countries. What I quoted are just examples. For specifics, you should refer to the details stated on the conference website or printed brochure and see if you qualify for such assistance/subsidy (and then decide if you wish to avail yourself of it). If they didn't give these details on the website or publicity materials, it is appropriate to write to them for clarification.
Upvotes: 4 [selected_answer]<issue_comment>username_3: It should be something like:
* Presenting a paper at the conference.
* No institutional/employer travel funding.
* No third-party travel funding.
* No (significant) personal wealth.
Upvotes: 0
|
2019/05/01
| 887
| 3,864
|
<issue_start>username_0: I am a Computer Engineering major at a top 15 engineering school. My goal is to get into grad school for Machine Learning. I have set very lofty goals and am aiming for places like MIT, Berkeley, Stanford, CMU etc. I will be applying for safeties too but these top tier colleges remain the goal.
I have a 3.8 GPA in college so far. The issue is that I am messing up a lot in my electrical circuits analysis course. I am in the C to D range in the course. The professor is extremely harsh while grading and covers grad school material in an intro course. I am thinking of withdrawing from the course. I am confident that I can perform well next semester when I take it under different circumstances. Seeing as circuits aren't related to machine learning or computer science, I was wondering how bad a W for this course would look on my transcript, given an otherwise great record.
I am also involved in research involving machine learning and will be doing so till the end of undergrad. It's this one thing which is worrying.<issue_comment>username_1: It is, of course, impossible to say how someone will look at it, but I doubt that very many people would think it odd or unusual. It is pretty easy for students to get in too deep with studies and the other things during undergraduate years. If you drop now and do well later few would think less of you, I predict.
But if you make a habit of it, then things might be different.
Since it is difficult to predict the time requirements for research, you have a ready-made explanation for getting in over your head for a term.
However, a poor grade, such as a D in a major course would be cause for concern. Not necessarily disqualifying but it needs explanation.
Upvotes: 3 <issue_comment>username_2: If you withdraw from a course during an otherwise "normal" semester, most people judging your transcript from the perspective of graduate school admissions will assume that you were not doing well in the course. If you later complete the course with a good grade (A or B), this won't matter much. It also won't matter much if the course content is not particularly important for your Ph.D. field. To be honest, it will not hurt your application terribly if you get one or two bad grades in courses that are not particularly important for your Ph.D. field as long as your overall GPA is not impacted too much.
What would be more troubling to me would be to see a student receiving a poor grade in a course even though they withdrew from the course on an earlier attempt or attempts, withdrawing from the same course multiple times, or withdrawing from multiple courses over your program.
Upvotes: 2 <issue_comment>username_3: I have some personal experience with this matter. Every semester during my undergrad, I enrolled in well beyond a full course load with math and computer science courses outside my major (biology). I was trying to break into bioinformatics from a state school without a bioinformatics program. I put classwork on the backburner to more thoroughly invest in my research and ultimately withdrew from a biochemistry course. I graduated summa cum laude with departmental honors for research and a 3.65 GPA. My goal was not as lofty as yours. I simply wanted to gain entrance to a PhD program in bioinformatics. I am currently a second-year PhD student with a W on my undergraduate transcript. I think that demonstrating consistent commitment to research is of greater value than a pristine GPA and a minimal introduction to graduate-level research. I cannot say whether programs at more prestigious universities value one area of student success more than another or if success across all endeavors is expected. But there are graduate programs that accept students with Ws and a research narrative that overshadows the minor transcript blemish.
Upvotes: 3
|
2019/05/01
| 2,052
| 8,524
|
<issue_start>username_0: During my mechanical engineering PhD, I was not able to go to my field-specific conferences and had to go to diverse conferences instead. Thus, I couldn't interact directly with researchers in my field of research. As an introvert, it is very difficult for me to make connections or network during conferences and events. So, I doubt that I could have done much by going to field-specific conferences either.
On top of that, I don't have anything on my CV that speaks to my leadership qualities. I have done some volunteering work and judged a poster session during my PhD. I have also done some TA work. But apart from that, I have not done any organizing.
I had earlier helped my advisor write a grant and have been a reviewer for a journal. Since writing that grant, I have not received any opportunity to write a grant, as my advisor has pivoted his research interests to another field quite different from mine. So, I don't have much experience in that either.
Excluding my publication record, as a PhD student about to submit my thesis, how much of a disadvantage am I in when applying for a future academic position?<issue_comment>username_1: Leadership qualities matter if you are applying for certain prestigious fellowships, such as <NAME>. But I think outside of that it's more like a job, where people hire you based on whether you can do the work, which in academia means having certain technical skills and evidence of productivity like papers.
However communication skills are very important, as you have to sell yourself by writing good statements of research, cover letters, CVs and interviews. You also need good references as well.
Upvotes: 3 <issue_comment>username_2: People skills can be very important for an academic. We teach, we advise and guide students, we interact with colleagues. In lab sciences we often have to work closely with others. Collaboration, even in such fields as math, is very important today. Moreover, people will tend to ignore your ideas if you don't learn how to present them enthusiastically.
However, that doesn't close you out if you are introverted. Even if you are very introverted. People *skills* are just that: skills. They can be learned and practiced. Many of the people you meet who have good people skills are actually introverted.
I've mentioned how introverts can interact effectively in public in answer to other questions here, [for example](https://academia.stackexchange.com/a/124602/75368).
Treat people skills just like any other skills that you need to learn. Start out slow, if necessary, but practice. Play the role of a confident somewhat extroverted person, even if you aren't.
---
This answer was provided for an earlier version of the question that was quite focused on effective strategies for introverts in academia.
Upvotes: 4 <issue_comment>username_3: By what you say in the comments, I assume you feel that you are an introvert, and are afraid that at 30 years, it is too late to change that. I can alleviate that fear for you.
I used to be an obvious introvert (certainly during school, but in my heart until now) and at some point kind of forgot about it. I'd say that I come across as an extrovert these days; I certainly am in a job position where I talk a *lot* with people, and am perceived as someone people come to to get decisions made for them, as a coach, mentor, etc.
I'll admit freely that I stumbled into that situation... over and over again. A lot of coincidences; some life-changing experiences; some "life phases" here and there; and often being too timid to actively fight my way out of extroversion-inducing situations. At most points in time (including today), I could very well relate to introverted people, and if given the choice, I would dearly like to live a more introverted life.
So, keep your head up and do what you are doing. Yes, being social helps in all walks of life, including presumably academia. No, it's *never* too late to change, in my opinion. Or, to put it the other way round, people who cannot change cannot change irrespective of age. No, I cannot tell you how to do it, as I don't know you, but one thing that certainly did it for me was not to run away from challenges, and (even if it's not politically correct to say that) to throw yourself into cold water once in a while.
Seems like finishing your PhD and looking for jobs is exactly that. Pondering about how much you are at a disadvantage seems mightily distracting to me. Just do what you do best (I assume you have no trouble getting good marks and finishing your thesis...) and good things will come to you - you can get better at being social by practice.
Upvotes: 2 <issue_comment>username_4: I will answer your question as stated at the moment:
>
> how much of a disadvantage am I in when applying for a future academic position?
>
>
>
If (i) you have the other bits of the CV, such as publications, to back up your ability to do research, and (ii) your field is not so small that everyone knows everyone (including PhD students), getting a postdoctoral position won't be an issue just because you skipped conferences. I know very few people that got a postdoc after giving a killer talk at a meeting - no one in fact (life science). It is nice to go to conferences to showcase your work, put yourself out there, and stay updated on the latest research, but in my experience your supervisor's contact list and reference, followed by publications, will be much more important factors at this stage.
With that out of the way, if you want to stay in academia, going to conferences/meetings will be important at the next stage when looking for a group leader position. Then you should have a body of work to show off, and interaction with the people that will hire you will be important - you want to be on their radar. I don't think you will need to go to *all* conferences, so as an introvert, you can choose small focused meetings, preferably with a few participants you already know, to get you started.
Upvotes: 2 <issue_comment>username_5: I find some people are assuming introvert/extrovert is the only binary at play, when I think that's just the X axis, and there's a Y axis of Backstage/Performer that's also relevant.
```
Performer
|
|
Introvert ------------- Extrovert
|
|
Backstage
```
I'm an introvert (I need lots of alone time to recharge, I need to psych myself up for social things), but I love teaching. I like having a role, I like the attention. I also like how oddly temporary it is: the semester or panel is over, and next time it happens, I can try things differently.
My husband is an extrovert (has a few different gaming groups, always helping friends, just in general more social and that never drains him), but he doesn't want to be the center of attention. His job life is focused on helping others excel, and while he's had to learn to be more visible and give presentations -- that's what freaked him out more than creating something permanent.
Some actors of course are known for the performance part, and they want to party every night (extrovert, performer); sometimes the production designer who keeps things on schedule and is great with work collaboration might also need time in slow motion, at home.
I came up with this "theory" when I was trying for an MLS -- lots of people assume librarianship is all about introverted cataloging, so there were leadership classes trying to encourage people to "embrace extroversion" -- and I got in discussions about how that may be too big a shift, but it could be viewed as "roles" with specific skills. People thought I wasn't an introvert because I was outspoken in class -- that was me being "On." I have depression (why the MLS was dropped), and it took me so much energy to have Teaching performance AND Grad School performance. Introverts may just need to save up our "social points", whether for performing or collaborating, and spend them slowly.
(And there are a LOT of reasons for not going to conferences - especially if $/family are issues. It may matter for prestigious colleges, of course. But you may find you LOVE teaching -- lots of shy people can be great at that. Or you may be an OK teacher, and be a great backstage person, doing committee work and delivering the research. There is a need for a LOT of different neurotypes in academia. Just trust your strengths and all that. )
Upvotes: 1
|
2019/05/02
| 1,418
| 5,999
|
<issue_start>username_0: I'm in the third year of my physics PhD, and although I have done very well in courses and passed the written candidacy exam, I have already switched research advisors twice and all three times I couldn't make myself productive in research. I'm planning on doing a Masters now but even that is taking some effort, and at this point I just want to leave.
I truly enjoy learning about physics, taking courses, solving problems, reading (good) papers, and the initial stages of learning the jargon and ideas of a sub-field, but after that when it comes down to business and I have to focus on the real research I'm supposed to be doing, I feel like all joy and curiosity I have towards the subject is sucked out of me.
To me physics is a very personal subject. It's an intellectual detour from work. It's like therapy to me. Give me paper, a pencil, and one good textbook introducing some ideas to play with and I'm like a kid on a playground.
Trying to make it into a career, however, means I'm not free to truly explore and examine topics. Any organic thought process or line of questioning I have is immediately crushed under the weight of the research I'm supposed to be doing and the judgement of my peers/advisor as to whether or not I'm just wasting time.
I went into a PhD solely because I really loved the physics courses I took in engineering, not because I had a career plan. Meanwhile most of my peers and professors treat courses like a nuisance that must be overcome to get credit and focus on the work that they really want to do.
I'm at a bit of a loss of what do for work now. My goal for an ideal living situation would be a modest but livable income at a relatively simple minded job that doesn't follow you home so I could spend some leisure hours studying and playing with ideas. However, it seems like most jobs that use my only assets of engineering and physics knowledge demand too much active attention to allow one put serious effort towards other pursuits simultaneously. Can anyone advise a good career path for this situation?<issue_comment>username_1: **Consider teaching as a career**. You don't have to do research then, but you do have to the understand the material. If you like learning about the material then this would suit you well; you might even be able to convey your love for the topic to future students.
Potential problem: chances are you'll need an advanced degree to be able to teach at university level. With just a Bachelor's degree you can still teach at high school level and below.
**Alternatively, consider publishing/science journalism**. Neither of these fields requires you to do research either, but your output in both improves if you understand the material.
Potential problem: it's possible the physicists you'll be speaking to consider publishers [a scam](https://academia.stackexchange.com/questions/109003/why-are-academics-not-paid-royalties-on-published-research-papers-in-ieee-acm-e), and you by working in one are complicit in the scam. You'll have to decide for yourself if you can put up with that. Science journalism is relatively benign in this sense, but you'll be forced into confronting how something you find so wonderful, so interesting, and so amazing can be considered by others to be boring and pointless.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Ignore the "must" and enjoy doing physics.
The worst that can happen is that you are going to enjoy the next couple of years, and then leave to take a decent position in the industry.
The focus on deliverables can be curiosity-crushing. Purely as a career, academic research is a poor choice: long hours, lots of competition, little pay compared to qualification. The real reward is figuring out how the world works; don't forget that.
So, play with physics, enjoy; most likely nothing of public value comes out. However, if you do figure something out, do take the bother to write it up, maybe it is not that bad?!
An advisor's perspective: I want to add that the above is not the most common advice that doctoral advisors (myself included) give to their students. It is because the advisor does not get to share the fun, but has to deal with any negative consequences. Yet many of the somber professors that surround you did the same in their youth.
Upvotes: 2 <issue_comment>username_3: Physics is a particularly special field in that it is extremely old and there are many very old research niches that still continue. At the same time it can be super broad, with physicists also doing research in very different fields that may be only analogous to physics problems, like network science and neuroscience. My guess is you simply haven't found a topic you like yet. Every area in physics was probably great fun to think about when it was brand new, but by this point research questions in many areas may be down to extremely small increments that require a ton of work to get to in some sense (else they'd have been resolved already). I'd suggest seeking newer areas to work in.
Others have mentioned teaching as a career. However, in some ways teaching is farther from the enjoyment of taking courses than research is. In my opinion anyway. The goal of teaching is not to learn or to explore topics you like (we all push for those aspects as much as we can, but it's the tail trying to wag the dog). The goal is to get ideas that you understand thoroughly through to someone less "smart" than yourself. And to do it with a small fraction of the time you'd like to devote to this challenge. Further, at the college level, the research schools, whose concern is overwhelmingly hiring top researchers, not good teachers (ironically), have the best students. A more realistic job for someone who doesn't do research will be at a lower-ranked school with a mix of scholarship winners but also lots of poorly-prepared/poorly-motivated students who can be far less fun to deal with.
Upvotes: 1
|
2019/05/02
| 863
| 3,753
|
<issue_start>username_0: This is the situation. I am a postdoc at a German university. Unfortunately, my boss is not interested in publishing the research results. He prefers to publish just when it is strictly necessary for project reports and publishes old and unimportant results in most cases.
Obviously, it is a problem for my academic career, and I want to leave this division but for personal reasons can't do it in the near future. Just to mention, I do not work on industrial projects, and I want to publish the results of the projects I work on. It is also worth noting that the results in our area get "old" quite fast, i.e., in a year the community won't be interested in the results as much as it would be now.
The boss also believes that everything that is developed by people working in the division he is the head of is the property of the division = his own ideas (even though I have come up with all of my paper ideas and have executed the research and papers alone) -- and he requires that everything that comes out from the division to the public should be personally checked and approved by him.
Other colleagues seem to be OK with the situation, but I would like to pursue an academic career and get published.
My question is: if in my free time I study some things on my own that are not included in any project I work on, can I publish the results being a single author? Or, e.g., can I publish a review on the topic on my own? I don't have problems with mentioning the university as my affiliation.
Was someone here in this situation? If so, what have you done about it?<issue_comment>username_1: My best guess is that you are well and truly stuck. There may even be contractual considerations that prevent your desired action. Some places (though mostly in industry) claim that anything you produce, even on your "own time" belongs to the institution. You can look to see what binds you legally.
I also don't know your field and won't judge whether a paper without your supervisor would be looked at as strange or normal - especially if he objects. That expectation varies widely by field.
But my advice would be to do whatever you can to find a different, and better, situation. Run, don't walk, to the nearest exit. If this means bowing and scraping to the chief to get a good letter of recommendation, then go along for now.
I don't know how hard it will be to move with few/no publications. But it is usually pretty hard to go without good letters. If you can use him to get a better position you will have a chance for a career.
Upvotes: 2 <issue_comment>username_2: I'm going to answer based on my experiences as an academic in the US which may not cover your legal situation, so please be cautious.
At my university, and I believe at most universities in the US, your publications are your own and not the property of your affiliated university. For example, when you transfer copyright to a journal, you alone can sign off without any input from a university official or lawyer.
If I want to write an article and submit it, I can. The same is true for our post-doctoral researchers. I can write articles alone, or with colleagues at my or another university, or with those in industry, or with those in other countries. If co-authors contribute to the work they must be listed as authors. If they do not, they need not be listed.
So while you may not be able to publish the joint work you are conducting with your advisor/supervisor, usually there is nothing that prevents you from publishing your own ideas separately. The affiliation you list with your manuscript submission is mainly a way for readers to get in touch with you and has nothing to do with any "rights" to the publication.
Upvotes: 1
|
2019/05/02
| 1,461
| 6,401
|
<issue_start>username_0: I am in a country where English is not the native language. I have some proficiency in that language but none of the team members know this since I have always communicated with them in English.
Recently, I have been struggling with certain tasks. During a lunch, my advisor was talking about my progress to his colleagues in their native language, and how I was unable to get some basic facts correct. What he said is technically true but I feel insecure to be exposed like that and humiliated that he is willing to talk about me behind my back.
The idea of responding in their native language crossed my mind, but confrontation is too scary for me.<issue_comment>username_1: Without knowing more about what was actually said, or the tone of it, I would dismiss it as your advisor being concerned about how best to advise you. I think these sorts of conversations are relatively common when a prof, especially an inexperienced one, has a problem helping a student that s/he can't resolve.
Unless you think they were joking about you or implying you weren't adequate to the task, you should probably just forget it. But discussing your troubles with colleagues can be a valid way for the professor to actually find the best way to be of assistance.
There are worse interpretations, of course, but don't assume the worst unless it is really necessary.
I've had students, actually, who missed some essential things in their early education. They can be hard to advise since there isn't time for them to go back and fill in those gaps. It is an important problem.
Upvotes: 5 <issue_comment>username_2: Before trying to determine the best course of action, see if you can identify a goal. It's difficult to see what this would be from outside the situation (as is evident by the other answers and comments) -- that is, hard to tell whether the professor is sincerely seeking guidance, or having a laugh at you with colleagues, or somewhere in between...or something else entirely.
I imagine the goal would be to coax your advisor into giving you better advice or guidance, and perhaps communicating more openly with you about his concerns and his goals for you. If that's the case, I'm not sure that embarrassing him by surprising him in the moment with the information that you can understand him will help.
You would probably do better to cultivate him as an ally. This might mean, for instance, telling him *in private* that you can understand him and overheard some of what he said, and ask him to comment. By doing this in private you show him some respect and give him an opportunity to redirect his approach into something more respectful toward you.
Upvotes: 1 <issue_comment>username_3: Since you always communicate with your colleagues in English, I'm assuming that *"some proficiency"* in their language is simply not enough to carry on daily life in this language. Having experienced that, I'd be very careful with assumptions; just as others have said, the tone and some specifics of how he was discussing it can make a lot of difference.
Responding to people in a language they assume you don't speak is also something to be careful of. In principle, people will find ways of talking behind your back if they mean to. You should prefer to have access to these discussions rather than not, but you should be mature enough to deal with them. That being said, you should occasionally demonstrate that you know a few bits of the language, as pretending to understand absolutely nothing might be seen as a lie. Don't enter a discussion in a foreign language if you won't be able to hold your ground through the whole discussion (which might become a heated one).
Furthermore, if your advisor notices that you have gaps in your education that you need to fill, it is part of his role to communicate this to you, and either provide references you could study on your own or (if these gaps are not that relevant) he should make peace with you having small difficulties here and there.
That being said, consider whether this might actually be a matter of "repertory questions," i.e., I can always pick obscure details about a subject you think you are good at and start asking you questions with no previous warning. For people inexperienced in the field, such a conversation will make you look like you are not knowledgeable in a subject you actually know a lot about. People sometimes do this deliberately, but also unintentionally, if the asker believes the details to be relevant. The point is, talk to your advisor about whether he thinks those matters he was discussing were relevant or not.
Upvotes: 1 <issue_comment>username_4: I agree with username_1 that it'd be best to assume your advisor was simply seeking advice from a colleague about dealing with your gaps in education. If I were you, I'd go talk to him in private in his office, and ask him how you could overcome those gaps and what advice he could give you about where to start. Then his perception of you could be more out in the open, and he could have a chance to make up for any misjudgement by being concretely helpful, while allowing you to demonstrate a willingness to improve. This would set you off on a better advisor/advisee relationship, I think.
It would also have the helpful side effect of reminding him that most people start by understanding a language way, way before they're able to express themselves adequately in it.
Upvotes: 0 <issue_comment>username_5: As a second-generation immigrant whose facility in my family's native language is very limited, I can relate to this frustration. In those circumstances, it is very easy to try and latch onto what little you understand and make assumptions. Can you be sure that they are talking about you, and that they are really as derogatory as you think they are being?
To be honest, **the best solution is to become fluent in the native language** (obviously, this will not happen overnight, but, with practice, it should be possible, assuming you live in the country concerned). Unless you are on a very-short-term contract or only on campus occasionally (e.g.: because the institution is a 2nd/3rd/4th/nth affiliation with very limited duties), you really should make an effort to become fluent in the official language of the country where you are working. It is not right to expect everybody else to speak your language.
Upvotes: 1
|
2019/05/02
| 2,035
| 8,200
|
<issue_start>username_0: As a native English speaker studying in the Netherlands, I often find myself writing (not published) English papers for a Dutch audience, and I worry I'm alienating my superiors with my writing.
I put a sample of text from a letter I wrote for an admissions committee through an array of readability tests, with Flesch-Kincaid Grade Level, SMOG Index, Automated Readability Index, Gunning Fog, and Linsear Write all assigning it "college graduate" level. For comparison, the King James Bible averages around a fifth grade reading level, and New York Times articles typically produce a reading level around the tenth grade on the same tests.
At first glance, this is exactly how it should be. A university student submitting university documents should be writing at a university level. And yet, despite the truly incredible level of skill widely demonstrated by the Dutch people in the English language, I can't help feeling that I'm disadvantaging myself through use of constructions and vocabulary that no reasonable non-native English speaker could ever be expected to know.
Is there some merit to this? Rather than optimizing my writing for descriptiveness and articulacy, should I instead aim to be more readable by a foreign audience, at the cost of expressiveness?<issue_comment>username_1: I think what it comes down to is this: Why do you write and who do you write for? If you're a novelist, you have a different target audience than if you're a technical writer. If you're a novelist writing romance novels, you have a different audience than if you're shooting for a Nobel Prize in Literature. Likewise, if you're a technical writer, your style should similarly be different depending on who your audience is. In all of this, I don't think it's about "dumbing down" your writing as you suggest in the title of your question, but it's a careful consideration of what you are trying to do: namely, to *communicate* something to a target audience. In your case, it's likely to communicate *knowledge*, not your intellectual prowess.
So do an assessment: Who do you write for? What do they want to get out of reading what you write? What do you want them to get out of it? And then assess what the answers to these questions mean for *how* you should write.
This may feel sad: If you have a large vocabulary and are fond of complicated grammatical constructions, you may not be able to use those in your technical writing. (Though, of course, you may get to do that when writing to friends or for other outlets!) But using simple(r) language does not make you a worse writer: Rather, if you manage to adjust your writing style to your target audience, then that's exactly what makes you a *good* writer!
Upvotes: 4 <issue_comment>username_2: My advice would be to be more direct. Many business and academic documents benefit from more meaty, direct, gutty writing.
You might even improve your own style, for English readers, if you change your attitude. Read the following advice:
<https://ocw.mit.edu/courses/media-arts-and-sciences/mas-111-introduction-to-doing-research-in-media-arts-and-sciences-spring-2011/readings/MITMAS_111S11_read_ses5.pdf>
In particular see the comments on page 5 about "English teacher beaming at you" and "emphasizing clarity and easy readability". Some of your comments in your question ('dumbing down', 'university students write university level') seem to me to show that you are too in love with showing off. Real good writing is much more about good ideas and good structure and clarity than it is about fanciness.
Upvotes: 5 <issue_comment>username_3: Thou shalt not dumb down thy writing, but don't make it a vain exercise of style
================================================================================
I'm a non-native English speaker, and let me put it straight: I may write in simple English, because my English writing skills are limited, but I don't want to read simple English, because I want to enrich my vocabulary and grammatical constructions.
But whether you write for a native English speaker or not, write clearly, avoiding unnecessary verbosity just to show off your eloquence.
(And, honestly, stop wasting time using those readability tests)
Upvotes: 7 <issue_comment>username_4: As hinted at by other comments and answers, the very wording of the premise suggests attitudinal problems...
Clear communication is always the goal. Scholarly writing is not necessarily supposed to be "purely decorative" (that is, non-functional) ... of course depending on one's assumptions about the larger goals.
If a thing is simple, its explanation should be simple. If your claim is that any competent professional should be able to understand it, the writing should accomplish that.
Upvotes: 4 <issue_comment>username_5: Food for thought, to help you decide for yourself.
Style and substance go hand in hand. In a creative piece, style is more important and calls for the certain flair that vocabulary helps us achieve. On the other hand, in a scientific journal, clarity is key. Where does your writing lie on that spectrum?
The point of communicating is to connect with your audience. Don't look at it as dumbing down your language; look at it as using 'appropriate' language.
The intention behind the writing is pretty important too. For example, if you are writing a literature thesis for university it would be reasonable to expect your reader to have an advanced grasp of the language.
Finally, this doesn't need to be viewed as an either/or; there may be ways to communicate complexity through simplicity.
My personal opinion: complex vocabulary is overrated if the message can be achieved with simplicity, including in creative pieces. As the Bard once said: brevity is the soul of wit.
Upvotes: 1 <issue_comment>username_6: My experience of academic writing, teaching on English for Academic Purposes courses and, lastly, seeing the results of native-speaker academics trying to 'dumb down' their language, suggests that most native speakers have an incredibly poor understanding of what makes a piece of writing difficult for a non-native speaker or reader to understand.
Your peers are used to reading papers in English and, no doubt, have read more succinctly put, more elegant, more descriptively adept pieces than you are going to produce—even though yours will be succinctly put, elegant and descriptively informative.
If you aim for the highest possible standard of academic writing, whatever that may be for your field, then your writing is more likely to coincide with the style, register, tone and range of vocabulary and grammatical constructions that your peers are already familiar with. There is no reason to depart from this. Indeed if you do, you are likely to cause your readers problems. And, of course, you are more likely to distract yourself from the task of expressing ideas in the most natural and effective way you can.
Upvotes: 3 <issue_comment>username_7: **Impress, but don't show-off**
Your audience wants to be impressed, but they don't want their time wasted (this is true whether you're writing to an admissions committee, a technical journal, or a picture book aimed at 6-year-olds). I've written and suffered for (and reviewed and condemned for) trying to sound important by using words where I needn't or expressing ideas in the most grand way possible. Oddly, after reviewing/writing somewhat for academia, it becomes second nature to do so, despite it not being the better choice.
The difference between well-written and showing-off is frankly reflected in a ditty by Dr. Seuss:
>
> It has often been said
>
> there’s so much to be read,
>
> you never can cram
>
> all those words in your head.
>
>
> So the writer who breeds
>
> more words than he needs
>
> is making a chore
>
> for the reader who reads.
>
>
>
> That's why my belief is
>
> the briefer the brief is,
>
> the greater the sigh
>
> of the reader's relief is.
>
>
> And that's why your books
>
> have such power and strength.
>
> You publish with shorth!
>
> (Shorth is better than length.)
>
>
>
Keep it clear. Keep it tight.
Upvotes: 1
|
2019/05/02
| 1,792
| 7,140
|
<issue_start>username_0: Citation systems often specify writing down the location (e.g. London) where a book/paper was published. How important is this, and if it is important, why is writing down the publisher/journal name not enough?
|
2019/05/02
| 724
| 3,137
|
<issue_start>username_0: Sometimes when one receives referee's reports, the referee supplies alternative proofs for the results of the paper. Sometimes the supplied proof is much shorter and more elegant than the original proof.
Is it acceptable to include these proofs in the paper (without formally asking the permission from the reviewer) if the author emphasizes that this proof comes from the anonymous reviewer?
Many times I have done this, and the reviewer has never objected. But I am wondering if one should be more cautious and first formally ask permission and, if the referee permits, then include the referee's proof.
If that matters, the subject is Mathematics.
Edit: Side information (following the comment). Assuming that the supplied proofs are for the auxiliary results of the paper, not the main results.<issue_comment>username_1: I would think that if the referee shared an alternative proof in their comments to the author, their very reason for doing so was so that you could include it in the paper. What else would be the point?
I don't think you need to ask for permission. It would be sort of like asking for permission to take a mint from that little bowl by the door of a restaurant - that's what it's there for. You can ask if you really feel it's important, but it creates a slight amount of extra work for the referee and the editor who has to relay your communications.
If the alternative proof is such a major contribution that you think the referee ought to become a co-author, then of course that is a separate discussion.
Upvotes: 2 <issue_comment>username_2: You can definitely include the superior proof, with attribution to the anonymous referee. Referee comments are made for the purpose of improving the paper, and revisions based on referee comments are expected.
Giving you a better proof of a theorem is above and beyond the call of duty, so I can understand why you might want to give a greater acknowledgement in that case. If you decide you would like to give a named acknowledgement for the referee (or even make them a co-author) then write back to the editor and propose that you would like to include a named acknowledgement for this part, and ask if the referee is willing to identify himself/herself so that you can give an acknowledgement by name.
Upvotes: 2 <issue_comment>username_3: Since the result in question is a side result, not a main one, I don't think you're required to ask permission, but I would suggest that you do so anyway, as a matter of courtesy. And the paper should say explicitly what the referee contributed --- a simplified (or improved or ...) proof of such-and-such result.
Looking at it from the referee's point of view, if an author asked me for permission, I'd give it. If an author used the proof from my report without asking for permission, I wouldn't worry about it. If the paper didn't say that this proof came from a referee, I'd be unhappy, not because I lose credit (an acknowledgement for an anonymous referee doesn't gain me any credit anyway) but because the author falsely makes it look as if (s)he invented that proof.
Upvotes: 1
|
2019/05/03
| 2,337
| 9,402
|
<issue_start>username_0: Last year, I finished and defended my doctoral thesis in a STEM field, and have continued working as a researcher in academia since. I am proud of my doctoral work, yet I feel somewhat uneasy about flaunting my newly earned title and refrain from doing so in general.
All senior people in my field have doctoral degrees, so it doesn't really signify anything. Besides, often when people use their title it seems like they do it to show they are an authority of sorts (Dr. Phil and Dr. Oz, for instance). In my field of work that doesn't get you very far; if you're full of baloney people will realize in minutes.
I'm also hesitant about signing off e-mails as a Dr.; there's usually some back and forth with suppliers we work with, for instance. We really depend on their technical experience and I wouldn't want to come across as pretentious or elitist.
My basic question is this: **in your experience, what are good circumstances to use one's doctoral title?**<issue_comment>username_1: This may vary by country. I'm in the US, and use mine rather rarely. As you say, senior researchers all have PhDs (or MS + many years' experience), so it's hardly something significant in such contexts. I also don't use it in personal correspondence (e.g., bank paperwork), since that might lead to confusion about being an MD ("real doctor"). So what's left?
* CV and other biographical documents
* e-mail signature
* formal presentations and documents (e.g., grant proposals)
* anything related to undergraduate teaching
Note, professors might use "Prof." rather than "Dr." for all of these, particularly in contexts where it is obvious that professors are a subset of doctors.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Your question appears to indicate a general uneasiness with the use of degree titles, *even in contexts where expert knowledge in that field is relevant*. There may certainly be some contexts where use of the academic title of "Dr" constitutes "flaunting" your degree, and is pretentious. (If you're on a plane and someone has a heart-attack, and the flight attendants yell, "Is anyone here a doctor?" then I wouldn't recommend putting your hand up.) However, in academic and professional contexts where your skills and training are directly relevant to your authority in that field, it is not unreasonable to use the title that signifies your training. Few would regard this as "flaunting" your degree, and for those that do, it is likely to be based on a more general objection to titles *in general*.
It is worth noting that in an academic context, the title of "Dr" is not really that high an attainment, relative to other staff. (Titles of "Prof", etc., are generally more impressive.) As you point out, most academics have a doctoral degree, and that is gradually becoming the minimum expected training for an academic. Thus, you are totally correct when you say that this doesn't get you very far --- it is a baseline level for the vast majority of entry-level academics. If you use your title then that is fine, and if you don't use it, people will probably play the odds and assume that you probably have your doctorate.
Personally, I think **it is legitimate to use your title in any academic or professional context** (i.e., papers, grant applications, correspondence, email signatures, etc.). In such contexts, your education in your field is potentially relevant, and it is unlikely that anyone would hold your use of your academic title against you. Moreover, people do not expect you to have to change your email signature to remove titles in contexts that are less formal. When you have back-and-forth email conversations with people, your signature block will only appear in the first email, and after this you can use informal sign-offs, so it is unlikely you are going to look pretentious.
>
> All senior people in my field have doctoral degrees, so it doesn't really signify anything.
>
>
>
Kinda. But then, by implication, the *absence* of a doctoral degree would be unusual, and would signify something potentially significant. Use of your title of "Dr" indicates that you have the academic training that is standard for that field. Possession of a doctoral degree might also be required for some tasks in your field (e.g., supervision of doctoral candidates) and so it is legitimate to signify that you have this degree.
>
> ...often when people use their title it seems like they do it to show they are an authority of sorts...
>
>
>
And that is illegitimate *how*? Possession of a doctoral degree in a field (or a medical degree for MDs) is a legitimate indicator of expert training in that field, and therefore valid information suggesting that one is indeed an authority in that field. Having a doctoral degree in your subject puts you at a level of knowledge that is *far higher than the average person* and so you are indeed an "authority of sorts". You needn't shy away from the fact that you are highly trained in your field, and neither does Dr Phil.
Upvotes: 2 <issue_comment>username_3: Use "doctor" only in academia, only in formal circumstance, and then prefer the degree, *e.g.* Ph.D. In real life, "doctor" is a person with medical credentials.
I answer my phone, "<NAME>" in case it's a student on the other end. My email sig says, "Best regards, Bob" and then has "username_3, Ph.D." on another line.
Business cards, professional stationery, etc. should always use the degree, not the honorific. If you can afford engraved personal calling cards, *then* you can use something like "Dr. username_3"
And by the way, congrats!
Upvotes: -1 <issue_comment>username_4: UK perspective
==============
You should **always** use the doctoral title, both in professional and social contexts, **unless** the degree is an honorary doctorate.
Even if you do not feel strongly about the importance of the doctoral title, your failure to use it encourages society at large to be less respectful of academics, which is very unhelpful, especially for female and/or ethnic-minority academics (there is published research on the fact that, even at academic conferences, women are less likely to be addressed by their proper title than men), and especially when it comes to getting society at large to take the findings of academic research seriously.
See also <https://www.debretts.com/expertise/forms-of-address/professions/> (scroll to the bottom for a table showing that you address a person holding the doctorate as "Dr \_\_\_" in both formal and social contexts)
Upvotes: 0 <issue_comment>username_5: I think this varies by country. In the US there seems to be some unwritten rule that it is only used in a professional context(?). In Germany, it is much more restrictive and regulated. In the UK, it can be used in both professional and social contexts, but in practice no one really uses it. I am about to be conferred my PhD from the UK, and I almost never had to call anyone by their title - even Profs ask to be addressed by name without the title. In Asia, it's a lot more varied, and can depend on the hierarchical context. For instance, if my boss/employer is a PhD holder, I would be more likely to address him as Dr. If the person is my colleague (say I'm an RA talking to a post-doc), I might be OK addressing them by name.
Upvotes: 3 <issue_comment>username_6: A lot of answers about US, which seems the focus of the question. So...
Germany
=======
Except for those who got their PhD in the US, the actual German title *is* "Dr.", in most cases something like "Dr. rer. nat.", "Dr.-Ing.", or "Dr. med.".
A professorship includes the doctoral title in 99% of cases (though it need not), so the formal writing for a professor is "Prof. Dr.". And yes, people use those in email signatures all the time. You can even put a "Dr." in your passport, as there is a thin line between a title ("Dr.") and a job description ("Prof.").
Basically, as soon as you have your doctoral certificate, the "Dr." is part of your name here. You *can* omit it and many people do, because they don't like to rub it in others' faces. But, basically, you can demand to be addressed as a doctor, if you want to.
Upvotes: 3 <issue_comment>username_7: Nordic countries
================
I have never used my title as a title and I have only used another's title as a title in a playful way with a close friend. Titles might be appropriate when introducing a speaker or an expert, for example in the context of news.
Experience from Finland and a bit less in Norway and Denmark, but not from Iceland. Interpolating to Sweden is fairly safe.
Upvotes: 1 <issue_comment>username_8: Never. Never, ever.
Seriously, there is absolutely no need to use the title, ever, under any circumstance. I have never come across a situation where it might have been advantageous to use the title. Worse, if people are in need of a *medical* doctor you may find yourself explaining you are "actually not *that* kind of doctor" - which for the general populace means, not a *real* one.
Sure, for some appointments a PhD certificate may be required. A bored HR flunky will take a photocopy and file it somewhere.
(As for letters trailing the name, this seems to be a peculiar obsession of the English-speaking world in which you may indulge if you must. It always reminds me of Animal Crackers In My Soup for some reason.)
Upvotes: -1
|
2019/05/03
| 1,380
| 6,101
|
<issue_start>username_0: I have had poor supervision from both my supervisors, but one in particular. Overall, neither provided me much intellectual support while I was doing my thesis in psychology. Early on, they were a little helpful when I was recruiting for participants - they attended my ethics panel meeting and made edits to my ethics application. I hardly ever saw them or met with them to discuss ideas (like once a year). The analysis of my thesis was a sole endeavour. I received minor feedback with respect to paragraph structuring, spelling errors and grammar in the write up of my thesis. No substantial feedback that helped me with my analysis, theoretical arguments or conclusions.
Additionally, I have felt unsupported from a morale point of view. My main supervisor's communication towards me has been disrespectful and belittling. For example, she has embarrassed me in front of colleagues by implying that I overestimate my capacities on more than one occasion.
Now that I have finished my thesis and have passed examination, both my supervisors are keen for me to publish and have asked to catch up to discuss manuscripts.
I'm of two minds now. I don't want to give them intellectual credit for the work I did on my own. Yet, I also don't want to break down the decent relationship I have with my secondary supervisor. I have relied on this secondary supervisor to provide me with a good reference that ultimately won me a prestigious position at a university. So I feel a sense of obligation towards her. The two supervisors work closely together so I couldn't possibly imagine having one included as a co-author and not the other - it would be extremely awkward and not really accepted.
The other thing that's playing on my mind is my inexperience with publishing. I haven't published a paper in years and feel trepidation about venturing out on my own. The reports I received from examiners were very positive and both said that my thesis was well written and publishable. But I still feel unsure about how to proceed and would benefit from practical guidance.
Any advice on how I should proceed would be greatly appreciated.
Edit: To clarify, the supervisors have requested we work together on translating thesis chapters into publications. Not new research.<issue_comment>username_1: Based on your account, I am surmising that you have already completed your PhD thesis, and obtained an academic position independent of your former supervisors (well done!). That being the case, you are under no obligation (whether legal, pragmatic, or ethical) whatsoever to involve your former supervisors in publication plans for work to which they did not make an author-level contribution. If you feel that (and, now that you are a Doctor of Philosophy, you should have some confidence in your academic judgement) you will advance scholarship more effectively by publishing alone or with different collaborators, you should have no qualms about doing so.
By the way, in my humanities field (in which sole-authored publications are the norm), it is perfectly acceptable, even encouraged, to publish without your PhD supervisor **while still a student**. I have two excellent supervisors, but it has never occurred to any of us to pursue a joint publication. In fact, funnily enough, I have recently completed a sole-authored book-chapter for a collected volume in which my principal supervisor just happens to also be writing his own sole-authored book-chapter (on a different topic, although relevant enough that I actually cite a few ideas from his chapter in mine).
Upvotes: 3 <issue_comment>username_2: To give better advice, you should provide more information regarding your Ph.D. journey. For example, who defined your dissertation topic and whether you're supported financially during your studies? I try my best to cover all cases as best as I could.
1. First, if you've previously (i.e., before your final defense) talked to your advisors about publishing parts of your dissertation, one might consider it bad faith if you do otherwise. One of the professors in my department still complains about a student who didn't deliver on the promise of writing the final paper of her dissertation after about 5 years.
2. If you want/need to maintain a good relationship with your advisors (i.e, work on joint projects, co-supervise students, etc.), what you intend to do is usually a deal-breaker.
3. I suggest that you check your alma mater's intellectual property regulations. Some institutions prohibit you from publishing work done with their money and equipment under another institution's affiliation (as you're now a professor someplace other than your alma mater, you might intend to put your new affiliation on the paper).
4. Some institutions do not include papers directly based on your Ph.D. dissertation in your tenure request evaluation. If that's the case in your current institution, what you intend to do would benefit you only in terms of reputation in your field.
5. If you're not experienced enough to publish your paper in a good-enough journal, your advisors might be able to help you with that. Their help might come in handy, as one usually doesn't have any other person to read the paper critically and help with the revisions. Also, if they are prominent scholars, their names might help as well.
6. A simple compromise would be to write only some of the papers with your advisors. If your experience with the first paper is positive, you might re-evaluate the whole situation differently.
If you're leaning towards writing alone, I think honesty is the best policy. Talking to people about their mistakes, in general, would help them refrain from making them again. If you're unsure about how to talk to them, you might consult some senior faculty. Personally, I would talk to the second advisor first. Talk openly, discuss the situation, and express that you're unsure whether their contributions are enough to secure them a place on the author list. Hear their voice and reasoning and then make your final decision. Good luck.
Upvotes: 0
|
2019/05/03
| 2,940
| 11,617
|
<issue_start>username_0: If one claimed that a particular scholar was "above average" or "noted" in their field, is there any good metric by which to support or deny such a claim?
Is there a generally accepted way to indicate that a particular professor or scholar is outstanding, or above average, in their field? I understand there are certain indicators, such as chairs, endowments, prizes, etc. But these don't really seem to help to compare one scholar to another, except in a sort of gross, simple count way (i.e. one professor has had more chairs than another)
Is it theoretically possible to create a "ranking" of professors in their fields, by some metric? Could their [h-index](https://en.wikipedia.org/wiki/H-index) serve as such a metric?<issue_comment>username_1: >
> If one claimed that a particular scholar was "above average" or "noted" in their field, is there any good metric by which to support or deny such a claim?
>
>
>
No. As a rule of thumb, this isn't the kind of thing that you can measure with a metric. <NAME> was the king of rock and roll. Why? Is it because he pumped out more albums than the others? Because he sold more? Because journalists wrote more about his albums than the others'? No. It was because he was the king and few people contested that. It's the same in academia. Either you can say that someone is "noted" in the field and be reasonably confident that you won't be contested when saying that, or you can't. If you can't, then you should avoid it, on pain of looking pretentious or like a toady.
>
> Is there a generally accepted way to indicate that a particular professor or scholar is outstanding, or above average, in their field? I understand there are certain indicators, such as chairs, endowments, prizes, etc. But these don't really seem to help to compare one scholar to another, except in a sort of gross, simple count way (i.e. one professor has had more chairs than another)
>
>
>
In general, you can't compare people. This isn't a video game, people don't have a numeric level associated to their academic ability, someone with a 12 being better than someone with a 5. It doesn't work like that. There is a multitude of factors, most often not measurable or not comparable. Trying to make a sum out of these and comparing the result for two different people will only lead to crap. Ask any hiring committee if determining who is the best candidate for a job is easy, let alone determining who is the best researcher.
>
> Is it theoretically possible to create a "ranking" of professors in their fields, by some metric? Could their h-index serve as such a metric?
>
>
>
God no. Of all the metrics, you've probably picked one of the worst ones. If I write two dozen pieces of trash that all cite one another and publish them in a vanity press, I will have a great h-index. Will I be a good researcher? No. On the other hand, if I write a single article in my whole life solving the Riemann hypothesis, I would probably become one of the most famous mathematicians in the world overnight, but my h-index will be crap.
Upvotes: 5 <issue_comment>username_2: >
> If one claimed that a particular scholar was "above average" or
> "noted" in their field, is there any good metric by which to support
> or deny such a claim?
>
>
>
The only "generally approved" quantitative metric is the h-index. The h-index is OK for your task, as it allows you to define above or below average. As a matter of fact, this is the way some national education systems stamp their professors as good enough for tenure. It is also agreed that it is not "good enough" - famously, <NAME>, 2013 Nobel laureate in Physics, would fail miserably in a ranking based on the h-index alone, as he published very few papers, although with a huge citation count. Also, the h-index is a measure of lifetime achievements and thus needs to be corrected for academic age. Which brings us to the next point.
>
> Is there a generally accepted way to indicate that a particular
> professor or scholar is outstanding, or above average, in their field?
> I understand there are certain indicators, such as chairs, endowments,
> prizes, etc. But these don't really seem to help to compare one
> scholar to another, except in a sort of gross, simple count way (i.e.
> one professor has had more chairs than another)
>
>
>
Other, mostly qualitative, metrics are regularly used, consciously or not, in academics' minds, although no official ranking exists. I will mention a few, the ordering only reflecting the stage in an academic career:
1. Institution where the PhD was obtained
2. PhD supervisor
3. National prizes
4. National grants
5. Number of PhD students supervised
6. Chairs at institutions or conferences
7. International prizes
8. Academic success of PhD students mentored
9. More I could not think about now :)
>
> Is it theoretically possible to create a "ranking" of professors in
> their fields, by some metric?
>
>
>
Of course it is; there is an entire field about it, called scientometrics. You would have to 1) fix the h-index's known limitations and 2) combine it with the variables above to come up with a more comprehensive algorithm that would rank any researcher in any field. The reasons why this has not been done before are twofold. First, it is not at all easy to define objectively how much weight every metric listed here should carry in the ranking algorithm. Second, and most importantly, academics rank people every day - for jobs, promotions, accepting papers or conference contributions, prizes, etc. However, they prefer ranking algorithms that suit their individual minds, rather than adopting a common framework.
>
> Could their h-index serve as such a metric?
>
>
>
As described above, h-index has many limitations that make it impractical for most purposes. But an entire field of research exists around it - [Scientometrics](https://en.wikipedia.org/wiki/Scientometrics) - so rest assured there will be developments.
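For readers unfamiliar with the definition behind these critiques: the h-index is the largest h such that a researcher has at least h papers with at least h citations each, and it is easy to compute from a list of per-paper citation counts. A minimal Python sketch (the citation counts below are made up for illustration):

```python
def h_index(citations):
    """Return the h-index: the largest h such that the researcher
    has at least h papers with at least h citations each."""
    # Sort citation counts in descending order
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# A researcher with several moderately cited papers...
print(h_index([10, 8, 5, 4, 3]))  # 4
# ...versus one with a single blockbuster paper
print(h_index([10000]))           # 1
```

Note how the single-blockbuster profile (cf. the Higgs example above) scores a 1 no matter how heavily that one paper is cited, which is exactly the limitation discussed.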
Upvotes: 1 <issue_comment>username_3: >
> Is there a generally accepted way to indicate that a particular professor or scholar is outstanding, or above average, in their field?
>
>
>
The generally accepted method for assessing a particular scholar's merit is to familiarize oneself with their work. Such an assessment requires a solid basis of expert knowledge.
>
> I understand there are certain indicators, such as chairs, endowments, prizes, etc. But these don't really seem to help to compare one scholar to another, except in a sort of gross, simple count way.
>
>
>
This applies to any "metric", although some are worse than others. Any sound assessment would have to be *qualitative* and require some *substantive* engagement with the scholar's work. Therefore, any comparison, to the extent that it would be useful at all, could only point out qualitative differences that don't lend themselves to a ranking, "except in a sort of gross" way.
Upvotes: 3 <issue_comment>username_4: **Yes, there is one and only one standard method** that is universally employed by reputable academic institutions worldwide. This is how you evaluate a researcher:
1. **Read their papers.**
2. **Attend one of their talks.**
3. **Ask the opinion of other experts in the field.**
This is how hiring committees and promotion committees do their job. There are no shortcuts. Parts 1-2 require that you have some relevant expertise; if not then you must rely entirely on #3.
Every academic is regularly asked to give an expert opinion through reference letters, which often are expected to include some sort of ranking (e.g. *"Assistant Prof. X should be promoted because she is clearly as talented/better than Prof. Y who was recently promoted at Prestigious University Z"*). How does one justify this kind of claim in the reference letter? You guessed it:
1. Read their papers.
2. Attend their talks.
Upvotes: 6 <issue_comment>username_5: **All metrics** that used (e.g. number of first/senior authorships, sum of impact factors, percentile ranks of impact factors, citations, H-index, grants and other funding etc) have all their **advantages and many more disadvantages**. Never the less they are used in hiring processes in one or the other way because otherwise it is not possible to assess several hundred candidates that apply for a faculty position. Which of these factors are important in a certain sub-field is very different. Only the ones scoring top in these metrics will make it to the interview where then other factors might count as well.
For someone who is not familiar with a certain field, the **easiest** (but still not always correct) **way** to see how good a professor might be is the **name of the university**. E.g. a professor at Cambridge will most likely have achieved a lot in his life. Someone at a no-name place will not have made an impact that impressed other people in the same field, and if such a person does make a big impact one day, then he will most likely get offers to move to a place with a better name.
Upvotes: 2 <issue_comment>username_6: If you want to rank two professors against each other, you might be tempted to use the h-index. Don't. As many of the other answers point out, it's a severely flawed metric, and it doesn't really tell you a lot.
However, if you want to figure out whether a given professor can reasonably be described as "noted" or "outstanding", then that is a quite different question. And here, yes indeed, I would say that you can use certain indicators, namely awards, honors and prizes. I do not think anyone disputes that a scientist holding a Nobel prize is outstanding. (Peace and literature, maybe not so much.) If a mathematician wins the Fields medal or the Abel prize, the same.
Many societies award fellowships. To get one of those, you have to demonstrate academic excellence, and often also things like service to the society in question, outreach, teaching etc. The advantage is that the "overall package" a professor offers has already been evaluated by people who are presumably experts in the field. For instance, [here is a list of the Fellows of the International Institute of Forecasters](https://forecasters.org/about/fellows/), which I happen to be involved with. Some of the Fellows are a bit contentious, but nobody from the field would dispute their being *noted*.
Best paper awards are similar.
Of course, you need to use a little expertise in deciding whether a Best Paper Award from a journal on Beall's list is truly a mark of excellence, or whether a Fellowship from an academic society that offers little more than a one-page web presence is. But unless you go with the extremely well-known marks of excellence like the prizes I noted above, there is simply no shortcut that will avoid having at least a passing knowledge of the field.
And note that this allows you to decide whether someone is distinguished or not. It won't tell you whether A is "more distinguished" than B, like one might try to use the h-index to indicate. Which, as I argue above, is impossible.
Upvotes: 3 <issue_comment>username_7: The negative proof to the question here is far broader than academics: is there a metric for the best car? Best parent? Best programming language? Smartest person? No, because all these things have many orthogonal dimensions that simply can't be collapsed to one without unacceptable information loss. Researchers can be creative, well funded, methodical, hard working, well versed in literature, collaborative with peers/students, etc.
I concur with username_4's answer on what to do instead.
Upvotes: 2
|
2019/05/03
| 588
| 2,552
|
<issue_start>username_0: My advisor recently mentioned that the point of publishing throughout your PhD is so your committee doesn't have to read your dissertation to determine whether it's sufficient for a PhD--if you have enough publications in peer-reviewed conferences/journals, the committee can just rubber stamp the dissertation without effort. For context, the field is computer science and engineering.
Is that really the point? Obviously publishing work throughout the PhD is important, but to me this seems a dumb reason. I thought the point was to be continually learning and developing as a scholar, not to publish a series of half-baked papers so your committee can avoid reading your work several years down the line. Perhaps I am being idealistic.<issue_comment>username_1: Some thoughts on this:
* It certainly helps the committee if (parts of) the thesis were
already assessed before - along the lines of "If somebody else already said that
this is good then it can't be that bad after all".
* It also helps the student to convince the committee that this is
mature enough for graduation (same argument as above).
* Some universities allow cumulative theses, i.e., introduction + 3\* PDFs of first-authorship papers + discussion (\* the number depends on the field). This is mainly because writing a thesis of several hundred pages is something you will never need to do again, even if you continue in science. Most fields just publish short journal articles nowadays, and writing 200 pages about content that can be covered in 5 pages is not a skill you need (and nobody wants to read those 200 pages, including your committee) - sounds mean, but that is how it is.
Upvotes: 2 <issue_comment>username_2: One of the requirements in my school is that the **writing** in a thesis should be of "publishable standard". One way of quickly ticking that particular box is to have published some of it already, so in that respect, the examiner won't have to read the whole thesis specifically thinking "but is this writing publishable?". That will make their job slightly easier. Plus, peer reviewed publications can strengthen your argument. But (in the UK, where we have an oral exam, too) the examiners still have to understand the research well enough to ask intelligent questions and find any flaws. It seems false economy to eschew the full and complete thesis you have in your hands in favour of hunting down a short series of (page-limited) publications.
That said, it's also perfectly possible to pass with no previous publications.
Upvotes: 1
|
2019/05/03
| 803
| 3,473
|
<issue_start>username_0: I'm in a complicated situation. I was accepted to physician assistant school and was told I'm one of the most qualified candidates. I'm also a mother of 2 children (under age 3) and living with their father who doesn't want me to go to PA school (because I'll have more freedom). But not only is he physically and emotionally abusive, but he stopped paying for my cell phone and started throwing away my mail (from my school and everyone else) to limit my communication with anyone. (Note: I'm three months postpartum and have stayed home to care for my kids waiting for school to start, and have no income of my own). I communicated this problem to my school via email as they could not reach me by phone. Even though it was embarrassing, I gave them specifics of my circumstances. Yet I lived up to my promise to keep up with their other requirements.
Yesterday, I was shocked to learn that they rescinded my acceptance because I couldn't be reached by phone during the last month (but did respond to all correspondence via email). I explained the extenuating circumstances, and even presented evidence from police reports proving what was happening at home.
To make matters worse, my parents and I signed a lease on an apartment a block from school and agreed to help me pay for it so I could have a stable living environment with less distractions (constantly wondering if I would be forced onto the streets out of my abusive partner's apartment). We already signed a lease for the apartment.
I know that I honestly put forth my best effort to fulfill pre-matriculation requirements and to communicate with the only means I had (the computer). I couldn't go in person because I have no car. Graduating from this program was my ticket to financial freedom out of this oppressive situation, and I feel like this school is going way too far to rescind my acceptance based ONLY on the short-term loss of my phone service.
How can I get the school to remake their offer?<issue_comment>username_1: Someone already mentioned this, but the best course of action is to find a lawyer. They can discuss specifics with you, and you may have a good case for a lawsuit. At the very least, a lawyer can tell you what your options are in this situation, and maybe even just a letter from an attorney would be enough to change their minds.
Upvotes: -1 <issue_comment>username_2: * Your highest priority should be getting safe from abuse. We're not experts on that, so please contact a local authority.
[Here is a list.](https://en.wikipedia.org/wiki/List_of_domestic_violence_hotlines) You may be able to get help without using a phone.
* Becoming a physician assistant is not the only way out of your difficulties. So don't panic about your acceptance. Higher education does exist to help people like you, but there are many ways to get education.
* Contact the Director of Admissions for the Physician Assistant Program and explain your situation to them. They will want to help you. If you cannot find this person, try the Dean of Students. If you cannot find them either, look for an Ombudsperson. Your admissions problem sounds like it is caused by disorganization. If you contact the right person they will fix the problem. Since we do not know which school you are talking about, we cannot tell you exactly who to contact.
* Ask local doctors what they think about this program before you start it. Not all medical training programs are good.
Upvotes: 2
|
2019/05/03
| 532
| 2,330
|
<issue_start>username_0: So, I'm doing my undergrad in physics in India. I want to apply for research internships, but when I read professors' CVs and the research they are currently doing, I don't understand anything; it's way too complicated. I see things like "X-ray Binaries, Neutron Stars, X-ray Polarimetry, quantification of non-linear quantum correlations at very low light intensities, etc.".
It seems way beyond what I can comprehend!
So, if I have no idea what they are working on, should I apply for the research internship?<issue_comment>username_1: It's normal to feel overwhelmed with the complexity of such advanced topics at your level; the goal is to learn about them progressively. However, if you don't even feel interested in discovering the topic, then there's probably no point in applying.
In order to start evaluating your own interest about a particular topic before applying, it would be a good idea to read a bit about it. you won't understand everything of course, but this way you can get a sense of whether you like the topic or not. This will help you choose, and it will also help you get accepted in case you decide to apply, since an advisor is more likely to choose somebody who knows at least a little about their topic.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Research broadly each of the topics that you might be interested in, and use that to help you narrow down the fields you are interested in, and then apply for research internships in those fields. Most of these professors are focused on extremely specific research topics and it is generally expected that undergrads like you will struggle to grasp the concepts as opposed to the concepts that are more generally taught in the classroom setting. You just want to get a general taste of what the field might be like.
Finally, when it comes to actually applying for internships, try seeking the help of an advisor of professor at your institution that you may know as to which field might be the best suited for you. In addition to that, figure out where your interests are, and apply to as many research internships in those areas (I'd even suggest contacting some faculty behind those internships for advice to get an idea of where you stand), because the worst they can do is say no.
Upvotes: 1
|
2019/05/03
| 996
| 3,677
|
<issue_start>username_0: I am an Indian and my son is going to complete high school (+2) in March 2020.
I am planning to apply for engineering/computer science for my son in any university/college in Canada.
Now, I come to know that the application end date is January 2020 for the courses starting in September 2020.
In India, school final exams happen in March/April and results are published in May every year.
My son will get mark sheets, transfer certificate and all documents in May 2020, after completing his high school (+2) exams. When applying for a college/university in India (which opens in July), we need to submit the above-mentioned documents, because whether we are going to get the college seat or not is **based on those documents/grades/marks**.
So, do I have to wait to apply for him in Jan. 2021, for the courses starting in Sept. 2021?
In other words, do the students need to wait year to study in Canada?
**As schools in Canada end in July/August**, how are they applying to university/college?<issue_comment>username_1: The application deadline is January, yet your son won't receive grades until May. Perhaps your son can apply with his predicted grades, rather than his actual grades, and the university can make an offer conditional on him obtaining his predicted grades.
---
I'm unfamiliar with the particularities of the Canadian system and my answer is based upon systems elsewhere.
Upvotes: 1 <issue_comment>username_2: Canadian students typically apply before they have received their diploma, and acceptance will be conditional on graduation. They send their transcript of grades to date and any relevant documents. Please note that many programs have multiple admission rounds, and applying during the summer is possible in that case. Good luck!
Upvotes: 3 <issue_comment>username_3: Based on other answers, comments and my own research, I found the below.
From **University of Waterloo** website...
>
> **Sending your documents**
>
>
> Once you've applied, there are a couple of
> things you need to do before we can make an admission decision.
>
>
> You must submit your transcripts and documents (outlined below) by our
> document deadline.
>
> You can upload **unofficial** documents so that we can review your
> application and make a decision.
>
>
> **If** you receive an offer of admission, we will require **official versions** of the documents. Your **conditional** offer of admission will provide details. We cannot make a decision if
> we do not have all your documents, e.g., transcripts, English language
> test scores (if required), Admission Information Form (if required for
> your program). **If your mid-term grades are not available** by the
> deadline, please submit them **as soon as possible.**
>
>
>
and
>
> **Students from India:**
>
>
> Please arrange for your Grade 10 board examination results and Grade
> 12 **predicted** examination results to be sent to the Office of the
> Registrar.
>
>
> If you receive an offer of admission to Waterloo, we will also require
> your official Grade 12 board examination results.
>
>
>
**Related Useful links:**
<https://uwaterloo.ca/future-students/admissions/sending-your-grades>
<https://www.ontariocolleges.ca/en/apply/important-dates>
<https://uwaterloo.ca/future-students/admissions/application-deadlines>
<https://uwaterloo.ca/future-students/admissions/official-documents>
<http://www.electronicinfo.ca/deadlines>
<https://www.ontario.ca/page/study-ontario-international-students#section-0>
<https://uwaterloo.ca/future-students/admissions/admission-requirements/computer-eng/international-system/indian-system/>
Upvotes: 2
|
2019/05/03
| 308
| 1,273
|
<issue_start>username_0: How do professors find consulting projects to be a part of?<issue_comment>username_1: I think there are endless possibilities and no definitive answer. So just to name a few:
* A company rep sees a talk by the professor's group and sees that they are a good fit.
* A company rep reads an article by the professor's group and sees that they are a good fit. (Which not only includes articles in academic journals.)
* They know each other from former joint projects (like publicly funded ones with industry partners, ...).
* A former student/colleague works for the company and knows the professor is a good fit.
* They search online for professors/institutes that fit their needs.
* The professor worked for them before going into academia.
* They get referred to him by other companies/institutes/individuals.
* ...
Upvotes: 2 <issue_comment>username_2: The university engineering department has strong contacts with the industry & companies over years and has tailored degrees and other qualifications to meet industry needs for years.
Industry partnerships (companies working together) have also funded special test equipment at the university and shared the results, while research students do the work as part of their master's or PhD.
Upvotes: 1
|
2019/05/03
| 2,176
| 8,882
|
<issue_start>username_0: Our multiple choice exam was clearly too difficult.
* N questions.
* N + 10 total duration (in minutes).
* All questions have the same value (100 / N).
* Wrong answers have a penalty of 25% (25 / N).
* Only one correct answer per question.
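For concreteness, the scoring scheme described above can be sketched as follows (a minimal illustration of my own; the numbers in the usage line are made up):

```python
def grade(n_questions, n_correct, n_wrong):
    """Absolute grade under the scheme above: each question is worth
    100/N points, and each wrong answer costs 25% of that value.
    Unanswered questions (N - correct - wrong) simply score zero."""
    value = 100.0 / n_questions
    return n_correct * value - n_wrong * 0.25 * value

# e.g. 40 questions: 20 correct, 12 wrong, 8 left blank
print(grade(40, 20, 12))  # 42.5
```

This makes it easy to simulate how each proposed adjustment (removing the penalty, adding bonus points, etc.) would shift the distribution before committing to one.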
What strategies do you recommend to consider reviewing the grades?
What we thought of:
* Remove the penalty for wrong answers.
* For each student, increase the value of each correct answer.
* Add X (e.g. 10 points) to each final result.
* Add X% (e.g. 10%) to each final result.
* Remove the questions with the worst performance.
We can combine options and each one has advantages and drawbacks. Note that we use absolute grading (not grading on the curve).
What is your experience?
**Follow-up:** Thank you all for the feedback. In the end, we decided to review each question. In some cases (four in total), we decided to accept as correct some of the incorrect options that were not 100% clear (and that a significant number of students had chosen). In addition, we added a bonus of +1 point for all students. We kept the penalty for wrong answers, since removing it would have been unfair.<issue_comment>username_1: There isn't enough information here to really give good advice. You suggest the exam was too difficult. It may also be that it was invalid. It may be that some of the questions were stated in such a way as to be misleading. There are measures of the validity of such exams, by the way. They measure the validity of a question by the proportion of students who answered it incorrectly compared to how those same students did overall. Some questions are negatively correlated with overall performance.
Another issue is what your institution permits. Some have very strict rules about this. They are IMO unwarranted and counterproductive, but they may bind you.
But absent such rules, you should care more about fairness than you do about numbers. To achieve fairness you may need to drop the exam or give an alternate. You may even want to rethink your overall grading scheme.
One simple modification overall is to give course grades based on the, say, 8 best of 10 assignments/exams/whatever. Then the question of a poorly designed quiz never arises. Another way is to have quizzes every day so that no individual quiz is determinative of much of anything.
However, some of the things that you might try will leave some students unsatisfied; especially the best students who worked the hardest. You can be kind to the strugglers, but not at the expense of the superstars. The situation is worst if the system itself puts the students in competition with one another for grades. Strict curve grading is IMO immoral as it makes it into a zero sum game where I can only advance at someone else's expense. If the system doesn't permit top marks for everyone (assuming it is deserved) then it is fatally flawed.
Your purpose, I hope, is teaching, not grading. Use the exam as a teaching moment. Even have a class discussion about the questions that caused difficulty. Try to learn why people did poorly. Even permit different students to have different sorts of adjustments as needed.
I used to have fairly strict rules about such things, but the understanding was always "This is the standard and you will do no worse than X if you do Y". And I always tried to make it an advantage to learn something, even if it was after the deadline or the exam. Make people want to learn, not just maximize points.
Upvotes: 4 [selected_answer]<issue_comment>username_2: In general, any change made to the grading system will affect some people positively and others negatively. Usually, I simulate all the ideas before committing to a new grading scheme. However, in the case of multiple-choice exams, I would be more cautious, as some of your exam rules influence how people answer questions (e.g., when I was a student, to avoid the penalty I usually refrained from answering questions whose correct answer I was unsure about). If I were in your shoes, I would consider the following options (sorted by my preference):
1. Give an optional alternate exam for those who are not satisfied with their grade (this would be forbidden in some cases, in particular for the case of the final exam).
2. Shift part of this exam's weight to a future one.
Upvotes: 0 <issue_comment>username_3: As a clarification: were unanswered questions exempt from the penalty? If so, it becomes unfair to remove the penalty, as leaving questions blank might have been a conscious choice.
Removing the questions with the worst performance might also be unfair to students who actually knew how to answer them, unless these questions were really ambiguous or flawed (such that there isn't actually a right answer).
The mathematically complex, but in all other respects lazy, method is to curve all grades.
You compute the mean grade (let's say 30) and the standard deviation (let's say 10). Then you pick a new mean and a new deviation that suit your taste (let's say 55 and 15), i.e. pick a higher mean to have more students passing, and a higher deviation to have more spread between grades.
For each student (starting with the lowest grade), re-compute the grade by matching a Gaussian cumulative distribution function; in Excel ([see this reference](https://support.office.com/en-ie/article/normdist-function-126db625-c53e-4591-9a22-c9ff422d6d58)) the new grade would be given by:
```
NORM.INV( NORMDIST(old_grade,30,10,True), 55, 15)
```
Where I've already replaced the numbers with my example. Or without replacement:
```
NORM.INV( NORMDIST(old_grade,old_mean,old_deviation,True), new_mean, new_deviation)
```
Check if the lowest grade was increased or decreased by this process. You don't want it to decrease.
The idea of this method is that any test can be as difficult or as easy as the professor wants it to be (if the professor is good enough at designing tests). Hence, it's as if you wanted students to get low grades on the test, but marked them with average grades in the end. The ranking of grades remains the same. You can fail a given percentage of the students if you want (let's say you knew that getting a 30 was difficult, but a student who got 10 actually knew nothing), or you can let everyone get a passing grade.
I strongly suggest not abusing this system, and most of all not using it to lower students' grades. Do it only if a test was very poorly designed.
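Outside Excel, the same Gaussian re-mapping can be sketched in Python using only the standard library (`statistics.NormalDist`); the means and deviations below are the illustrative numbers from above, not a recommendation:

```python
from statistics import NormalDist

def curve(old_grade, old_mean, old_sd, new_mean, new_sd):
    """Map a grade through the old distribution's CDF, then through the
    inverse CDF of the target distribution -- the same idea as the
    NORM.INV(NORMDIST(...)) formula above. Order of grades is preserved."""
    p = NormalDist(old_mean, old_sd).cdf(old_grade)
    return NormalDist(new_mean, new_sd).inv_cdf(p)

# A grade equal to the old mean maps exactly to the new mean:
print(round(curve(30, 30, 10, 55, 15), 1))  # 55.0
```

Running each student's raw grade through `curve` reproduces the Excel result, which makes it easy to check beforehand that the lowest grade does not decrease.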
Upvotes: 0 <issue_comment>username_4: If you simply reduce the total points, say grading as a percentage of 80 rather than out of 100, it essentially converts 20 points' worth of questions into potential bonus points. Personally I like this better than free points, as it increasingly rewards students who do better, rather than being a freebie for all, which compresses the scores together.
Dropping questions completely is a bad idea as others noted. But you can convert them to bonus questions. If questions are ambiguous though, you have little choice but to accept multiple answers. That's a separate issue.
As for my experience, if you seem to be too flexible on grading, you may encourage a lot of begging and whining. Pick how you want to curve them and present this as the law.
Upvotes: 2 <issue_comment>username_5: >
> What we thought of:
>
>
> * (1) Remove the penalty for wrong answers.
> * (2) For each student, increase the value of each correct answer.
> * (3) Add X (e.g. 10 points) to each final result.
> * (4) Add X% (e.g. 10%) to each final result.
> * (5) Remove the questions with worst performance.
>
>
>
In my view, you must **avoid any alteration that changes the relative value of questions or answers** from the marks listed on the exam. This rules out options (1), (2) and (5) in the above list. If you were to use any of these options, it would disadvantage students who did well on the questions whose relative marks are reduced, or students who declined to answer a question based on the relative penalty for a wrong answer compared to a correct answer. Such alterations are unfair to those students and would be grounds for legitimate complaint and appeal of marks.
The only fair way to "regrade" an exam that was excessively difficult is to scale all the marks up with a simple positive affine scaling of the marks. This could entail a flat increase in the mark of each student, or a percentage increase, or any transformation of marks according to a positive affine function. This method of scaling preserves the relative value of all questions and answers in the exam, and preserves the relative marks of the students.
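As an illustration of such a scaling (the coefficients below are made-up defaults, not a recommendation), a positive affine map keeps both the ranking of students and the relative value of every question intact:

```python
def rescale(mark, a=1.5, b=10):
    """Positive affine scaling: new = a*mark + b with a > 0.
    Strictly increasing, so relative standing is unchanged."""
    assert a > 0, "slope must be positive to preserve the ranking"
    return a * mark + b

marks = [30, 45, 60]
print([rescale(m) for m in marks])  # [55.0, 77.5, 100.0]
```

A flat increase is the special case a = 1, and a percentage increase is the special case b = 0; either way, no student's position relative to another changes.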
Upvotes: 3 <issue_comment>username_6: You're overanalyzing this. You just need to curve the results. Decide what you think was an A performance (perhaps a really low score if the exam was that tough) and what was a C and then interpolate. Problem solved.
Upvotes: 3
|
2019/05/04
| 831
| 3,205
|
<issue_start>username_0: Electromagnetism by E. M. Purcell is a classic book on electromagnetism. Luckily, in my college library I found all three editions of this book. After many months of study, I found that the first edition, published in 1965, had many concepts, practice problems and explanations which were omitted from the second and third editions. The author explains in the preface to the second edition that this was necessary, as these subtle points were either not suitable for a first reading or were presented in too difficult a way, and hence were modified or omitted.
However, I realize that they are invaluable for someone who has already done a course and wants to ponder new, tricky arguments. But I was not able to find any PDF of the first edition on the internet, and the copies in my library are already in a torn condition and may be lost within a few years. In this case, what can I do to get this edition preserved so that people from my college may continue to benefit from it?
The first edition of this book was funded by the National Science Foundation, and I read on [Wikipedia](https://en.wikipedia.org/wiki/Berkeley_Physics_Course#History) that these books had some copyright relaxation; however, I couldn't fully understand it. In this case, is it allowed to scan the book and circulate a soft copy myself (which I have thought of doing), or should I request that the library do so?
PS: This is the copyright page from the first edition:
[](https://i.stack.imgur.com/qgaiE.jpg)<issue_comment>username_1: Discuss with the librarian - they may know what is available - ie it could be scanned while the book is refurbished and recovered.
And, take the opportunity to explain to the librarian **why** it is worth saving - you know why as a specialist in the subject...
Upvotes: 6 <issue_comment>username_2: Someone (<NAME>) has [actually tried doing this](https://github.com/bcrowell/purcell). Apparently the copyright status is not quite clear:
>
> As of March 4, 2014, the project is on hold because of the cloudy
> legal situation. The copyright page of the 1965 edition says to obtain
> a royalty-free license from EDC, which still exists. EDC, however, no
> longer owns the copyright to the 1965 edition. That copyright has
> changed hands several times, and now belongs to <NAME>'s sons,
> Dennis and <NAME>. Cambridge University Press has refused to
> tell me how to contact them, but has said they would pass on my
> request to them.
>
>
>
Regardless of copyright, I would recommend checking out Library Genesis. It is not legal, but it actually has a scanned version of the 1st edition.
Upvotes: 4 <issue_comment>username_3: *Possibly* -- based on author name, book title, the publishers and year of publication provided by you -- there is an electronic copy of this book at the internet archive ([here](https://archive.org/details/electricitymagne00purc)) in (encrypted) \*.daisy, \*.epub, and \*.pdf format in the section of *books to borrow*. You need Adobe Digital Editions as management software since each loan is for fourteen days, and "the library card" of archive.org.
It is a scan created with ABBYY FineReader 8.0, so it is OCRed, too.
Upvotes: 2
|
2019/05/04
| 1,347
| 5,691
|
<issue_start>username_0: I often hear the adage that graduate school never actually trains you to be a professor. I would be curious to know what were some of the biggest hurdles assistant professors had to overcome when they first obtained their jobs. Were these mainly interpersonal and managerial skills? If so how did you sharpen your skills in those areas?
Are professors as overworked as graduate students? It is a common fact that graduate students frequently suffer from mental illness and depression. Is this the same for professors? If professors are even busier than graduate students, how do they avoid burning out? How is the stress different?<issue_comment>username_1: Your question can be answered partly by the different time sharing:
[](https://i.stack.imgur.com/Hzwwm.png)
[link](https://www.timeshighereducation.com/blog/if-you-love-research-academia-may-not-be-you#survey-answer)
Other bigger changes for a professor:
Much stronger independence and responsibility: you are expected to be successful while balancing all of this correctly, with all its implications. You are an academic role model within and outside of your group, leading your team while leaving room for creativity and self-development for your group members. This is sometimes still difficult even for full professors, some of whom apparently suffer under this pressure or try to bypass it by shifting responsibility to co-workers, mismanaging the division of labour, or committing scientific misconduct... If everyone balanced this correctly, probably far fewer questions would be asked on Academia.SE!
Though, in my opinion, the most important, least obvious and perhaps most debated change is **that you are expected to give back to society and the public**. It is not as if you have won the job lottery and can lean back, as in many industry or civil-service positions, and stick to your 40-hour contract. There is a reason why most professors work many more hours a week: it is a **lifelong commitment to the job, its duties and its privileges**. It is therefore an important life decision that one should make before starting to pursue an academic career aiming at a professorship.
On the other side, coming up with, defining and realizing good, competitive research ideas over decades can be mentally exhausting. Also, the sabbatical has nowadays become an off-academia phenomenon; in academia it typically meant having real time for finding and outlining new research ideas and questions, apart from the other time-consuming tasks shown in the graphic above.
Soft skills are necessary and helpful, as in every other job, but not sufficient. Many universities offer courses specially designed for students and postdocs, and obtaining such certificates is helpful (project management, academic English, ...).
Professors are typically not taught to teach; they learn this themselves, and in my opinion every student has experienced the result of this, for better or worse. But the evaluation of an applicant's teaching is often a crucial criterion for becoming a full professor. Therefore, a proficient language level (for non-native speakers) and good feedback/evaluations from students are often necessary, if the professor is not outstanding in acquired funding and publication track record.
This article, ["A year in the life of a new professor"](https://cen.acs.org/sections/year-in-the-life-of-a-new-professor.html), on jumping from the lab to the classroom, is also a good read to get an up-to-date impression of young professors.
Disclaimer: I'm not a professor, but a postdoc perhaps on the road to becoming one...
Upvotes: 3 [selected_answer]<issue_comment>username_2: I would say the biggest change is the necessity of multi-tasking. As a Ph.D. student or even a post-doc, your responsibilities are focused and limited: you're working on a very narrow research question, and your teaching load, if any, is light. However, when you are a professor, you have numerous responsibilities with considerable weight.
1. Teaching: Your department might require you to develop and teach a new course. It could be time-consuming to develop a new teaching plan as well as notes. In addition, you might have to teach multiple courses per semester (depending on the discipline, usually between two and three courses). This makes the exam and assignment grading as well as holding office hours a fixed part of your schedule.
2. Research: Contrary to your Ph.D. dissertation, as a professor you are usually involved in multiple projects. For example, you might advise or co-advise multiple students with different thesis topics. This makes keeping up with every one of them (e.g., advising, revising reports/papers, helping with experiments, etc.) a time-consuming task. Also, the responsibility of managing research projects with industry and/or academic partners is very important.
3. Finding and securing funding and research grants: When you are a student, somebody else is securing the necessary funding to pay for your salary. When you are a professor, you are that person. This could be time-consuming and hard, especially if you have no prior experience with the necessary processes.
4. Service: Service includes many tasks such as reviewing journal/conference papers, organizing events (e.g., seminars, workshops, and conferences), examining theses/dissertations, etc.
5. Administration: A huge amount of your time, as a professor, will be spent in an endless series of meetings (such as group, department and faculty meeting). In addition, taking an administrative position would be a huge game changer, which would definitely affect your teaching and research plans.
Upvotes: 2
|
2019/05/04
| 1,475
| 6,259
|
<issue_start>username_0: I submitted an article to a journal in the field of CS, and it was returned with the suggestion that it pass through a professional editor or proofreading service (because my mother language is not English). So far, I have run the article through some paid online tools such as Grammarly and Paper Rater, checked the suggestions, and applied the corrections made by these programs (being careful to check them first against some English grammar books). After that, I sent the new version to a former professor from my master's studies, and he returned the paper with some corrections (he is a native English speaker, but he told me he is not a professional editor, so there could still be some mistakes). With all these changes I submitted the paper again; by the way, it was accepted by three reviewers, but one of them mentioned that the English needed more work. So I waited, and again it was rejected by the editor, who pointed out that the English still needs to be corrected.
So what should I do? I have found some proofreading/editing services online, but how can I know if they are good? I have not seen reviews of any of them, and the prices are not cheap either (ranging from USD 350 to even USD 700). The services that I found were:
Enago, <https://www.enago.com>
Elsevier, <https://webshop.elsevier.com/languageservices/languageediting/pages/howdoesitwork.html>
American Journal Experts, <https://secure.aje.com/en/prices>
Wiley, <https://secure.wileyeditingservices.com/en/prices/quote/new> (this one's pricing form looks suspiciously identical to AJE's)
Any advice on what to do in a situation like this? I would not like to submit to another journal, because quite a long time has passed since this research started.<issue_comment>username_1: I don't have any advice about who to use, but I do suggest that you look to the long term. Try to find a person or a service that you will be able to work with on future papers, not just the current one. The person will get used to you and how you write, so the task will go quicker and maybe cheaper in the future.
Don't look for the cheapest service, I think. Your English skills will improve over time, but you will probably have a better experience if you can establish a relationship. You might even get writing tips from such an editor if you ask, after the person gets some familiarity with how you naturally write.
Upvotes: 3 <issue_comment>username_2: Find a friend: a native speaker of English who is familiar with your subject. Failing that, there is a wonderful UK organisation, the Society for Editors and Proofreaders, which includes amongst its members people who are both skilled editors and have specific academic knowledge.
I have used them, but I am not personally involved with them: this advice is disinterested!
Upvotes: 2 <issue_comment>username_3: I freelanced for some of these companies in the past. As far as I can tell, they do what they say they do - their editors are experienced in academic English writing, they will edit your manuscript until it passes the English check, and they will edit any revisions as well.
Hence the real question is whether you think the service is worth USD 350-700. Only you can answer that question, unfortunately.
Upvotes: 2 <issue_comment>username_4: Do you have any university in your area which a) employs native speakers of English or b) has exchange students who are native speakers (a + b: maybe even in a related field) or c) has very good English professors (even if they are not native speakers; you could talk to students to find out who is good at their job)?
Depending on where you live, there might also be local translation companies which employ native speakers or local freelancers which are native speakers of English.
You might benefit more from this type of relation, since you get to know the person and you could use their services again in the future.
You might also ask the journal editors if they have any suggestions for this. Maybe they have collaborated in the past and had good results with a specific company/freelancer.
Upvotes: 0 <issue_comment>username_5: [Declaration of interest: I am a freelance proof-reader operating as a sole trader and who does not undertake any proof-reading work for an agency or intermediary.]
My advice would be to engage a freelance proof-reader with some knowledge of your field (he/she need not be a specialist -- sometimes, the perspective of a not-quite-specialist-but-still-knowledgable person can be very valuable). Going through an independent freelancer is likely to be cheaper and better than using an agency (because there would be no commission and the freelancer would care more, because he/she does not have as many customers as one of the big agencies, and thus has more to lose if he/she does a bad job).
The difficulty, of course, is finding a suitable freelancer, since they tend not to be as discoverable as the big agencies. Another poster has already mentioned a relevant professional association in the UK. Other steps may include:
* send a circular to all your colleagues asking for recommendations -- there is a good chance that one of them would have experience with engaging a good freelancer who would also fulfil your needs (as a freelance proof-reader myself, a lot of my work arises from a personal recommendation, despite the fact that I have my own website);
* check notice-boards in academic departments and libraries, in case a suitable freelancer might have pinned an advert (personally, I do not advertise my services in this way, because I have enough work on my plate not to be desperate for more, but many people do);
* some libraries and societies maintain listings of freelancers on a website or can send such a list to you on request (relevant keywords may include "proof-reader", "editor", "editorial assistance", "translator", "typesetter", or even "proxy researcher" [somebody who visits a library, archive, or conference and takes notes on your behalf -- useful if you need to consult a unique manuscript on another continent and do not have the time/money/inclination/visa to make a research trip yourself]).
Upvotes: 2
|
2019/05/05
| 807
| 3,493
|
<issue_start>username_0: Given that PhD students and professors often need different skill sets to do their jobs, are there instances of people who had good but unremarkable careers as PhD students, but much more success as professors? If this describes you, can you describe why this might have been the case? I personally feel that my technical skills are average compared to my peers, but my writing, conceptualization and framing skills are above average. My own PhD journey thus far is fine, but not remarkable. Assuming I manage to find a faculty position, do you think I might be able to have a more accomplished career than I did as a student? How might I do this?<issue_comment>username_1: Yes, of course.
Some PhDs are really good at research, or they simply prefer research.
Others, however, like to pass on knowledge, and when a student "gets" something they are pleased for them.
So being able to explain concepts to others, and remembering or understanding **why** you found certain concepts difficult and using that experience to help other students, can be very rewarding.
The downside is that it comes with things like grading... But having clear marking schemes helps with that, both when you mark and when students want feedback.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Some people are adept at helping others understand and internalize study material, and at having new ideas, even if they themselves may not have many "original ideas" of their own. There is a big need for teaching professors as well as research professors. Sadly, these days it seems many professors end up being neither: just vessels for acquiring grants and doing administration...
Upvotes: 3 <issue_comment>username_3: It depends on what your definition of a 'professor' is. There are some professors who got to their position based on governance, teaching and/or research. In terms of research, I know of big-name professors who operate more like business people, so an outsider would think they are excellent researchers.
Upvotes: 2 <issue_comment>username_4: As I write this, the other answers have missed one essential point, though they give good advice. I'll note that your skill set isn't fixed at birth: you are what you make yourself.
Your early work is just that: the beginning. It may be good or not and you can do better or worse. That is up to you to a large extent. It matters how you carry on your career. How much you are willing to work with and learn from others can be a big factor. How relentlessly you follow your research ideas is another. How much effort you put in to the other aspects of being a professor. Are you willing to learn how to be a scholarly teacher? Will you do what it takes? Will you always give the same respect that you expect from others?
If you stay in academia, hopefully you get a good position and achieve tenure. Some people take tenure as an opportunity to get lazy (relatively speaking). Others take it as an opportunity to take on higher risk - higher reward projects. That is up to you.
There are lots of things. Some of them you have already practiced and others not so much.
But also note that to be an accomplished professor you don't need to be in the top ranks in every measure. You can focus on one of the key areas as long as you don't neglect the others. You can have the Dean gasp when you announce your retirement forty years after you earned your doctorate. The path is long and winding.
Upvotes: 2
|
2019/05/05
| 3,474
| 13,904
|
<issue_start>username_0: Some Ph.D. programs [charge tuition fees](https://www.maastrichtuniversity.nl/research/phd/phd-training-programmes). Others are [competitive](https://www.maastrichtuniversity.nl/research/phd/paid-phd-positions).
Does a [Ph.D. done by paying tuition fees](https://academia.stackexchange.com/q/64317/100406) carry the same weight as a Ph.D. done on a stipend when it comes to an academic career?
When does it really make any sense to pursue a Ph.D. by paying tuition fees?<issue_comment>username_1: In the US, all schools that I know of charge tuition and fees. So there is no dichotomy of programs in this sense. Though some schools do not allow students to attend their doctoral program without having funding to pay that tuition/fees, either by a fellowship or from their advisor's grants, or perhaps via industry sponsor. I suspect this is because the faculty don't want to promise any time to advise students when they aren't getting funded for their own lab as part of the deal. The funding source, if there is one, then pays the tuition/fees. Beyond that, "self-funded" students are generally the same as those with fellowships; they can choose their project more freely and are not committing part of their time to being lowly research or teaching assistants. They may well have an industry job instead.
One drawback, of course, is that a student is not a great judge of what makes a good direction, whereas the advisor's dogged pursuit of grants tends to pull their own projects toward more important problems, which are probably better for the student's career. Choosing one's own direction also requires a patient advisor who is willing to continue advising while the student runs off and chases their own interests more freely.
At the opposite extreme are advisors who treat self- (or externally-) funded students the same as if the advisor were providing the funding anyway, i.e. they expect to set tasks and *supervise* the student rather than simply *advise* them. Then you're paying out of pocket to be a research assistant, which is certainly unfair.
As for whether there is some kind of ranking in the mind of hiring committees or firms down the road regarding how you were funded: nope. A PhD is a PhD. It's primarily a hazing process anyway. The self-funded student will just be missing that line in the CV about an assistantship, which many do not include anyway since it is pre-doctoral.
Upvotes: 0 <issue_comment>username_2: Nobody will ever know. In fact, nobody will likely ever ask you about this.
At the end of the day, for an academic career, what matters is that you have a PhD and have shown an ability to do independent research. Who *paid* for the PhD never enters these sorts of considerations: You may have been funded on grants and only done research, or you may have been a teaching assistant to get a salary and have tuition paid, or you may have paid for the tuition yourself. It really doesn't matter, and nobody will care. What people do care about are your *qualifications*.
Upvotes: 4 <issue_comment>username_3: UK perspective
--------------
Unfortunately, self-funded PhD students are often regarded as *ipso facto* less suitable for an academic career than PhD students funded by a grant.
The reason for this is that, rightly or wrongly, an important criterion for many academic jobs **involving research** is "grant capture" or "research income". For an early-career academic applying for a job, grants obtained for/during PhD studies are valuable in demonstrating to a hiring panel that the candidate has a track-record of obtaining grants.
On the other hand, self-funded PhD students in the UK tend to end up with more teaching work, and **it is arguable that having lots of teaching experience is more relevant than "research income" for an early-career academic, since a lot of early-career academic jobs in the UK are "teaching-only"**. Many universities are wont to informally discriminate in favour of self-funded students when it comes to allocating teaching work. (By the way, this practice is probably illegal, and is morally wrong: funded PhD students still need lots of teaching experience if they are to be taken seriously by the academic profession these days, so I would argue that teaching work should be allocated to PhD students solely on the basis of who would do the best job for a given topic/module/course. Declaration of interest: I am a fully funded PhD student, permitted to do up to six hours' teaching per week under the terms of my funding, but I have been given more like six hours per term, despite being far better qualified for many topics than the self-funded students who were allocated teaching work on those topics.)
Upvotes: 0 <issue_comment>username_4: Agree with the other answers saying no one will ever know how you funded your PhD. But, I don't see anyone addressing this:
>
> When does it really make any sense to pursue a Ph.D. by paying tuition fees?
>
>
>
Blunt answer: very rarely.
* If you already have a job in industry doing research and need a PhD to progress, it could make sense.
* Ditto for certain, extremely competitive institutions (e.g., Oxford).
* Perhaps in some countries, the financial gap between funded and unfunded positions is less wide.
But I usually don't recommend taking an unfunded position, especially in the US, because:
* Professorships and similar positions in industry are incredibly competitive. If you're not currently "good enough" to get any of the ~thousand funded PhD slots, you should be realistic about your odds of eventually getting one of the ~dozen faculty jobs in your field that are open each year. Of course, it is not impossible, but I would strongly consider other options with a better risk/reward ratio. It could even make more sense to spend a year or two strengthening your application and then reapply for the funded position.
* Fiveish years of tuition fees + living expenses is very expensive. Even with a high-paying job, it can be difficult to pay back that level of debt, particularly since many industry jobs (and quite a few faculty jobs) tend to be in a high cost-of-living area.
Upvotes: 3 <issue_comment>username_5: France
------
In France, paying tuition fees is not the same as paying for your PhD. This is just a technicality in terminology, but I think it's good to have this information for completeness.
PhD students in France are in a really odd sort of limbo -- the French academic system is organised into *institutes* which work closely with *Universities*. The institutes provide the Professors for the courses at the University, and in exchange the University acts as a primary pool for getting students in for summer projects, internships and finally PhD programmes, and also serves as a host institution for PhD students. This means that a *PhD candidate* in France is, at the same time, an *employee* of the institute and a *student* at the University.
A funded PhD in France therefore means the following:
* *An external body* (e.g. the French government, a foreign government, or a company) will sponsor the PhD, ensuring the **funds for 3 years of** (gross) **salary, publication and travel costs**, i.e. they cover the employment of the PhD candidate and the expenses one is expected to incur during this time.
I'm not sure if equipment is something provided through this funding, or by the host institute. I think there's a good chance this funding might cover any visa or other immigration expenses, or the institute might, if you ask nicely.
* *The PhD candidate* **pays for yearly student tuition fees** to the host University themselves. This is often a hidden cost which nobody remembers to warn you about.
**Fortunately, this is a minimal cost: when I was doing my PhD, it was around €400 a year.** Not a pleasant surprise when you're on a measly PhD salary, but definitely affordable, and unfortunately **unavoidable**.
**Summary:** Obtaining funding for a PhD programme in France is *very competitive*. Proceeding without funding would mean you are doing a full-time job for 3 years for no compensation, and since not even your publication costs or travel costs would be covered, I doubt any advisor would accept such a candidate. On the other hand, *all the PhD students in France pay their own student tuition fees, which are however very low and affordable*.
Upvotes: 3 <issue_comment>username_6: The answers above say that no one will ever know how you funded your PhD. I somehow disagree with it. At least in social science where self-funded PhDs are not so uncommon and PhD programs often ask if you anticipate you can attend the program without any financial aids provided, the prospective hiring committees can learn how your studies was funded from your CVs. It is reasonable that you don't mention any information about your funding source if you are self-funded, but keep in mind that other PhD candidates on the job market would mention theirs if they received a grant. So if you don't mention this, it implies you don't have any grant. THEY WILL KNOW.
A tip: find some CVs of PhD candidates in your subject/field to see how frequently they clearly state it.
Besides, how good your dissertation is significantly outweighs how your PhD was funded. This is of course true, but not for everyone. The rejection rates for a postdoc position in a reputable research group or for a faculty position are quite high nowadays. Once you're on a shortlist, you face a cohort of candidates whose profiles are almost equally interesting to the hiring committee. Various factors play a role in the final decision, and usually the committee will deliberate unless someone is obviously much better than the others. The grant you received during your PhD is a signal. Its absence tells them either that your past merits do not fully match the reputation of your university (meaning that, had you requested a fellowship, you would likely have been admitted not to the university you're at but to a much less reputable one), OR that you're not that good among your peers (if you again fail to secure funding in the later years of your PhD, unless you can show there is a convention in your program for PhD students to be self-funded). It's a negative signal even when you and the other candidates are otherwise equally matched.
My conclusion: unless you are sure you can do a great job and achieve a good publication record under your advisor's supervision, OR the university where you would self-fund is extremely prestigious (so that even if your work there is mediocre, at least the name of the university can send you to a well-paid position in industry), say Oxford or Cambridge as discussed above, self-funding is never recommended.
Upvotes: 3 [selected_answer]<issue_comment>username_7: All PhD programmes charge tuition fees; the question is not whether they are charged, but who pays. In the "competitive" cases you posted above, a third-party research funder will be paying the fees.
You'll also note that to get one of the funded places "you must either be eligible for employment in the Netherlands or obtain a knowledge worker visa to qualify for a paid PhD position". This goes for the UK as well - a student must be British, or an EU citizen usually resident in the UK (and this will stop at the end of this year). One of the most common reasons for people to do self-funded PhDs, at least in the sciences, is that they are not eligible for a funded position, usually due to nationality.
Thus, being a self-funded student brings no judgement - all universities accept both funded and unfunded students, and many unfunded students are unfunded simply because they were barred from applying for funded positions. In the end, a CV will not say either way whether your PhD was funded or unfunded.
A 4-year self-funded PhD will set you back about 120,000-150,000 euros, including about 15,000 a year in living expenses.
Why do you want a PhD? I can think of three reasons:
1. Financial. This is subtle. For example, in biology, people with PhDs earn *less* than those with just an undergrad degree. But that's because most undergrad degree holders don't stay in biology. Within biology (industrial, commercial or academic), those with PhDs earn much more than those without. A PhD will cost you around 150,000 euros. Whether you will recoup this over the length of a career is something only someone in your industry can tell you.
2. You want to be an academic. Only about 2-3% of students, funded or unfunded, will make it as far as a permanent faculty-level position. Now, if you are getting paid to do a PhD, you might think it's worth rolling the dice. But if you are paying for the privilege of only having a 3% chance of a job? Only you can say. Also bear in mind that if you were eligible for a paid position and didn't get one, you must ask yourself why. There may be good reasons. Perhaps admissions tutors don't think highly of your school. Or perhaps you are in a minority that is discriminated against. But it is also possible, in the kindest way, knowing nothing about you, that you are not in the top part of your class. I'm not saying other people will imply this about you because you're self-funded (people won't know, see above), but it might nevertheless be the case.
3. Because you love the subject, you've got the money in savings, and want something worthwhile to spend it on. Only you can say if this is "worth it". A PhD may well make you cry; it will stress you out. You may come to hate your subject and everyone in it. A large number of students experience mental health problems during a PhD. But it can also be the most special and meaningful experience in and of itself (as opposed to a means to an end): 3-4 years to do something for no other reason than that *you* think it is cool/interesting/important.
Upvotes: 0
2019/05/06
<issue_start>username_0: [Research shows that studying in groups help students learn more effectively](https://source.wustl.edu/2006/07/discovering-why-study-groups-are-more-effective/).
If this is the case, why don't academics also conduct research in groups? There are already research groups, but in all the cases I've seen, each individual member of the research group works on a separate research question. If the current question is "reproduce the results of this paper", usually one individual member works through the paper alone (i.e. conducts research alone) and reports the results to the others.
It seems likely to me that the benefits of studying in groups should also happen when conducting research in groups. With someone else that's intimately familiar with what you're doing, you can understand papers better, catch coding bugs quicker, and so on. The sum is greater than the parts, and one can achieve more in less time. Nonetheless, I don't think I've ever seen a single PhD project with two assigned PhD students.
What is the rationale for not conducting research in groups?
**Edit:** I'm referring to the kind of group work that undergraduates might do when working together: partners are next to each other while working, and frequently bounce questions off each other. Computer programs are written together such that both partners know exactly what each line is doing, experiments are conducted with two pairs of hands (or more), and so on.
My experience is that at research level, this doesn't happen. Computer programs can be written by many people, but usually each person is responsible for an individual section. Others know what the program is doing at a high level, but are not familiar with the nitty-gritty of other parts of the code, such as what each line is doing. Expressions like "I see what the code is doing, but I would not have written it like that" are common. Similarly, the dirty work of experiments (e.g. aligning mirrors in an optics experiment) is carried out by one person. Others might know what the experiment is trying to do, but don't get personally involved unless the main experimenter gets stuck (or there is some kind of spectacular discovery).<issue_comment>username_1: As the comments indicate, it is quite common for researchers to cooperate, but the extent to which that happens differs a lot between sub-disciplines.
However, I would argue that your analogy is false. Studying is all about acquiring existing knowledge or skills, while research is all about creating new information. What works well for one type of task does not necessarily work well for another. So maybe there is value in cooperation in research, or maybe not, but the study you quote does not help us answer that.
---
**Reaction to Edit of question**
When I collaborate I like to find someone that complements my abilities. This way collaboration in research tends to favor specialization. There can be some value to let two persons do the same task in terms of quality control, but that is also very expensive. In my case the expected benefit has always been nowhere near the cost, so I have never done that.
Upvotes: 6 [selected_answer]<issue_comment>username_2: I've no idea what field you're in but most research in most areas of STEM subjects *is* done by groups of researchers so, in the generality that it's stated, the question is based on a completely false premise.
Upvotes: 3 <issue_comment>username_3: The benefit of teamwork derives from division of labor, knowledge, and viewpoint. Unlike a team of draft horses, an academic or work team should not be evenly matched. <NAME> told us 300 years ago that specialization would make the process of pin-making fifty times more efficient. A team should divide the work into daytime and nighttime data collection, writing the database code, writing the front-end code, and a specialist to write the grant, create the figures, and edit the report. They should gather together to help each other over problem hurdles and suggest better approaches.
The most innovative of teams may well be pairs: Hewlett and Packard, Page and Brin, and Jobs and Wozniak. But a team of two can't do everything in a finite period of time. So you need to add people. Those people are each experts in their own field; that gives economies of scale. Empirical evidence shows that economies stop rising after team size reaches 250. Best to put the next set of people under a new roof. It's tough to recognize more than 250 faces. Getting cc'ed by more than 250 takes more time than it's worth.
As to the practice of dividing a class into teams of four, I suspect that it reduces the effort a prof needs to grade the projects. Speaking as a hiring manager and behavioral interviewer, I found questions about group work rich ground. I was instructed to listen for certain evidence in the response to those questions. Lots of time I heard that <NAME> didn't participate in the group but to eat the pizza and get the grade. Fine. But if from the interviewee grousing was all I heard, or that she took up the slack, a black mark went on the paper. Taking up the slack is not what team work is all about.
As to studying in groups, I think it fills the same purpose as recitations -- understanding that your classmates are encountering the same stumbling block you are. The group allows communication in terms common to the group, even if the prof doesn't use the same analogies and examples. If the group dynamic is such that subject material is divvied up, with an expert on each, that's brilliant. There is no better way to learn something than to have to explain it to another.
Upvotes: 2 <issue_comment>username_4: Please check [the author list of this paper](https://journals.aps.org/prl/pdf/10.1103/PhysRevLett.116.061102), which runs over nearly 3 journal pages towards the end of the paper, as an illustrative and representative example that people do work in groups. Certainly not everyone duplicates the work of others - this is inefficient and would require unimaginable training to get the proper expertise to get everyone to be proficient on every detail of such an experiment - but the LIGO collaboration can achieve outstanding results only by leveraging the expertise of hundreds of different persons.
Upvotes: 0
2019/05/06
<issue_start>username_0: I want to write a blog post which builds upon an author's paper.
He uses formal notation and academic language to describe his findings. I want to rewrite the author's examples in simpler, more popular programming languages, and use more casual language to explain the same concepts. I don't know if this is a relevant detail, but this is also not the only goal of the blog post; it covers other related subjects as well.
As someone outside academia, **I wonder if this is considered OK and how can I properly credit the author**.
Also, **is using a standard citation format and clearly indicating the source of these ideas enough**?<issue_comment>username_1: You have two issues to deal with. If you properly cite and attribute the ideas to the original author, you avoid plagiarism issues. If you don't copy too much, but properly quote, from the paper then you avoid copyright issues.
But, ideas are free to use and to adapt. Simplifying what you find in a paper is a good thing to do.
One doesn't obtain *ownership* of ideas by writing a paper.
I therefore see no ethical issue at all with what you suggest.
Upvotes: 6 [selected_answer]<issue_comment>username_2: >
> I want to write a blog post which builds upon an author's paper...I want to rewrite the author's examples in simple and more popular programming languages, and use a more casual language...
>
>
>
Such activities are certainly okay and should be encouraged, you just need to properly attribute ideas to the author, which you can do with a standard citation.
You might like to consider co-authoring the blog post with the author, possibly publishing a technical report, which is perhaps more likely to be used by the wider research community.
Upvotes: 2 <issue_comment>username_3: As an example, you may be interested in [the morning paper](https://blog.acolyer.org), which does exactly that daily for one paper in systems / software engineering / programming languages / AI. Notably, from the entire setup it is *very* clear that the author of the blog is not the one who originally did the research. The original paper and authors are very clearly named, right at the top of each entry. The blog uses direct quotations fairly freely, but always in a distinct style that makes it obvious which quotes come directly from the paper, and what is the blog author's own commentary.
Even if your setup is a bit different, I think you can learn a lot by incorporating similar principles:
* Put a reference to the original authors and paper very prominently. I would be more explicit that a standard paper-style reference or link - this can get overlooked all too easily by a quick reader on the web.
* Visually distinguish what comes from the original authors from your own text and thoughts.
* Ideally, I would try to pack everything that you adapted from the original paper in subsection(s) of their own, with a big disclaimer at the top.
* Contrary, if you extend the paper, *also* make sure that it is clear when specific thoughts, arguments, or extensions are your own work. You don't want to be accused of putting words into the original authors mouth.
Upvotes: 5 <issue_comment>username_4: To some extent I have taken the same approach with two articles I wrote more or less recently: <https://ulvgard.se>
I try to make it clear that I only repeat results from the paper and add my own interpretation where I feel the paper could be more elaborate. This is of course different for you since you want to add on top of existing work. Take a look and see if there are some elements of my style that you want to use.
Working with papers this way is a really nice way to internalize the content and forces you to process all the tricky details where the paper is less clear yourself.
Best of luck!
Upvotes: 1
2019/05/06
<issue_start>username_0: I'm attending the thesis defense of one of my undergrad mentors this week. I owe almost everything about where I am today to him. I want to get him a gift, but I'm not sure what. He owns his own business, so a gift card seems pointless when he won't be short of money. I have never seen him eat or drink anything, EVER, so I'm not sure what kind of candy or food would be appropriate. He's a guy, so flowers are out.
We did have a shared love of a certain book series, but I already got him one of those for a Christmas gift and I feel like it might be weird to get another one again.
His thesis is something pretty abstract—even the name of the project has no real-world analogue. So a "cute" gift related to his thesis would be tough to come up with.
Also I'm a girl, if that matters, and I don't want to give him anything that would be creepy/weird/over-the-top with the gender difference taken into account.<issue_comment>username_1: Ask him for advice about buying wine. If you actually *get* advice, use it to select a nice bottle for him. If demurs about the wine advice, make a gift to the university in his name. They'll send a nice card and won't mention the amount.
Upvotes: 2 <issue_comment>username_2: It is fairly typical for a student to give a thank you gift to their mentor when finishing. These gifts cause minor issues:
[Is it ethical to accept small gifts from students?](https://academia.stackexchange.com/questions/23884/is-it-ethical-to-accept-small-gifts-from-students)
[Is it appropriate to buy a "thank you" gift for a PhD supervisor?](https://academia.stackexchange.com/questions/28149/is-it-appropriate-to-buy-a-thank-you-gift-for-a-phd-supervisor)
but you are not talking about a thank you gift, but rather more of a congratulations gift (e.g., like a wedding or baby gift). These are also awkward and the general advice is a group gift
[Advisor's wife is having a baby, should we be getting him something?](https://academia.stackexchange.com/questions/14087/advisors-wife-is-having-a-baby-should-we-be-getting-him-something)
[Advisor getting married, should grad students chip in for gift?](https://academia.stackexchange.com/questions/5694/advisor-getting-married-should-grad-students-chip-in-for-gift)
Thinking about this in light of you having given a Christmas gift
[Is it appropriate to give university lecturers Christmas cards?](https://academia.stackexchange.com/questions/32298/is-it-appropriate-to-give-university-lecturers-christmas-cards)
makes me think this is all very weird. I would not want a congratulatory gift (especially one of any monetary value) from an individual student. If he is a mentor of a number of students, or works in a lab with other doctoral students, maybe you can take part in a group gift/card. I would avoid doing anything as an individual. This becomes especially true if you have not graduated yet.
Upvotes: 3 <issue_comment>username_3: Actually, the thing that would be most appreciated - and valued - is a hand written letter on nice stationery giving congratulations and thanking him for his help in your own work.
Short, professional, sincere.
He will save it forever.
Upvotes: 6 <issue_comment>username_4: A short (1-2 paragraph) hand written note on a card or postcard would be an appropriate gift.
I have mentored about a dozen undergraduates and I still have the thank you notes they sent to me on my pin board in my office. The notes are nice reminders about the impact I had on the students.
Upvotes: 8 [selected_answer]<issue_comment>username_5: This is culturally dependent. You should follow local cultural norms.
When I was in Sweden, the PhD candidate would, after their successful PhD defence, organise a party, with typically between 30 and 50 people invited, such as family and friends from the office and outside. If they were supervising a few undergraduate students, they would probably invite them. At such parties, it was entirely common that people would bring gifts, which would all be placed on a table. If such a party exists and you are invited, by all means bring a small gift, similar to what you would bring if invited to a birthday party. I received food, books, souvenirs, and a few joke presents.
However, if such a party does not exist or you are not invited, and such gifts are unusual in your location, then it may be more appropriate to stick with a card such as suggested in other answers.
Upvotes: 3 <issue_comment>username_6: Contact the school and find out the dimensions of the diploma. Then purchase a frame and matting so that he can hang it nicely in his office.
This is totally appropriate and specific to the occasion.
Upvotes: -1 <issue_comment>username_7: One gift that is often really appreciated is a nice pen. Not a cheap one, but a fancy one in a nice box. In some cultures it is customary not to buy one for yourself, but to wait until someone offers it to you. You can get them for 50-200 USD.
<https://upload.wikimedia.org/wikipedia/commons/thumb/7/7b/Fountain_pen_writing_%28literacy%29.jpg/1200px-Fountain_pen_writing_%28literacy%29.jpg>
Upvotes: 2 <issue_comment>username_8: I agree that it's culturally dependent, but in many cultures, you could bring sweets or something else edible for after the thesis defense. I would personally consider **chocolate** from a higher-quality brand. It's not something of great value, but it's not something you'd just buy at the corner kiosk; it is associated with some level of affection, but - unless you buy a heart-shaped box - it won't be construed as romantic. Most people (in my experience) like it, but at the same time it's acceptable not to open and eat it right away if they don't want to.
As for a thank-you letter - that may be appropriate in general, but I don't see how that relates to that mentor's thesis defense in particular. Perhaps a congratulatory card which also has a couple of thank-you lines would go well with the chocolate though.
PS - Chocolate is also kind of a generic gift. It might well be better to get something which would be specifically appreciated by your mentor. But this answer is intended for other readers as well...
Upvotes: 2 <issue_comment>username_9: A nice matted and (optionally) framed photo of the city/town in which the university is located (e.g. from a local photographer) might be a nice gift, especially if it has an iconic skyline or downtown. This is even if the mentor doesn't plan to continue in the same city, because he still spent a number of years there and it had an impact on his life. It's also a nice relatively neutral item that can help decorate a future office.
In addition, +1 to [username_4's answer](https://academia.stackexchange.com/a/130176/36320) about a thank-you/congrats card with a paragraph or few from you.
Upvotes: 1
2019/05/06
<issue_start>username_0: I know a PhD student (X) who is soon to defend, reaching the end of his funding, and has thus been applying to PostDoc positions. He was applying to two positions in two different groups.
The first group (Y) invites him to present in person and suggests that he buy his own tickets, with the expenses to be reimbursed upon arrival; he is very interested in the position and complies, buying the flight at considerable cost (15+ hours of flying over two legs).
After buying the ticket, the second group (Z) requests an online talk and interview; he again is very interested in the position and complies, presenting the talk and doing the interview online. Shortly after, the second group offers him the position. Being very interested in the position and not wanting to appear unsure or ungrateful, he accepts a few hours after the offer is made, and a week before being due to travel to the first group.
Not wanting to be deceptive towards the first group, he tells them about accepting the other position and offers to travel anyway in order to give the talk he has been preparing for their group. The first group tells him not to come and that they will not reimburse any costs.
---
Question: *is it ethical/unethical for the first group to refuse to reimburse costs?*
I'm also interested in anecdotes, similar experiences, etc., to get an idea of how common or uncommon the first group's behaviour in this situation is.
On the one hand, the original reason why the first group offered to reimburse costs is now off the table.
On the other hand, the student has acted perfectly honestly throughout (almost to a fault) but ends up out of pocket having bought tickets at the first group's request.
(Of course someone has to take a loss, but in my mind, it should be the group who takes the hit, not a PhD student soon to run out of funding, and who again was simply complying with the group's instructions to remain in their hiring process ... and is now getting screwed in the process.)
---
(There's a couple of related questions [like this one](https://academia.stackexchange.com/questions/17721/should-i-attend-a-job-interview-after-ive-already-accepted-another-offer), but I don't find a question that addresses the issue of the interviewee being out of pocket.)<issue_comment>username_1: **Yes**, it is reasonable, justified and common for a university not to reimburse travel costs if
* the interview was not attended; **or**
* the interview was attended, the offer was made, and the candidate rejected the offer.
Your situation is quite similar — the candidate withdrew before attending the interview, which also, I think, makes it inappropriate for them to apply for reimbursement.
In your example, to avoid a risk of not being reimbursed, the candidate could:
1. Avoid making travel arrangements in advance, and only buy a ticket when they are completely determined they will travel for the interview; **or**
2. Choose a refundable tariff for the flight; **or**
3. Buy an appropriate insurance to protect against such event; **or**
4. Do not rush accepting an offer from another university (and risk losing it, of course) to attend another interview.
Your question seems to focus only on the ethics of university policies. However, the candidate has also made certain choices, and it is appropriate for them to accept responsibility for the consequences. It is a pain to lose money like this, but hopefully the position at the other university is worth it.
Upvotes: -1 <issue_comment>username_2: (This question is in my opinion hard to read because of the variables X, Y, Z.)
My ansatz to this problem is to consider the industry equivalent. A very big difference between academia and industry is (in my experience) that academics often have to pay out of their own pockets (especially junior academics) and get reimbursed later, while in industry everything is paid via a company credit card. So when it comes to money, there is far more trust involved in academia. Such a system can only work if the reimbursing institutions are trustworthy.
(Other examples: in academia, contracts sometimes arrive very late or not at all, whereas working without a contract is strongly discouraged in industry. Or, here on this site, I read that people who ask whether a certain behaviour of their advisor is correct get asked "Why do you stay with them if you don't trust them?".)
So in industry, the institution would pay for the student. So the loss would totally be on their side.
This would be my approach to this.
(I have never heard of an institution behaving like this, especially since the people who decide whether the student should come and the people who pay are usually different. Especially when this is the case, I find the institution's behaviour very unethical.)
Addendum: In the (maybe similar) situation where the student is promised reimbursement but cannot travel due to sickness, I also think it is ethical for the institution to pay the costs of the trip.
Upvotes: -1 <issue_comment>username_3: **Both parties are at fault for not communicating the terms of the reimbursement clearly.**
* It is reasonable for the university to want to reimburse only interviews that were actually attended, and to not want to bother interviewing applicants who have already declined.
* It is also reasonable for the student to want to get reimbursed when they have personally expended funds in good faith.
It's a bit difficult to even say what the university's policy for such things should be:
* Not paying these expenses encourages applicants to lie, go through with the interview, and waste everyone's time in addition to money.
* Paying these expenses encourages people to get an interview, cancel, and then use the free tickets for a vacation (doubt this happens all that often, but it's possible, and can be a major source of scandal).
Regardless: in the real world, I think it's rather unusual for institutions to reimburse interview expenses for an interview that never happened. Given this reality, applicants are the ones that have the burden of establishing the reimbursement terms before shelling out their own money.
Upvotes: 4 [selected_answer]
|
2019/05/07
| 934
| 3,997
|
<issue_start>username_0: I am presenting a poster at ICBC, which is sponsored by the IEEE Communications Society (ComSoc).
I am planning to use two logos for my poster.
I am considering the logos of the following:
1. The research organization. The presented research is from researchers of this organisation.
2. ICBC, the conference organizer
3. IEEE Communications Society (ComSoc), the conference sponsor
4. Logo of ICBC and IEEE, indicating both organizer and sponsor.
Which two logos are suitable for the poster presentation?
Never use the organizer, sponsor, or host logo unless you work for them. People attending the conference already know what conference they are at. This information is not needed on a poster.
Upvotes: 7 [selected_answer]<issue_comment>username_2: Well, I'm *against* the overusage of logos, especially sponsor logos. There is already a big trend of forcing sponsor logos everywhere, including peer reviewed articles.
So please, do not support logo frenzy. If you insist on a logo, use the one that identifies you as the author, i.e., your department or your university logo.
Upvotes: 4 <issue_comment>username_3: The norm is to include the logo of researcher’s organization for sure. Apart from that, you might also want to include the logo for your funding agency or organization. Any other logos you might want to include can go at the bottom of your posters, such as the logos for your collaborators’ institute, any specific facility you used for data generation or analysis etc.
Upvotes: 2 <issue_comment>username_4: From the top of my head I can only come up with 2 legitimate use cases for logos in posters:
1. **Representation:** Do advertising for your affiliations, i.e. your university/institution and/or department. The reasoning behind that is to make it crystal clear and immediately recognizable where you work.
2. **Acknowledgement:** If your work is sponsored by some 3rd party or your project is part of a larger collaboration (like an ERC project or something of that sort) and they want you to graphically acknowledge them.
Logos are neither design (functional elements and/or considerations for making the material more easily accessible to your audience) nor decoration (elements that make your material look more aesthetically pleasing).
Typically you are not at full liberty to choose which logos to use; institutions have graphic design policies clearly stating the rules you should follow for that sort of thing. Also note that some conferences may limit the number of logos allowed on posters.
PS: my definitions of design and decoration above are quite rudimentary but should hopefully convey the difference between the concepts. I once saw a great presentation on that but can't seem to find it now.
Upvotes: 2 <issue_comment>username_5: An approach I've often used is to use my institution's logo at the top on one side and on the other side either:
* The (only) collaborating institution. The point of this is to reinforce the joint nature of the work. *Or*
* A group logo (for partial visual symmetry).
This seems to work well visually, and satisfies the corporate identity requirements that the university would like to impose without being too onerous. It wouldn't work as well for a large collaboration, unless the collaboration had its own logo to go in the second place.
Acknowledgements could be made in the form of logos, and some funders like it. These would be in an acknowledgements section at the bottom, and fairly small.
Conference logos are a waste of space *at the conference*. If you plan to reuse the poster internally after the conference (many places put them on the wall outside the lab, for example) it may be worth including the conference logo and month/year, but again fairly small and at the bottom.
Upvotes: 2
|
2019/05/07
| 406
| 1,582
|
<issue_start>username_0: I received a referee report from a journal (with single blind peer review policy).
If that matters, the journal uses "Editorial Manager" system.
By clicking on "View Attachments", one can see five pieces of information: "Action", "Uploaded By", "Description", "File Name", and "File Size".
The content of the column "Uploaded By" is "Editor".
The content of the column "Description" is "from prof. X"
Hence the identity of the referee is disclosed.
I am wondering if it is an unintended mistake from the Editor or it is the referee who has written such description.
The referee has done a substantial work to evaluate the manuscript and has proposed many suggestions leading to the improvement of the manuscript.
My dilemma is that, as I know the identity of the referee, should I use his real name in the acknowledgement or just thank an anonymous referee?<issue_comment>username_1: Ask the editor.
Probably it is a mistake, but it is not your fault. I do not expect this to be to your detriment (and they might already have noticed it themself) and you seem to have a dilemma what to do.
Upvotes: 3 <issue_comment>username_2: If the review itself is not signed, it sounds like the unblinding was not deliberate. I would:
* Reply as if the review were anonymous
* Notify the editor in a separate, private message saying there may have been an error in showing the reviewer name
Upvotes: 6 <issue_comment>username_3: I would ignore the inadvertent disclosure of the referee's name. It is unimportant. Do not name the referee in your manuscript.
Upvotes: 5
|
2019/05/07
| 318
| 1,333
|
<issue_start>username_0: **Science and Orthographic/Typographical Errors**
---
Some people appreciate the attention to have their orthographic/typographical errors corrected/pointed out.
Others find it uncool to point out only formal errors, and consider it ideal to comment on the content as well.
Given that, in terms of scientific rigor, how important is the orthographic/typographic rigor? And how does that vary?
As a side note, on a personal level, I find that reading a scientific text with orthographic/typographical errors is quite a turn off.<issue_comment>username_1: If you put divide by A instead of divide by B then there can be serious consequences...
But any non-serious typo is ignored by most of us (a been there, done that myself approach), with a wry smile...
But if a document contains many typos then it does get noticed...
Upvotes: 2 <issue_comment>username_2: Orthography and typographical correctness are much less important than scientific (methodological and theoretical) rigour. Yet all scientific publications should of course also be well-written, and in general there's nothing wrong with casually pointing out a typo, wrong glyph, or similar glitch.
However, if you're writing a review, grade a student paper or discuss a presentation, this better not be your only comment!
Upvotes: 3 [selected_answer]
|
2019/05/07
| 896
| 3,592
|
<issue_start>username_0: I have been nominated for an award to PhD students. I have first been asked to submit a CV and a description of my research project(s). After that, I have been asked by the organisers (directly by email) to submit "a short statement about your research experience and scholarly approach (not more than one page)".
I would like to know what kind of information I am expected to include in that document.
There is no public announcement of the award describing the requirements, and I feel a bit hesitant to ask them directly for more information, since they might consider my ability to write that document on my own (with no more guidance than the description above) to be part of what is being evaluated.
Any suggestion will be appreciated.
(English is not my mother tongue. I believe that's why the terms "scholarly approach" don't say anything specific to me)<issue_comment>username_1: Actually, I don't think "scholarly approach" is obvious even to a native anglophone. Consider the following a guess. Others may supply additional suggestions, I hope.
If you have rationalized how it is that you find (have found) interesting research problems, then you can say something about that. How do I go about deciding what is worth exploring. That is a useful question for any researcher to be able to answer. It also seems related to things about your research experience.
Since you are being considered for an award, I assume that you don't need, anymore, to rely heavily on your advisor(s). So I suspect that you may have given that some thought already.
Upvotes: 2 <issue_comment>username_2: As a native speaker, I am confused by that phrase as well. The term *Scholar Approach* lacks a strong, concreted definition. For example, one author uses it to describe work life balance in an [article](https://www.insidehighered.com/advice/2014/02/21/take-scholarly-approach-work-life-balance-issues-essay) and how to frame one's career in a [second article](https://www.insidehighered.com/advice/2014/02/14/bring-scholarly-approach-your-graduate-training-your-career-essay).
I could see two reasons for the essay prompt using the words *research experience and scholarly approach*. First, if the award is open to non-science fields (e.g., the humanities), sometimes they use terms other than *research* to describe their activities (e.g., Harvard's Graduate English program uses the term "scholar works" on their [webpage](https://gsas.harvard.edu/programs-of-study/all/english)). Second, they may be looking for a broader statement about your approach to your research, learning, and teaching.
Given this prompt, I would describe my research and its impact. For example, my outline might look something like:
* Paragraph 1: Overview of your research
+ 2-3 sentences about why you are passionate about the topic (e.g., I am researching cancer because a family member died from it, or I have always loved nature and am studying Monarch butterflies because I am concerned about their decline)
+ 2-3 sentences about why society should care about your research
* Paragraph 2: Specifics that make your research special
+ My methods are special because...
+ My methods tie into the group giving the award because (write to your audience here).
* Paragraph 3: Taking your research into the world
+ Describe how you use your research and new ideas in teaching, or other outreach areas
+ What are the current or possible applications of your research?
Upvotes: 1 [selected_answer]<issue_comment>username_3: Your "scholarly approach" is your research methods.
Upvotes: 0
|
2019/05/07
| 729
| 2,847
|
<issue_start>username_0: As a young academic recently flown from the nest, I am starting to figure out my own research interests. I now feel like I have too many, though. During a postdoc where I struggled to fit in and produce papers, I started or agreed to take part in something like 8 projects, a couple of them well outside my field, requiring the acquisition of new skills. I've tried principal-component-like analysis to prioritize projects, splitting my week up into chunks of time devoted to particular projects, and various other methods to make meaningful progress. I think, though, I really just need to decommit myself from a couple projects or push them off by six months. What is the best way to do this without building up a bad reputation and alienating future collaborators? Or will most senior folk understand the situation as the mistakes of a young buck and not take it personally?<issue_comment>username_1: Better to withdraw explicitly rather than just disappear.
After you decide which are the projects that you can and must stick with, go to the leader(s) of the others - in person - and tell them you must withdraw. Tell them your "youthful enthusiasm" overcame your good sense and you are in danger of shortchanging everyone's expectations and that you value their work too much to get in the way at this time.
A good time to be a bit honest and humble.
But this sort of thing has to be done face to face. Email isn't going to be a good vehicle. If face to face is actually impossible it is much harder. In such cases you may want to offer, in email, say, to stay connected but to take a lesser role. That is the sort of thing that would likely come up in a personal meeting in any case.
---
Note that you don't have to be a youth to have "youthful enthusiasm". See, for example: [Paul McCartney age 73](https://torontosun.com/2015/10/18/paul-mccartneys-boyish-enthusiasm-shines-in-toronto-show/wcm/da168416-6894-482b-b3e5-5cf35facb998)
Upvotes: 4 [selected_answer]<issue_comment>username_2: When I moved to my first faculty position, my advisor told me *your time is your most valuable resource, be very careful how you commit it*. It has proven to be very true, especially since I had my baby. As a young academic I drowned myself in commitments to a point I almost had a burnout.
I will advise you differently than the rest:
* If you have committed explicitly to deliver something, then you have to deliver that. Your “street credit” is on the line. As a young academic, and still doing a postdoc, the last thing you need is for people to think you bail on your responsibilities.
* If these are side projects, meaning just some group emails or initial discussions, then just “let them go”. If people actually need your input, they will ask you explicitly. If not, then your better not meddling.
Upvotes: 0
|
2019/05/07
| 741
| 2,843
|
<issue_start>username_0: I'm an MPhys student who almost finished his undergraduate. I'm doing a masters in theoretical physics next year, probably at Kings College London. I have been rejected by Imperial and Cambridge(Part III).
Keeping in mind that theoretical physics is one of the most competitive academic fields, is there any chance of being accepted at top universities (UK or US) with such a masters degree? How high should my grades be in order to have a fair chance of being accepted at a top-10 university?
I would really like to do my PhD at a top university so that I can maximise my chances of getting a good postdoc position. More than that, I really love theoretical physics and I'd be VERY happy if I could contribute to world-leading research in fundamental physics.
Edit: Especially for people with similar past experience, I would really appreciate your input.
|
2019/05/07
| 1,184
| 5,174
|
<issue_start>username_0: I submitted my dissertation after several years of working. I will have my defense soon, and I reported what I am going to talk in the defense. However, my primary supervisor now refuses to give further advice or comments because he said he had spent a lot of time editing my dissertation. In addition to that, he also asks me to refrain from interacting with my second supervisor because the latter person also spent a lot of time for my doctoral dissertation.
I am not on good terms with my primary supervisor. He is fed up with me because it was very hard for him to supervise me. With my second supervisor I am on good terms personally, to the degree that he likes to invite me to have some coffee. However, since it seems that he is reluctant to advise any people, he might want to cut ties with me as well.
My question is how I should interact wisely with these people, especially after my graduation. I do not need their advice at the moment because I can prepare for the defense by myself. However, I may feel awkward if they cut academic ties with me. I am not sure exactly why I feel this way; however, I think I should not cut the ties, because most PhD holders around me still interact with their former supervisors.
For your information, I am an international student from Asia in a European country and going to work at a university in my home country after getting a doctorate. My supervisors are leading scholars in our small field in humanities. So, I will have some opportunities in which my papers will be reviewed by them.<issue_comment>username_1: Sounds like their issue is that they don't want to spend more time on your defense. They don't have an issue with talking to you in general. So just prepare your defense without asking for help from the supervisors which don't want to help. After you graduate, feel free to write back as you normally would. Just not about the defense.
Just as a remark, I don't get how your other supervisor wants to help you but your main supervisor doesn't want you to ask his help. Presumably the other guy is a free adult, and can make his own decisions. If he agrees to help you, why would your main supervisor object? It's not taking any of his time, and if your defense is better and makes you more likely to graduate, that's good for him too.
Upvotes: 2 <issue_comment>username_2: My advice is to concentrate on your defense and on finishing your Ph.D. You don't want to mess that up, want to get the process done and move on (you with your credential and them with you out of their hair). Don't spend time now, debating post-graduation interaction policy. Concentrate on the task at hand. The defense.
Once you are out of the nest, you (and they) can decide how to interact. Nothing stops you from stopping to interact with an advisor that you clashed with or to interact with his colleague (if his colleague wants to). But that will just be a future interaction for whatever it's worth.
Right now, you need to take control of the task at hand get the ball over the goal line. Take some ownership.
Upvotes: 0 <issue_comment>username_3: I think it's possible you've been asking for too much help on revisions rather than your supervisors being "difficult" per se.
It doesn't sound like your supervisor fits the standard formula for an uncaring advisor if they've already given you a lot of assistance: it sounds more like they are pushing you to be more independent and have perhaps chosen a strong message when prior gentle messages were not received. Their suggestion for you to not ask your secondary advisor for more help may just be another cue that they are pushing you to be more independent rather than cutting you off completely.
Of course neither of you are necessarily in the wrong here: it may just be that your advisor prefers students be more independent than your preference. If that's the case, but you've still managed to work together up until now and you can produce a suitable thesis for graduation, congratulations: you've overcome something that not all students will overcome.
Unless there is information missing from your question, I don't think you need to worry about losing academic contact. Focus on preparing independently for your defense.
Upvotes: 2 <issue_comment>username_4: I tend to agree that your supervisor is pushing you to finish things up independently. There is nothing wrong with that and can happen, especially when students have been wringing their hands about getting their dissertation "just right". They might be letting you know that you need to finish up and get everything over with. No more edits.
But this is hard to judge without context.
I can tell you, if you had a supervisor who was adversarial, you would definitely know it.
Also, when you finally graduate and your supervisor congratulates you, things will feel very different. The stress of graduating will be gone and you will see clearly again. There is a good chance that you will not feel what you are feeling now when you graduate.
My best advice is just to listen to your supervisor, graduate, and move forward.
Upvotes: 0
|
2019/05/07
| 215
| 843
|
<issue_start>username_0: If someone writes a letter to the editor and I write a response, how should I list the response this in my CV?
It *is* peer reviewed, but it is only 300 words, so it doesn't seem right to put this under 'journal articles' (though the title starts with "Reply To: ... ", which makes it clear).
Optionally you can have an "Other Work" section where you list less-important stuff. Throw it under there.
Upvotes: 1 <issue_comment>username_2: I put these on my CV as peer-reviewed publications and label them as 'commentary'. I also label other works 'review', 'empirical', etc. as appropriate. The title can also be used to signal the type of work, like "[Title]: Commentary on Zeller et al., 2018".
Upvotes: 2
|
2019/05/08
| 1,341
| 5,641
|
<issue_start>username_0: I'm wondering if it is possible to find out if someone is graduated from a University. Like, I could write on my CV that I graduated from Harvard, but then how can you prove it? Suppose I'm an excellent hacker, I could still fool people by doing clever tricks that make a counterfeit of a diploma or something. So I'm wondering if there is an official protected and verified database that records the information that a person is a graduate from a specific university?
Suppose that I'm an employer and I want to verify my candidate is a graduate of X university, how do I do that? What if the university is in Canada?<issue_comment>username_1: Harvard has these records, of course - they have the student's transcript, showing all courses taken, grades, degrees earned, etc. But they are confidential under US law and are released only with the student's permission.
The student can request that an official copy of the transcript be sent to anyone they choose. Traditionally, this is done by mail, on tamper-resistant paper with an embossed seal. Electronic solutions based on digital signatures are starting to appear but are not yet the norm.
So if you have a degree from Harvard, and you wish to prove it to some person X, you may ask Harvard to send a copy of your transcript to X.
If you are an employer, and you wish to verify that your employee has a degree from Harvard, you ask your employee to ask Harvard to send you a copy of the employee's transcript. You can make it a condition of their employment that they do so, so that if they refuse to request the transcript, you fire them.
Upvotes: 2 <issue_comment>username_2: Information on awarded degrees is considered publicly releasable [directory information](https://www2.ed.gov/policy/gen/guid/fpco/ferpa/mndirectoryinfo.html) in the US. Dates of attendance, for those who did not complete a degree, as well as honors (which imply a certain class rank or GPA) can be released as well.
Most schools sell this information to an aggregator, the National Student Clearinghouse. For a fee you can [verify a degree](https://secure.studentclearinghouse.org/vs/Index) by providing the student's name, date of birth, and school.
Harvard, specifically their school of Arts and Sciences, [directs third parties to this database](https://registrar.fas.harvard.edu/certification-transcripts-student-records/certification-enrollment-and-degree-verifications).
Laws in Canada differ and require signed permission from the student. A similar aggregator, [AuraData](https://www.auradata.com/) exists.
Otherwise, the student can have the school's registrar send a document, on security paper and sealed, listing awarded degrees to interested parties. This is called an official *verification* and is essentially a transcript that does not contain course or grade information.
Finally, if the person has a graduate degree that involved a thesis, one can check the school's online library catalog to see if one is listed.
Upvotes: 3 <issue_comment>username_3: People [lie about degrees](https://www.marketwatch.com/story/5-big-shots-who-lied-on-their-resumes-2014-09-18) all the time, and often only get caught years after, by coincidence. Once upon a time it was common to ask for the actual diploma, which while not unforgeable, is beyond the average liar's ability and/or motivation. These days, it's easier to just call the university and ask them to verify. But most companies don't even do that.
However, in practice it's not such a big loophole:
* Many employers already ask for references. That's to get some second opinion on what sort of person you are, but it also demonstrates that you probably did go to that school.
* Degrees themselves don't carry much weight. You will be asked about projects and internships. Those are verified in obvious ways and indirectly verify your education.
* Companies will often employ some private agency to do background checks. This will easily reveal whether you actually attended a school. They can check both government databases as well as ask the university itself. They can also check social media which can often be revealing.
* It's very risky to lie about degrees. You'd be living your life in fear of being discovered, and even if you go for decades before being caught, it could ruin your life and undo all the undeserved gains you accumulated up to then.
* If a degree is required for a position, chances are you need the skills taught by that degree to function at work. Even if you lie and get hired, you will soon be in dire straits when you cannot keep up with the work. Maybe you are just a genius who can keep up - but if you are such a genius, how come you didn't just get the degree in the first place? Of course, sometimes exceptional circumstances can put people in weird situations, but typically "excellent hackers" will put two and two together and just enroll in a college so that they don't have to lie, don't have to worry about being caught, and also get all the various benefits provided by going to school.
Certain professions have professional associations, such as medical doctors, psychologists, teachers, accountants, lawyers and engineers. These tend to have a sophisticated certification board that goes to lengths to verify its licensees. For these professions, the degree itself is kind of irrelevant, because employers will instead ask for your license, which can be readily verified through the professional organization. The organization in turn will not be easily fooled, among other things they often have good connections to the universities themselves.
Upvotes: 2
|
2019/05/08
| 666
| 2,778
|
<issue_start>username_0: I am part of a collaboration (i.e., researchers from different universities, not formally tied to a grant or deadline). We all met for a few days to develop an idea, and it seems quite promising. However, I did not feel like I contributed much, and since the project is a bit outside my expertise, I am not sure how much more I can even contribute. (Of course, I can learn from them, but I am not convinced that that is a good enough reason to be apart of the project.)
Should I quit the collaboration, so as not to "get in the way" of the others? Or is it worth staying on, even if my main contributions are high-level (e.g., organizing shared files, adding code descriptions, sending "check-in" e-mails, etc.)?
EDIT: Some people have linked me to [my own question](https://academia.stackexchange.com/questions/130225/how-to-tactfully-decommit-from-projects), which I think is different in nature; here I am asking about how to handle the dynamics within a particular collaboration. Perhaps the underlying question is really, are high-level contributions worthwhile? Also, I have spoken with the collaborators, and they are fine with me on the project, but I wonder if they are just being nice or political. So maybe there's imposter syndrome involved as well.<issue_comment>username_1: >
> I am part of a collaboration [that] met for a few days to develop an idea, and it seems quite promising. However, I did not feel like I contributed much, and since the project is a bit outside my expertise, I am not sure how much more I can even contribute...Should I quit the collaboration, so as not to "get in the way" of the others? Or is it worth staying on, even if my main contributions are high-level (e.g., organizing shared files, adding code descriptions, sending "check-in" e-mails, etc.)?
>
>
>
You could share your concerns with your collaborators and ask them whether they want you to continue collaborating. You might discover you've already contributed more than you've realised. Regardless, you'll be making it clear what you can offer moving forwards.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Almost all academic journals explicitly demand in their submission guidelines that all listed authors should have made a substantial intellectual contribution to the work. Organising shared files, commenting code, and sending check-in emails cannot qualify as a substantial intellectual contribution, in my opinion, and do not warrant co-authorship by most academic standards. Actual ideas to solve the problems at hand, or elaborations on your collaborators' ideas, would.
Whether or not you are likely to contribute such ideas in the future, and therefore if you should stay in the project, is for you to decide.
Upvotes: 1
|
2019/05/08
| 768
| 3,130
|
<issue_start>username_0: I am familiar with writing the citation part (in different styles). However, I have typically just used superscripts like Wikipedia to reference the citation, like this[1] sort of thing. I would like to get more advanced with it to make it easier for the reader to glean more information from the citation reference.
So I'm thinking something like this (Glen89) where author is Glen and year of publication is 89. But then what if there are multiple publications in 89 for Glen, then it would be perhaps (Glen89a) and (Glen89b).
But that isn't too descriptive. I have seen some cases where they list the publisher as well, like (Glen: MacMillan 89), but that is too long and specific to books.
In general I'm wondering about the rules for how to do this, and where I can get some inspiration for how to write these, if not a straight answer here on what a good style is.<issue_comment>username_1: Use an existing system, e.g., LaTeX/BibTeX (the `alpha` style produces labels such as *[Gle89a]* and *[Gle89b]*) or Word/EndNote. See <https://www.overleaf.com/learn/latex/Bibtex_bibliography_styles> for examples of different BibTeX styles.
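For instance, here is a minimal sketch of how this looks in LaTeX/BibTeX (the filenames, entry keys, and bibliographic details below are invented for illustration):

```latex
% refs.bib -- two hypothetical entries by the same author in the same year
@book{glen89a,
  author    = {John Glen},
  title     = {A Hypothetical Monograph},
  publisher = {MacMillan},
  year      = {1989},
}
@book{glen89b,
  author    = {John Glen},
  title     = {Another Hypothetical Monograph},
  publisher = {MacMillan},
  year      = {1989},
}

% main.tex -- cite both entries and let the style generate the labels
\documentclass{article}
\begin{document}
As shown in two earlier works~\cite{glen89a,glen89b}, the idea is not new.
\bibliographystyle{alpha} % alphabetic labels: [Gle89a], [Gle89b]
\bibliography{refs}       % try plain/abbrv/unsrt for numbered labels
\end{document}
```

Compiling with `pdflatex main`, then `bibtex main`, then `pdflatex main` twice resolves the labels; the built-in `alpha` style appends the `a`/`b` suffixes automatically to disambiguate same-author, same-year entries, so you never have to maintain them by hand.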
Upvotes: 0 <issue_comment>username_2: [This webpage](http://www.cs.stir.ac.uk/~kjt/software/latex/showbst.html) has examples of different BibTeX styles; perhaps one of these would be of use.
Upvotes: 0 <issue_comment>username_3: Don't reinvent the wheel: There are existing systems that collect bibliographic data and then produce the appropriate citation in the text. BibTeX and EndNote are the most widely used ones. *How* the citation appears in the text is controlled by "style files" that often differ between journals, but every journal has its own preferred style that you should use.
Upvotes: 3 [selected_answer]<issue_comment>username_4: The other answers mention software, but citation styles are not ad-hoc systems imposed by software; they are formally defined conventions, usually part of a general style guide. Common styles include [AMA](http://library.nymc.edu/informatics/amastyle.cfm) (numbered references) and [APA](https://en.wikipedia.org/wiki/APA_style) (author-date references). There are many other styles with variations in details like the format of the bibliography or the sorting of references, but generally I think the reasonable options are either numbered references (such as AMA) if you'd rather have brevity, or author-date if you'd rather have more readability (author-date labels are easier to remember than numbers). Many programs will implement such styles for you.
Usually whatever you are writing for will already require a specific citation style, so I would figure this out first. Even if you are writing for your personal use only, it's a good idea to pick a style or journal and stick to it anyway. That way, recycling your notes for publication will be easier, and you will also get used to an important style. Journals will usually describe their required style in the submission guidelines. Courses will state it in the syllabus or assignment text; if not, feel free to ask the instructor.
Upvotes: 1
|
2019/05/08
| 996
| 4,126
|
<issue_start>username_0: So I've been doing my bachelor's thesis. Alongside my bachelor's thesis, I've been taking a course that trains students in working in teams for projects. Let's call this 'course X'. Coincidentally, the project that I'm working on in course X happens to be very similar to my thesis topic. So, in the Future Work section of my thesis, I wrote about the project that we're working on in course X. Except I didn't mention that I'm working on the project. Rather, I mentioned that this project is an example of something that can be worked on (since it has similarities with the concepts implemented in the thesis).
Now, I recently passed my thesis. However, my examiner is oblivious to the fact that the project that I mentioned in the Future Work section is something that I'm actually working on in a different course. And this is stressing me out. Soon, we will be writing a sort of report on our project. I'm worried that my examiner will find out about this project someday and accuse me of lying in the Future Work section, since a project that I'm working on doesn't really count as 'Future Work'.....but does it? Should I be worried about this? Because I'm really worried about this. Should I tell my examiner or supervisor that the project that I mentioned in the Future Work section is something that I'm actually working on in course X?
And for the record, that's not the only thing I mentioned in the Future Work section. I mentioned other potential areas of research in that section, but these aren't bothering me as much. Am I overthinking this?<issue_comment>username_1: >
> Am I overthinking this?
>
>
>
Yes.
>
> my examiner is oblivious to the fact that the project that I mentioned in the Future Work section is something that I'm actually working on in a different course.
>
>
>
Your examiner probably isn't oblivious: Many researchers discuss other works in their future work section, without mentioning that they are already working on them.
>
> I'm worried that my examiner will...accuse me of lying...since a project that I'm working on doesn't really count as 'Future Work'.....but does it?
>
>
>
A project that you are working on is future work, in the sense that it hasn't appeared publicly yet.
It can be useful to discuss future work without explicitly mentioning that work is already in progress, since such work may never be published, due to unforeseen circumstances, for instance.
>
> we will be writing a sort of report on our project.
>
>
>
You can cite your bachelor's thesis in that report.
Upvotes: 6 <issue_comment>username_2: You are really overthinking this. Take a deep breath, and don't worry about it. Future work refers to the directions new research can take, given what you have presented in the paper. This includes projects that you have already started as well as things no one has done before. If there were a finished project, then it should be mentioned as part of the literature review, but it doesn't sound like you have a manuscript ready from your other project.
If you are really concerned, you can speak to your supervisor and see if there is an option to add some errata to the thesis. At my institution, small errors and updates are corrected/added to by a sheet of paper (or three) titled Errata which contains the corrected or new information from between the submission and the actual defense.
Upvotes: 4 <issue_comment>username_3: The usual understanding of *future work* is "possible extensions of the presented work". So what you are doing is in no way problematic, one could even argue that it is actually a good thing: you actually are investigating the further possibilities.
Upvotes: 5 <issue_comment>username_4: You could even argue that this is the norm rather than the exception. Later on, you will finish your work on a paper, submit it to a journal or conference, and wait for the reviews and the editor's decision (which often takes some months or even a year).
So when the reviewers read the paper, it is rather likely that you are already working on an extension of your work.
Upvotes: 3
|
2019/05/08
| 730
| 3,214
|
<issue_start>username_0: I am an undergraduate who is expecting a mediocre grade in my optimization class this semester (roughly B-), which is in significant contrast to my near straight A transcript. Unfortunately, I want to do a PhD in machine learning and so I expect this grade to be viewed in a negative light by graduate admissions committees due to the relevance of the material and the aforementioned contrast. But I will take an advanced graduate level machine learning course this coming semester which has the optimization course as a prerequisite. Will a good performance in this graduate level class negate my mediocre grade in my optimization class? In general, can (and will) good grades in more advanced classes negate bad grades in prerequisite classes in the context of graduate admissions? To what extent? I understand that each admissions committee will view this situation differently. I am looking for a general opinion.<issue_comment>username_1: Actually, grades don't negate other grades. Your record is what it is. But people are reasonable about evaluating your history in most cases. People evaluate your application based on whether it shows indicators that you are very likely to be a success, not whether or not you are perfect in every way. Very few people can claim perfection.
A single bad grade is, in any case, very unlikely to have much if any negative effect. In any grad school application you have to make your own case that you will be successful. There are many ways to do that. GPA is only one and individual course grades matter even less, assuming that your record indicates positive outcomes.
Make sure you get good letters of recommendation, of course.
Upvotes: 2 <issue_comment>username_2: >
> In general, can (and will) good grades in more advanced classes negate bad grades in prerequisite classes in the context of graduate admissions?
>
>
>
Only to a very limited extent. Getting a C (or F) in calculus during Freshman year is likely to be completely ignored if you got straight As after that. Other than in such extreme examples, no, I would not expect to be able to negate past grades. But of course, each grade is only one data point, and additional data points can significantly improve your overall record.
>
> roughly B-, which is in significant contrast to my near straight A transcript.
>
>
>
So, this is an outlier. Like in any other sort of data analysis, a single outlier in an otherwise homogeneous record is unlikely to have much/any effect. Sure, getting a good grade in the more advanced course could make it even more clear that this outlier was not related to the subject. But it seems like you are already on track to get put in the highest "tier" with respect to GPA, so you may want to consider investing time on the other facets of your application.
Upvotes: 2 <issue_comment>username_3: Your chances can still be good. The key thing is to accept that this happened and to work on optimizing your other skills and accomplishments, rather than trying to fix or revisit previous events. PhD admittance is based on many characteristics, including overcoming adversity, taking initiative, and learning from mistakes.
Upvotes: 0
|
2019/05/08
| 2,811
| 11,988
|
<issue_start>username_0: I am a professor at a US university, and I'll be away on a sabbatical for all of the coming academic year. I would like to sublet my house while I am away, and I think that one or two of the graduate students in my department might be interested. I'd prefer to do that instead of renting to a complete stranger. However, since I would effectively be their landlord and they would be paying me rent, I am wondering if this raises ethical or conflict-of-interest concerns, since those students may be in my courses in the future.
(I have checked my university's regulations and they do not address this situation. I would certainly check with my superiors before going ahead; this question is just to find out whether this seems okay in terms of general professional ethics, or presents the appearance of a conflict.)
Some possibly relevant notes:
* My department's graduate program is not in my research area, so there would not be any chance of me becoming the dissertation advisor of any of these students. However, I could serve on their dissertation committee.
* The sublet would end before I would be teaching classes again, so I wouldn't simultaneously be a student's landlord and their instructor. It is conceivable, though, if they fell behind on the rent, that they might still owe me money at that time.<issue_comment>username_1: The rule of thumb for conflicts of interest is: "If you think there might be a conflict of interest, or wonder whether there might be, then there is." That's because the *perception* of a conflict of interest is just as bad as the conflict of interest itself.
So yes, whether you feel conflicted or not, other people might believe that you could be, and that's just as bad. A better choice would likely be to rent your house to grad students in other departments.
Upvotes: 4 <issue_comment>username_2: If there is no supervisory relationship and you won't be teaching them while you are gone, I see no issues with this other than the possibility for misunderstandings over money.
One way to avoid some of the problems is to work through an agent as long as that doesn't restrict you to offering it to the public first. It also helps if the student has to leave mid term and you need to find another tenant quickly.
Another issue you should be sure to deal with is a damage deposit. Again, working through an agent would help avoid conflicts. A rental contract is a good idea as it can lay out expectations explicitly.
Upvotes: 3 <issue_comment>username_3: At the time they sign the lease there is no conflict of interest. Further, there is no conflict of interest anticipated throughout the duration of the lease. There is of course the possibility that an unforeseen conflict arises, but this is always a possibility. You are putting yourself at an elevated risk of a conflict of interest, so it is probably worth mentioning it to you department chair. If an unforeseen conflict does arise, just let your department chair know and do what ever is needed at that point.
Upvotes: 6 <issue_comment>username_4: If you are not actually working with the student, there shouldn't be much of a conflict. There's a slight caveat in that if the renting student annoys you, it could potentially damage the student's reputation among faculty. But then non-professor landlords might happen to know some faculty too, and most universities expect students to conduct themselves well with people unaffiliated with them.
I would have noted that it can be difficult to predict that you won't interact with the student. Courses and committee memberships can sometimes be surprising. But you say that you will be on sabbatical the whole time, so sounds like there's not much of a risk in your case.
Still, there might be an alternative that largely takes care of all such potential concerns. Simply ask another, wholly unrelated department if there are any interested students. The tenants you get will still be trustworthy as students of your university, but nobody could reasonably claim that there is a conflict of interest if they are not even in your department.
>
> I have checked my university's regulations and they do not address this situation.
>
>
>
Do you mean that you asked them, or that you read their policy documents and found no clause covering this? If latter, it might be worth trying to ask the HR department if your university has one (in addition to asking your supervisor like you already intend, which should also be fine).
Upvotes: 2 <issue_comment>username_5: *Advertise at other departments.*
If you are a professor in say physics, then advertise at the departments for social studies, law and what not.
If a student-tenant studies a subject completely different than your subject, then a potential conflict of interest is very unlikely.
Upvotes: 0 <issue_comment>username_6: I agree with @strongbad that there is no conflict of interest right now; it is all about the possibility of one arising in the future - which may not happen at all.
If the students later take a course with you, I'd say attending your lectures is not the point of trouble. Possible conflicts of interest arise only in the context of the course exam/grading, and in various ways: after a negative rental experience you may be biased against the students, and/or they may be afraid that you are. *Other students* may be afraid (or jealous) that you favor your former tenants (more so after a positive rental experience, but in practice probably regardless of how, or whether, the rental works out).
I'd say the overall level of conflict of interest would be similar to, e.g., a colleague's child or a relative of yours attending one of your courses (a scenario for which established guidance may already exist).
Depending on how your courses are graded, there may be comparatively easy ways to resolve any conflict of interest in case it arises:
* In case of oral exam, maybe a colleague could examine them.
+ When I was a student, for the important oral exams a number of professors could and would do them (say, physical chemistry -> any of the professors for physical chemistry).
+ I once had a student in a minor exam (final oral of a labwork course) whom I failed twice. After the 2nd time, I asked a colleague to do the exam to make sure it was not just me being unreasonable or biased.
+ Who's your backup if you fall ill during exam time?
+ There is a range of possibilities in between you examining (with a possible perceived conflict of interest) and someone else doing the exam entirely: for example, having someone take detailed minutes of questions and answers (that was in any case required for all important exams).
+ Oral exams were semi-public: other students could listen to the exam unless the student to be examined asked to be examined in private (results were always given privately). Under such rules, you could encourage any student that may perceive a conflict of interest against them to bring a fellow student as a witness to make sure there's fair play.
* In case of written exams (or homework), over here the grading usually isn't done just by the professor. Instead, many people of the professor's group are involved. If that's the case, you could either not participate in the grading or grade yourself only questions that don't give any leeway. As always, detailed grading schemes help.
* (computer-graded exams are trivial in this respect)
In any case, I'd discuss this with the dean and also state this to the students (in particular, "as a consequence of this lease, you'll have take your final oral exam with a colleague, but not with me: that could be perceived as a conflict of interest").
I tend to trust people who worry about possible conflicts of interest beforehand and take measures to avoid them much further. IMHO it's the ones who don't worry and aren't aware of whom one needs to be wary...
---
All that being said,
* there may be other people/students whom you know equally well through other channels like sports club, religious group/church, ...
* if you do a sabbatical somewhere else, there may be some other professor coming for a sabbatical to your university. Whom you may not know, but in case you consider the population of professors on sabbatical as sufficiently trustworthy in general that may be an option. Plus, that'd be a population that suits very well in terms of time frame of the lease.
Upvotes: 1 <issue_comment>username_7: There's no conflict -- now. That doesn't make it a great idea. Let's say you go ahead and do this, and the rental goes well. Now, let's say your department needs someone to serve on the examination committee of the student. There would be an argument to be made that you have a conflict with that student, having received thousands in income by way of that student. The conflict might be an inconvenience for your department, even if you do the cautious thing and recuse from the examining committee, as they have to dig up another faculty member who might not be as appropriate.
Again, nothing wrong, but you might not be so happy with the decision to do this in the future. Personally, I would post the sublet opportunity, and see if I can attract somebody from a different department. If I got into a bind, and couldn't find someone, I might look into the department.
Upvotes: 2 <issue_comment>username_8: Bad idea. Don't fish off the company pier. To start with, grad students are not ideal rental customers. Then you complicate any possible future confrontations with being work colleagues.
Just go through a rental agent and have them handle it (maintenance too, you don't want the phone calls when the toilet breaks). You may make a little less money but with much lower risk and hassle.
Upvotes: -1 <issue_comment>username_9: You want to sublet your house; that is, you decided to become a supplier irrespective of whether graduate students were interested or not.
The existence of interested graduate students presents an opportunity for you to mitigate the transaction risk. This may not have a direct monetary value, but it is a financial gain nevertheless, in expected-value terms: say, a lower risk of default or of destruction of assets.
In other words, you are thinking of entering in a supplier-customer relation with a student of your department, because it increases your financial gain.
Once you do that, it does not matter whether the professor-student relationship coincides in time with the supplier-customer relationship.
The past financial gain will always be a potential corrupting factor in the ethics of a future professor-student relationship, or even for a colleague of a professor who is in a professor-student relationship with the student who used to be your tenant.
And actual conflict of interest is a matter for the courts, while *potential* conflict of interest is what interests us here.
Upvotes: 1 <issue_comment>username_10: >
> I am wondering if this raises ethical or conflict-of-interest concerns, since those students may be in my courses in the future.
>
>
>
I believe you're asking the question wrong. Try:
>
> I am wondering if this raises ethical or conflict-of-interest concerns, **which would be difficult to address** since those students may be in my courses in the future.
>
>
>
And then the answer is no. As others have indicated, no foreseeable conflicts during the rental period; and later on - it will not be much different from when you teach a class and one of the students is, say, the child of one of your neighbors, or a friend of a family member, etc. These things happen and are dealt with reasonably. For example, perhaps you might have some teaching assistant handle their grading; or you would find someone impartial to double-check your evaluation of them etc. It's not a big deal IMHO.
Go ahead and sublet your house to them. And - don't overcharge the starving graduate students! Their salaries ("stipends"/whatever) are probably super-low.
Upvotes: 1
|
2019/05/08
| 318
| 1,376
|
<issue_start>username_0: Are there any requirements to publish an article in a journal without a degree in the field? If so, what are they?
I was curious if people in general can publish in a journal when they have not obtained a degree in the field.<issue_comment>username_1: Yes, you can publish provided that the editor and the reviewers accept your paper, perhaps after revision. There are no "credentialing" requirements to publish in a field.
Some people are just self-taught and rise to a high level. It is true, however, that the reviewers may look at your lack of a degree and decide to be extra vigilant. But they should be vigilant with new degree candidates as well. Actually, they should just be vigilant, of course.
There are some fields, however, that in some places you need to be careful about. But that is more about how you present yourself than what you write. For example, in some places it is illegal to call yourself an "engineer" without a degree and, perhaps, a license.
Upvotes: 2 <issue_comment>username_2: No editor will care (or for that matter could know) about your degree. What matters is the contents of the submission, and if it meets the standard of the journal. There are plenty of chemists or electrical engineers who publish in physics journal, plenty of physicists who publish in chemistry or engineering or math journals.
Upvotes: 3
|
2019/05/09
| 1,514
| 6,459
|
<issue_start>username_0: I am a newcomer to conferences and have tried going to a few to get ready for graduate school. I attended an industry conference (the industry is known for its proportion of graduate-educated workers, and collaboration with academia) sponsored by a company I've been doing contract work for. As they paid for my registration, my badge had their name.
Long story short, there was a panel session about priorities in manufacturing in this industry. Since I had heard about a material that has attracted interest in this community's R&D, I asked the panel what their thoughts were about it. At the time, I felt it was a relevant question, since the panelists were industry executives who I assumed had come across mention of the material. However, only the moderator answered my question, and the panelists just gave me a highly judgmental, silent glare. I have felt really bad about it since. It doesn't help that I stated my affiliation, and that the room contained nearly all of the conference attendees.
I looked online and saw people disapprove of questions that make it seem like the person asking is smart. I wasn't trying to do that; I was just wondering if the panelists had any thoughts about that research topic. I realize that discussion of that material was probably a highly academic, trivial topic.
Does this happen in conferences? Or did I just make myself and the company sponsoring my attendance look really bad?
Sorry if this doesn't belong here, I can move it if need be.<issue_comment>username_1: I wouldn't worry about it. People ask all sorts of questions at conferences and no one in their right mind would conclude anything important from a single question you asked. Usually conferences are a blur as you try to take in too much information in too little time. There's just no time to waste on pettiness like remembering someone's embarrassing question - you're too busy remembering interesting research that actually matters to you - not to mention advertising your own work and generally networking. Normal people don't go to conferences to find someone to laugh at. Nobody will remember your "embarrassing" question in a few days.
Sometimes speakers will not like a question. Maybe the question sucks or is dumb, maybe they just have some personal hang up. It's silly to expect random conference attendees to know every speaker's personality and pet peeves so the latter case is unlikely to reflect badly on you. If your question sucks - oh well. Try to do better next time. If you feel it sucking as you're asking, make sure you at least keep it brief to avoid boring people.
Generally I try to phrase my questions in a very conservative, mild way so as to avoid any embarrassment from misunderstanding. Even if I feel I know exactly what I'm talking about and it is tempting to be very direct and pointed, I still ask as if I'm unsure of myself. I also qualify with something like "Sorry if I misunderstood something, I may have missed a few points in your talk" to give the person an out in case they don't feel like answering. If I wanted to publicly embarrass a speaker with my question, this is surely not a good strategy. But then I haven't felt the need to do that since... Oh, freshman year of college. As far as actually learning, this method of asking questions has served me well: People who have something interesting to say will ignore your qualifiers and address your point even if you did not demand them to. And if they have no interesting answer to give, you don't gain much from putting them on the spot.
Upvotes: 4 [selected_answer]<issue_comment>username_2: If the question is "do most conferences have a few bad questions", the answer is certainly yes. People are likely to forget bad questions; it is more difficult to recover from a bad talk. I've been in quite a few conferences where someone (often a student without adequate guidance) would give a bad talk and everyone would be talking about it for days. I've also been to conferences where most of the talks were thoroughly forgettable and you would have to light the podium on fire to attract negative attention. I don't ever remember hearing people complain so vehemently about a bad question, only a bad talk.
The main "bad question" is one that is too basic for the audience ("what do you mean by *calculus*"?) or reveal a basic lack of understanding ("could you address heat dissipation by using logarithms?"). Questions that are too technical are rarely a problem unless it's way off-topic or takes up too much time.
Upvotes: 3 <issue_comment>username_3: I have probably attended close to 50 conferences or conference-like workshops (mostly Computer Science, but lately also a few in disciplines like automotive or education).
>
> Does this happen in conferences?
>
>
>
Not really, at least not in the conferences that I generally attend. People do occasionally ask questions that indicate a certain unfamiliarity with the subject matter, but I have never seen the panelists give the asker a "highly judgmental, silent glare" or refusing to answer the question altogether. Under the assumption that your question was not monumentally stupid or offensive in some other way (and I have a hard time imagining this in your concrete scenario), I think the Occam's Razor explanation is that you are misreading the situation.
It's well possible that your suggestion was rather impractical, and hence the panel did not want to dwell on it for long (especially if time was already running out), but I cannot imagine that it was really nearly as bad as you make it out to be.
>
> Or did I just make me and the company sponsoring my attendance really bad?
>
>
>
I would be surprised if the largest part of the conference remembered your question, let alone your affiliation, even in the evening of the same day. Conferences are long, many people ask many questions, and most of them are not particularly noteworthy. Again, unless your question was outrageous for some reason, there is nothing to worry about.
>
> I looked online and saw people disapprove of questions that make it seem like the person asking is smart.
>
>
>
What people mean by that are "questions that aren't really questions": the ones where you are not looking for an answer, but are just trying to educate the speaker/audience. An honest question, even one that was maybe not directly relevant to the panel, does not fall into this category.
Upvotes: 2
|
2019/05/09
| 1,942
| 8,460
|
<issue_start>username_0: I recently presented a paper at a top-tier conference in a computer engineering field. I did that work as a research assistant under a professor, but now I am working in industry. Two different researchers, let's call them John and Sam, wish to work on an extension of this work. But both need help from me, since they feel I will be able to solve their problem quickly based on my experience. John is a PhD student with whom I had worked and who was the second author of the paper. Sam is a PhD student at another university whom I met at the conference. Since both have asked for help and both are working on an identical extension of the work, whom should I help out?
Background:
I originally planned to work on my paper alone along with my advisor. After I was done with about 80% of the work, I met John (who is a PhD student under my advisor) and decided to collaborate with him because he too was working on a similar problem (at that time he was working on the extension itself). John didn't contribute directly to the work (I was the sole author of all code and did all the analysis with help from my advisor), but he helped me with conference selection and the writing of the paper. As John was sponsored by a company, having his name as an author meant that all the ownership of the work now lay with the company, at least that is what I was told by John and the advisor. This meant they had all the rights to the code (though I never signed away my rights explicitly). My advisor is particular about legal issues and hence doesn't want me to make the code open-source. But since the company was not interested in using the code and I didn't want my work to go to waste, I have made the code open-source, unknown to my advisor or John.
Question: I wish to maximize the use of my work (since it is my first research work). So should I continue to help both John and Sam who are both working on the same topic? My concern is that one of them will have wasted their time if the other is successful in publishing a paper first.
I am more confident about Sam's skills (based on his past papers; John doesn't have much experience in this field), but John has indicated to me from the start that he was more interested in the extension than in the work that I did in the paper.
My other option is to reveal to both of them about each other so that they can possibly collaborate. But since I have told Sam that my code is open source, Sam can possibly tell John this information which might irritate my former advisor. I would like to avoid this because I might require future favors such as a letter of recommendation should I choose to pursue PhD.
Edit: I have open-sourced the code via GitHub (changed the repository from private to public). I still haven't added any license, so legally, by default, it is not open-source in the sense of being free to use, but anyone can browse through it, and I still retain all rights. Basically, I used the wrong terminology: the code is public, not open-source.
I haven't talked with any representative from the company; only my advisor and John have been in contact with them. From what John tells me, the company is no longer interested in the project, and I think John's funding is also stopping because of this.
Based on the answers, I am going to ask Sam and John to work together (or at least tell both of them about each other) and ask Sam not to reveal the public code. I am still not convinced about making the code private again because, this being a computer engineering (applied) field, it will be hard for anyone to replicate the work from the paper alone (the evidence being that both John and Sam need my help).<issue_comment>username_1: This open-source issue is really knotty. My first concern is your legal footing -- make sure you do not have any legal liability for making that code open source. You may need to consult with a lawyer.
Second, John is an author of your paper. So, this is joint work, regardless of who did what. Now that it's published, anyone can do anything with the published work -- but anything unpublished (e.g., code) could be considered IP (see previous paragraph). Even beyond the legal issues, this is joint work, so leaving John to work on his paper while you help Sam scoop him is rather ethically questionable. Even if John and Sam were both total strangers, it would be reasonable to either put them in contact with each other, or tell one that you've decided to work with the other on the same problem.
Upvotes: 3 <issue_comment>username_2: **Work with both** John and Sam **on the same paper**. You all have something to offer, so pool your efforts and work together.
Upvotes: 7 <issue_comment>username_3: You should all do a collaborative study. John and Sam should have shared first authorship or Sam first and John second, with you providing guidance and consultation on the entire project.
If you are concerned about the open-source status of the code, discuss this issue with both John and Sam. Do this before going to your legal advisors.
>
> As John was sponsored by a company, having his name as an author meant
> that now all the ownership of the work laid with the company, at least
> that was what I was told by John and the advisor.
>
>
>
I'm sorry, but this seems very shady to me, and I think John is playing you for a fool or John himself is a fool. If you are the author of the code, it is your code. If the code is present in a private repository (is it?) in the cloud, that establishes that you are the creator of and contributor to the codebase. The author-contribution part of the paper establishes the contributions of both you and John.
Unless you sign some sort of agreement relinquishing your rights to the code, it does not become IP belonging to a third-party. Furthermore, you may have already signed away whatever rights you had to the code when you became a student at your university. Go back and check all the papers they made you sign back then. In such a case, the code belongs to the university.
Also, it is worthwhile to check the licenses of the dependencies you may have imported and also the licenses of the language that you used to write the code. In many cases, the licenses will stipulate (check Apache) that usage of this software means that you will have to preserve and maintain the license in your codebase.
Do all this before you speak with a legal advisor.
Upvotes: 2 <issue_comment>username_4: The "open source" issue would probably have been easily solved, except you have now made the situation worse by going behind people's backs.
I'm coming from the "industry" side of this sort of situation, and from long experience it's unlikely the company actually wants *the code*. They want the ideas expressed in it, to implement their own way.
John has apparently got the message that "IP is important", but he isn't the person in the company making the decisions, or signing off the sponsorship money! It would have been much better to deal with whoever *is* managing the sponsorship, and get a proper agreement in writing if necessary.
It's very unlikely the company would want to prevent the university doing further research, especially since it appears they are getting this code "for free" and weren't aware of it when they set up John's original sponsorship. In fact they might even sponsor more research.
But from the industry side of the table, the one thing that really p\*sses me off is when people start trying to play silly games with IP - and "putting something in the public domain" behind my back seems like exactly that, a silly game. If the IP "belongs" to the sponsoring company or to the university, that's fine either way; both sides can do business starting from that point if they are both willing partners. But when one partner suddenly takes unilateral action, that sends a message that they are *no longer* a willing partner.
In the short term - well, from an industry point of view you don't expect every research project to pay for itself, so if this one doesn't produce any *useable* end product (and code with an open source license might *not* be useable by the company!) that's not going to surprise anyone.
But in the longer term, if there are several universities who are *possible* places to sponsor research, and one of them has a track record of playing IP games, guess which one I'm *less* likely to recommend, next time around...
Upvotes: 2
|
2019/05/09
| 1,007
| 4,038
|
<issue_start>username_0: I will be working at a private company as an engineer (but in a heavily research oriented team), and I just learned the contact information of my manager. I will be sending an email to introduce myself, but I am not sure how to address him.
Just by searching his name online, I learnt that he works as a professor at a university (but is on a leave of absence, at least his page says so), but his current title in the company is engineer. So, in this case, should I address him as "Professor *surname*" or "Dr. *surname*" in the email, or just by his first name (which is the preferred way at this company)?<issue_comment>username_1: Not sure how it is in other parts of the world, but in my country, *working* as a professor and *having the title* of professor are two different things. The title is awarded for scientific achievements, is sanctioned by the president, and lasts for life, like a PhD; even when retired, such a person is still a professor. By contrast, it is possible to be hired as a professor by a particular university, and then the person is a professor of that particular university. That is a job title: when one changes jobs, one is no longer a professor unless the new university also grants such a position.
So check whether the person holds a lifelong professor title. If he only held such a position for some time, using the title would be out of place. Imagine your CEO worked as a plumber 10 years ago. Would you call him "Dear Mr. Plumber"?
But overall it depends on what is customary in the company. You might call him "Dear <NAME>", and he replies "call me Bob". Or he points out he's not a professor, or that they don't care about titles in the company, and tells you to call him <NAME>. You may also ask a colleague who is already working at that company, or someone who seems nice enough to sincerely answer your question.
But in the end, it doesn't matter that much. It seems you found out he was a professor by yourself; you weren't given that information by the company. So I would just write "Dear Mr. Smith".
Upvotes: 0 <issue_comment>username_2: Go for Professor so-and-so, he can tell you if he prefers to be called something else and you've applied the maximum respectful title at the start.
Upvotes: 0 <issue_comment>username_3: If you're here in the US, I'd go with Dr. You're addressing him as your supervisor at a workplace, not at an academic institution. He's not a professor there. His title there is probably something like "Director of Research" which is definitely not how you'd address him. But he is a Dr. and, in my experience in 40+ years in industry, that is how he'd be addressed.
But also, at most universities here in the US, lots of instructors who do not have PhDs and are not on tenure track are still called professor. Like me. I don't have a PhD and my official title is lecturer, but students and others commonly address me as professor nonetheless. (And yes, it makes me a little uneasy because, to me, there is a difference and calling anyone a professor regardless of whether they really are seems to cheapen the title for those who've actually earned it, which I have not.)
Upvotes: 2 <issue_comment>username_4: If this person has earned their PhD, I think you are safe to address them as “Dr. Surname” in an introductory email. Since you are not working in a university setting, using “Professor Surname” feels out of place.
I think it would also be permissible to address the email “FirstName Surname” because that is all the information you were given by the HR department, and it seems likely that they will reply and sign using their preferred name. Along this similar vein, using “Mr. Surname” would also be acceptable, however I personally feel this is less formal (and I would err on the side of formality).
It seems likely that, as you’ll be working closely with this person, they will prefer to be called by their first name, so I would chart a middle course (“Dr. Surname,” “FirstName Surname”) in the introductory email.
Upvotes: 1
|
2019/05/09
| 3,794
| 16,809
|
<issue_start>username_0: I work in an interdisciplinary field. My input is not generated by myself, but by talented people that I trust and that trust me to analyse their data and generate fascinating insights.
But here I am, once more stuck with a project where the input is bad. There is no use for blame or finding a scapegoat; we are in this together. And people learn. But I have been stuck in projects with big promises and bad or insufficient input since the start of my PhD. I moved to another place for a postdoc now, but this situation seemingly haunts me wherever I go.
There is not much I can achieve when most of my projects stop after the input quality control. But if I want to stay in academia, I desperately need to step up my game in actual output, not just in my ability to troubleshoot, right?
How do I move on from this? What chances do I have if I can just never land a "prestigious project" that results in a valuable publication? Is there a chance to still build up a good scientific reputation without those? Should I try myself at writing a review? Take on projects until one finally works out? (But how long will I succeed in getting another job if they don't?)
I could work with published data for a bit, but it is often not comparable between studies and severely lacking metadata.
The question is, do I have to accept that success (as in, basically being able to stay) in academia is to a large degree based on luck and I am not one of the lucky ones or is there anything major I can do? I really love the work I do, I would like to keep going.
So far I have taken on additional projects, tried to do my own 'side projects' on at least roughly usable parts of the data in the hope I will find a better dataset eventually where this could come in handy and kept in touch with collaborators in an effort to troubleshoot and eventually produce better input.
EDIT:
To address some questions:
I produce analysis pipelines, partially based on my own methods. Without "real data" application it's hard to publish those in my field.
Yes, it is "real world data". I do not expect perfect data at all. But I do expect technically correct, usable data. If the input is random, or has too few features to be statistically relevant, though, there is nothing I can do.
Imagine trying to do a statistical test on the similarity of blog posts, based on word usage, written by different groups of people - but many "groups" are represented by only two authors, the text is sometimes just one sentence long, and quite a few of the posts look like they were produced by a random letter generator, not having any actual words in them. Meanwhile, I was promised at least 5 authors per group, a minimum of 5000 words per text, and, of course, posts actually written by the authors assigned to them.<issue_comment>username_1: In science we investigate the unknown. As such, it is impossible to guarantee that your project will produce positive results. If you knew that your hypothesis was true beforehand, there would be no point in doing the project. However, if *all* your projects are turning out bad, that also sounds unusual. The best place to look would be peers and colleagues who work with similar things - is it really the case that they are just luckier than you? Or maybe they are doing something differently?
Almost any scientific project will have strengths and weaknesses. It is not necessarily your job to provide an exhaustive account of the weaknesses. Starting with reviewers and even before, there will be no shortage of detractors who will point these out. Selling the strengths, on the other hand, is something only you can do. If there are 99 features that have no statistical significance, it is not productive to get hung up on those. Obviously don't deny them when presenting your results. But the 100th feature that *does* have significance is the most interesting and bears mention, in addition to features whose insignificance by itself is striking. From there, more significant features can be uncovered.
As you get experienced with analysis you should cultivate a feel for good projects and bad ones. Poorly conceived experiments, lack of controls, experimenters known for carelessness, crazy hypotheses that have no foundation in literature, are all examples of giveaways for a project to stay away from. If you are backed into a corner and all of your prospective projects have crap data, then you can look outside your immediate collaborators. Successful analyses get made all the time so surely there is data out there that is not useless. As you mention, reviews are a good way to at least publish *something*, they can also attract new collaborators and help you understand the field better and get more capable at detecting bad projects. You can also try to re-analyze data from other researchers or papers.
Another option is to improve matters with your collaborators. Even though a negative result is not helpful for publishing, it is still useful information. It keeps them from wasting time on a red herring. If the data you get is bad after all, you should try to show this convincingly ASAP, so you can quickly get back to your collaborators and start coming up with a solution. In fact, you can use your experience of previous failures to guide them on study design and point out errors they might be making that gave you problems before. If too little is known about what it takes to produce good data, then projects should be small and quick, so you can iterate many times as you find the correct parameters. Big projects should be avoided until you're confident you've taken care of all the basic mistakes.
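A caveat worth attaching to the hunt for that one significant feature among many: with 100 tests at a significance level of 0.05, pure noise will clear the bar a handful of times. A small stdlib-only simulation illustrates this; the seed, sample size, and the use of a plain z-approximation (rather than a proper t-test) are my arbitrary choices for the sketch:

```python
import random
import statistics

random.seed(42)
normal = statistics.NormalDist()  # standard normal, used for p-values
alpha = 0.05

false_hits = 0
for _ in range(100):
    # 30 draws from N(0, 1): the null hypothesis "mean == 0" is true here
    xs = [random.gauss(0.0, 1.0) for _ in range(30)]
    z = statistics.fmean(xs) / (statistics.stdev(xs) / 30 ** 0.5)
    p_value = 2.0 * (1.0 - normal.cdf(abs(z)))
    if p_value < alpha:
        false_hits += 1  # a "significant" result that is pure noise

print(false_hits)  # on average about alpha * 100 = 5 such false hits
```

So before selling a lone significant feature, it pays to check it against a multiple-comparisons correction or a replication.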
Upvotes: 0 <issue_comment>username_2: This is a common issue, and it's why I recommend getting more into the weeds and being more choosy about whom you work with. (I basically always took leadership of any collaboration...then again, this is easier as the synthesist...people like you are used to being support...then again, I checked their work and sometimes found issues...none of them ever bothered to ask me questions or learn/criticize my methods!)
Similar issues happen when people ask for PDE mathematical models and analysis, but the engineering assumptions are incorrect. You actually add way more intellectual value by "asking five times" and checking input quality, assumptions, etc. than by just cranking the stats machine or the diffyQ solver. Ideally you should try to be involved even with the study design.
One serious area you might look into is US oil/gas. There's a lot of interest in optimization, neural nets, big data, etc. Also, they have a lot of money. (Even when they say they don't, they do. They are used to paying a lot for services, travel, tools, etc.) The data is not always perfect, but they have experience with making do and dealing with missing points as well. Of course you need to become more involved yourself in scrubbing, inspecting, and correcting the inputs. But I don't think they will be put off by a questioning approach - only if you tell them to reshoot the seismic or get in a time machine and drill better test verticals in 1950. But I suspect your tools can still often add value even with imperfect data AS LONG AS the imperfections are known before the analysis starts.
P.S. Even the questions on SE often suffer from this. People ask for help with outcome X bounded by conditions 1, 2, 3, but they would really be better served by questioning what their real output aim should be and what the restrictions are.
Upvotes: 0 <issue_comment>username_3: If you are the data analyst/scientist/statistician, you **need to be aware of the limitations of your approach given the data you are provided**. If you do not have sufficient data, **you should not even run the analysis** - if you do so, you are more likely to accept the results if they are what you "expect" and discard them otherwise.
**This is dangerous.**
A big part of data analysis is knowing your data and knowing their limitations. If you are given data that is insufficient to make the conclusions you are asked to make, **you must say so and refuse to do the analysis.** Especially in the ridiculous case you used as an example, where you are expected to make generalizations about groups from two authors. This has nothing to do with *luck*.
You would never conclude in a scientific setting that Group A is taller than Group B based on n=1 in each group. Don't let yourself get into a trap where you attempt to make the same level of conclusion in another context.
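A quick back-of-envelope power calculation makes the same point quantitatively. This is a sketch using the standard normal approximation for a two-sample comparison; the function name and defaults are mine, not from any particular package:

```python
import math

def n_per_group(effect_size, z_alpha=1.959964, z_beta=0.841621):
    """Approximate per-group sample size for a two-sample comparison.

    effect_size is Cohen's d; the default z-values correspond to a
    two-sided alpha of 0.05 and 80% power (normal approximation,
    no small-sample t-correction).
    """
    return math.ceil(2.0 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_group(0.5))  # medium effect (d = 0.5): 63 per group
print(n_per_group(2.0))  # huge effect (d = 2.0): still 4 per group
```

Even for an implausibly huge effect, two authors per group is below what this rough bound asks for.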
I think you already know most of this, because you talk about stopping at the quality control stage, but if this is how all of your projects are ending then you are putting in way too much time towards a project without having access to the data that shows you the project is feasible. As soon as you are given data that is unsuitable, you need to tell your collaborators that it is not sufficient, explain why, and move on. This step should take 15 minutes if the data are truly as poor as you are describing.
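Concretely, that 15-minute check can often be automated. Here is a minimal Python sketch of such a pre-analysis audit, using the thresholds promised in the question (5 authors per group, 5000 words per text); the function names are illustrative, and the vowel-ratio test is a crude stand-in for real gibberish detection, not a language model:

```python
def looks_like_language(text, min_vowel_ratio=0.25):
    """Crude gibberish heuristic: random letter strings tend to be vowel-poor."""
    letters = [c for c in text.lower() if c.isalpha()]
    if not letters:
        return False
    return sum(c in "aeiou" for c in letters) / len(letters) >= min_vowel_ratio

def audit(corpus, min_authors=5, min_words=5000):
    """corpus: {group_name: {author_name: text}}.

    Returns a list of human-readable problems; an empty list means the
    input at least meets the promised thresholds.
    """
    problems = []
    for group, authors in corpus.items():
        if len(authors) < min_authors:
            problems.append(f"{group}: only {len(authors)} authors (need {min_authors})")
        for author, text in authors.items():
            n_words = len(text.split())
            if n_words < min_words:
                problems.append(f"{group}/{author}: {n_words} words (need {min_words})")
            if not looks_like_language(text):
                problems.append(f"{group}/{author}: text looks like random letters")
    return problems
```

Running this the moment data arrives gives you a concrete, shareable list of defects to send back to collaborators, instead of a vague "the data is bad".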
Upvotes: 2 <issue_comment>username_4: Summary:
* Real research life with real world data is messy\*, and there will hardly ever be sufficient samples (my very personal prediction).
* There are huge opportunities (and needs) in working with small and messy data. Maybe that could become your area of research?
* Good data analysis work requires close collaboration - ideally already when planning the experiments, but certainly during data analysis.
Close collaboration will allow you to make them aware of your needs and of the fact that data analysis cannot work miracles. It is also necessary for you, because otherwise you may be employing inadequate analysis methods.
\* When I say messy I don't mean bad curation (although I do see opportunities here as well - though maybe more business than research) but reality sneaking in with lots of influencing factors, creating structure in your data where many (most?) data analysis approaches assume nicely independent data. I think this is a field that not only deserves more research but also has high practical importance.
---
I feel your pain. Been (almost) there as well. Actually, I still am (just my PhD is long done): so far with ≈ 15 years of professional experience in chemometrics, all the real world data I've encountered so far has one thing in common: too small sample size (even if it may look nice at first glance).
* One consequence I drew for me is that I started to do **research on such less-than-ideal situations** I encounter in practice, e.g.
+ small sample size situations: knowing that I have far too few cases (some orders of magnitude below rule of thumb recommendations), how to diagnose when things break down, how to stabilize models, what will break down, are there hard limits, etc.
+ in terms of messy data in the sense above (with lots of influencing factors):
I had situations where it turned out that biology doesn't really obey the disease classification medical doctors use (a classification developed for totally different purposes, as I learnt later on), raising the question of how to adapt data analysis methodology to such situations (which were somewhere in between classification and regression).
+ how to adapt validation/verification procedures in such situations
(I work very much along the lines that you can do whatever you think may work for modeling, as long as you do an honest verification and validation of that model.)
+ I see tons of similarly important questions that are unanswered.
To the extent that if you need such research ideas, I'd happily supply you with questions ;-)
---
* In my field, I think, it is going to stay like this: **well-characterized samples are expensive**.
In some respects you may even say that basic research isn't meant to have comfortable sample sizes. It's meant to find basic knowledge and point out promising possibilities, but the legwork of obtaining (and paying for) large sample sizes to make a method robust for routine use is something applied research/industry is supposed to do (and pay for). That point of view would say that taxpayer money should not be wasted on work that industry could and should do.
* On the other hand, I often see unnecessarily small sample sizes in academic research: too small here meaning that, given the sample size, even without any experimental data it is clear (or would have been, if one had bothered to check) that no knowledge can be gained because the study is too underpowered. This is clearly just **bad science**, and a total waste of experimental and data analysis effort.
If that's what you refer to in your question, it's going to be hard work to improve this - but don't give up! Science needs people like you pointing this out.
My experience is that, as a PhD student or fresh postdoc, how much you can actually do to improve the data may depend very much on how much weight your word carries with your supervisor (or even the top-level director).
What you can (and should) always do is to clearly discuss the limitations in terms of possible interpretation of the results of your study - including in the manuscripts you write.
* To be fair, there *are* **practical limitations**. If we study a rare disease where the big university hospital gets maybe one sample per year, I tend to think that working with very few cases is necessary (but again: spell out the limitations). After all, one has to start somewhere.
Whereas, if we're talking about easily accessible measurements of no particular ethical concern for a disease where the hospital sees tens of cases per week, then of course a thesis on 5 cases looks somehow lazy (not necessarily on the PhD student's side, though: the PhD student may not have been able to change pre-existing sampling plans).
* One consequence for my PhD thesis was: as I did not only the data analysis but also the sample preparation and measurements for my thesis, I put in substantial effort to obtain more samples (fortunately I had access to a comparatively large data bank, but in the end that approach, too, was limited by the availability of the rarer conditions).
I'd recommend at least taking a decided interest in how the data are generated (get a lab tour, have the collaboration partners explain how things work and what the data mean).
---
>
> If the input is random/to few features to be statistically relevant there is nothing I can do, though.
>
>
>
Yes. Again, this needs to be clearly communicated: in my experience, applied groups may expect miracles from data analysis (and you may have a particularly uphill fight here if this group in the past received data analyses that were badly overfit, and thus looked overoptimistic, and no one realized it).
In addition, you'll have to document that it isn't your "fault" that no nice results come out of this data. It's doable though (and again in my experience something that is needed in everyday data analysis work as well: I'm just having such a situation on my desk right now again).
>
> [...] "groups" only are represented by two authors, the text is sometimes just one sentence long and quite a few of the posts looking like they are produced by a random letter generator, not having any actual words in them. While I was promised at least 5 authors per group, minimum 5000 words per text and of course the post actually written by the author assigned to it.
>
>
>
A few thoughts about this. I "smell" some **communication/collaboration issues** here. Again, they are typical everyday issues in my research and data analysis work:
* I've met similar things due to fundamental **communication issues** (e.g. between statisticians talking about necessary sample sizes in the 1000s to answer a particular question, and experimental scientists "translating" this to "many" ≈ 7).
* "quite a few of the posts looking like they are produced by a random letter generator"
Your experimental collaboration partners may have no clue what you don't know about their techniques (again, a communication issue): unless you have a background in these techniques, you have no chance to recognize what is going on in those noisy measurements and how to deal with them.
They could be anything from artifacts that should be deleted (because the underlying mechanism that causes them is well known and can be disregarded), to "the signal is hidden under this noise and you data analysts surely have some magic to get it out of there" (not going to work, but a typical expectation), to "your outlier is my most interesting case". Without the help of the experimental/data-supplying folks you won't be able to deal with this adequately.
* Seeing all this makes me wonder whether you even have sufficient information about the background of the study to decide which data analysis approach is suitable?
Upvotes: 3
|
2019/05/09
| 606
| 2,403
|
<issue_start>username_0: I already have a BS and an MS in a certain field. I am contemplating going for a 2nd BS or a 2nd MS, for which my employer is willing to provide financial aid. The employer is generally supportive of me getting a 2nd degree, believing that I will become a better employee, and I am interested in being able to put it on my resume to potentially be more desirable to companies.
So there is the option to get a BS for this 2nd degree, or to skip the BS and go directly for the 2nd MS. I'm not sure which option would benefit me more. I believe I have done enough related real-world work that I could easily breeze through the BS (though it would take longer), or at least not struggle too badly if I skipped the BS and went directly into the MS.
Would an employer see more benefit in having 1 BS and 2 MS, vs 2 BS and 1 MS?<issue_comment>username_1: Ask your employer. They're the ones funding you after all.
Having said that, I strongly suspect they'll prefer a 2nd Masters as opposed to a 2nd Bachelor's. Several reasons:
* Masters degrees usually take a much shorter time to get.
* You learn more in a Masters. Bachelor's degree are broader in the sense that you often have breadth requirements. You can't focus everything on your major. That doesn't apply to a Masters degree.
* You learn more advanced things in a Masters degree. If you have a Masters in [topic], you are assumed to be qualified for jobs that require a Bachelor's in [topic]. The reverse doesn't hold.
* Holding two Bachelor's degrees is pretty unusual. Why do you need two? Why didn't you do a double major?
Discuss it with your employer, but I strongly suspect they'll prefer a 2nd Masters (they might even be assuming you'll do a 2nd Masters).
Upvotes: 2 <issue_comment>username_2: In just about every case, it would be better to pursue only the MS degree. Most BS degrees require a large number of preparatory and introductory requirements, with the degree-specific material largely occurring only in the final 1.5 or 2 years in the program.
Do not get a second BS degree after you have already completed an MS degree. While some may question even the wisdom of seeking a second MS degree, if you will receive new training in a field that you have not explored before and you will learn new skills that make you a candidate for different career paths then it may be a reasonable thing to do.
Upvotes: 2
|
2019/05/09
| 812
| 3,580
|
<issue_start>username_0: I was curious: what do people mean when they say you need formal training in a field?<issue_comment>username_1: It means that the person has received explicit, recognized training, following the norms of the field. For instance, for a physicist this may mean having a university degree in Physics. For a medical practitioner this may mean having gone through med school. For a Linux sysadmin, it may mean having some recognized certifications.
The term is usually meant in contrast to learning through self-study or experience. For instance, we would say that a software developer has formal training if he has a degree in Computer Science, but not if he learned his trade solely by working through many online tutorials or through practical experience working in open source projects.
Upvotes: 5 <issue_comment>username_2: Formal training would be learning something by completing a degree, certification program, coursework or other *formal* program.
Informal training would be learning something outside of an official program. You might learn something on the job, teach yourself, or have a friend teach you.
Depending upon the specific context, there is a grey area as well. For example, a graduate program may only consider *formal* training to be university coursework. Or, learning on the job may give you a company-level certification. Depending upon the company, their reputation, and the program's rigor, this may be informal or formal training.
For example, McDonald's has [*Hamburger University*](https://corporate.mcdonalds.com/mcd/corporate_careers2/training_and_development/hamburger_university.html) that provides training to company employees. Inside McDonald's Corporation, this would be formal training. Outside perspectives would vary depending upon who you talk to. An academic program likely would not consider this to be formal training.
Upvotes: 3 <issue_comment>username_3: Within any one specific circle, it usually boils down to accountability for whoever might be employing that person or using their services. If your pharmacist says "this is X, take 2 a day for a week", you'd expect there to be a big government body to have signed off to say "She seemed to know her stuff, you can trust her" and its really this that you trust, not the stranger at the counter. How 'formal' a program is, is usually synonymous with how good "She must be good she did ". It's worth noting that its very rare that the training itself is the formal bit. It's usually the test at the end, whatever that maybe. Nobody's allowed to become a doctor because they went to medical school. They have to pass the course.
What about in general? There it means surprisingly little! Very few fields universally acknowledge a single accrediting body, let alone share one between fields. Both within and between fields, the standards vary wildly. @RichardErickson's example of Hamburger University is a great example.
This is inescapable and perhaps desirable, as it should depend on the gravity and nature of the particular circumstances. There's not much need to prove you know what you're doing in order to do cryptanalysis: if you get somewhere, you already have the proof, and that was the aim. On the other hand, if you were implementing or auditing an existing system, it's important for whoever hired you to be confident you met certain standards.
Equally "formality" of training for bar tending is less important than for pharmacy. Getting the wrong beer is less problematic thatn getting the wrong drug, so the chain of trust is less important.
Upvotes: 0
|
2019/05/09
| 552
| 2,405
|
<issue_start>username_0: I'm attending a summer school for graduate students, where I'm required to prepare a scientific poster about the research I'm conducting. However, the project is far from being finished, I only have some preliminary results.
How should I indicate on the poster that it's an ongoing work, and the presented results are not final yet?<issue_comment>username_1: This is a pretty common situation. Focus on the questions you are trying to answer with your research and a bit about the methodology. In many ways that is likely to be more interesting to many of the poster "viewers" than the results since it emphasizes work in progress. Many student posters are like this.
Say something about why the questions are important and worth answering, for example.
You can mention the state of your research, of course, including what you have learned so far, but with preliminary results, later work might even change your direction.
Upvotes: 2 <issue_comment>username_2: >
> How should I indicate on the poster that it's an ongoing work, and the
> presented results are not final yet?
>
>
>
As @buffy mentioned, this is certainly not an uncommon thing to do. I actually really enjoy presenting work in progress via poster, because, as opposed to already-published research, the poster viewers may be more inclined to give feedback on the methodology and/or research questions/hypotheses.
It's not necessarily vital that you specify the research is ongoing on the poster, but if it makes you feel better you can say this is a *pilot* result or *preliminary* result.
An aside, but important, I suggest first figuring out **why** you are presenting the poster--that is, what do **you want to get out of this experience**? For example, are you at all interested in method feedback? Do you hope to refine your research questions? Are you going for networking only?
Cater your poster and your discussion topics to your needs, designing it to help you achieve your aims while effectively conveying your ideas to the audience.
Also an aside -- don't stress. Posters are pretty fun, especially if you know why you're there.
Another aside -- if you know the conference attendees and there is anyone you want to meet or get feedback from, *email them*. Tell them you would love if they could visit your poster on Day X at time Y.
Upvotes: 4 [selected_answer]
|
2019/05/09
| 1,474
| 6,007
|
<issue_start>username_0: Advantage:
==========
Family support. My parents believe I should and are willing to offer some financial support; in fact, they urged me.
I am currently at a big company doing relevant work, and I have done well.
*I want to.*
Disadvantage:
=============
My GPA is not good: 2.6 is pretty low for applying to a PhD. From what I know of my friends, those who applied to a PhD program right after undergraduate often had a much higher GPA (3.7+) than I have.
Financial support from my family may not be that much, due to many reasons.
I only have one published result, in an irrelevant domain, as third author.
Details of my background:
=========================
I live in China and want to apply for a PhD abroad.
Academic years
--------------
I graduated from one of the best universities in my country with a bachelor's degree in 2015. I spent my academic years in my teacher's lab and in competitions.
I have an ACM/ICPC regional bronze, a Meritorious Winner award in the ICM (Interdisciplinary Contest in Modeling), and a National 2nd prize in the MCM (Mathematical Contest in Modeling).
I cooperated with my teacher and a senior student, and finished a paper about the A\* algorithm as third author.
I would say putting effort into competitions had some side effects on my GPA, but I know excuses are pale and useless.
Industry Career
---------------
Since 2017 I have worked at a big company as an NLP engineer, and in 2018 I became lead of the NLP team. My job allows me to follow most new discoveries from academia. In the process of applying state-of-the-art results to real-world problems, I have gained much experience working with huge datasets in many ways.
Goal:
=====
I want to apply for a CS PhD in an NLP-related domain.
In short:
=========
I want to apply for a PhD and I'm worried about my weak background.
Any advice/experience is appreciated<issue_comment>username_1: It sounds like you have many industry experiences that would look really good on a PhD application. To be blunt though, your GPA is pretty bad. If I was reviewing your application, I would worry that you are one of those students who is very *capable*, yet also not very focused. Experience has taught me that students who show immense promise (publications, awards, test scores), yet have really poor GPAs, rarely amount to much as graduate students in the modern system.
While this may or may not describe you, students with low GPAs tend to also be the students who struggle to get to their teaching assistant duties on time and who cause administrative nightmares for the graduate committee. (E.g. failing all classes, but arguing that they should still be retained as a graduate student because they think they have a great dissertation). I'll admit that I personally only recommend PhD applicants below a 3.0 GPA for further review by the graduate admissions committee under very rare circumstances. (E.g. they overcame cancer or something). When I am reviewing applicants, low GPA is one of the easiest metrics to immediately discard an application.
**What to do:**
Your industry experience should be emphasised as an indicator that your GPA is not reflective of what type of PhD student you will be. Your letters of recommendation need to speak to your work ethic and dedication specifically. These are the very best things you can do. I would be much more willing to overlook a poor GPA if *everything else* was very promising for the applicant (GRE, publications, letters of recommendation, industry experience, in person interviews).
If you have external funding, some programs are much more willing to take you on. (If you are willing to pay the tuition, why would they turn you down? It's low risk for them).
I would also consider applying for master programs as a means of "proving" yourself to PhD programs down the road.
Upvotes: 2 <issue_comment>username_2: I would not imagine any respectable PhD program will accept you with no publications and a low GPA, even with some industry experience and willingness to self-fund.
From the university's perspective, admitting a wildcard to PhD studies is generally not a great idea.
1. Even if you're willing to pay for your school, your tuition is, in some sense, a minor issue for the university: you still may damage the university's reputation by doing a bad job teaching (essentially 'wasting' a teaching slot that could've been taken up by a more competent student), not publish enough (hurting the departmental metrics), or not find an advisor to work with. Reputation and publications are much more valued than tuition.
2. They don't have to compromise (at least in CS): there are so many amazing applicants coming in from undergraduate with a top-tier first-author publication (or more!). Why risk someone with a very different set of skills than the tried-and-true?
If your heart is set on a PhD, there are two ways I can imagine you can move forward.
One way is completing an MSc degree in a relevant discipline. If you're willing to pay your way, the bar is usually lower for master's programs. You will need to show you've completely turned around: ace your classes, start a research project with some advisor, and write a crazy good thesis. This will pave the way towards a PhD. These are not entirely wasted years: if you continue to a PhD at the same university, you may be able to have some of the courses you took for your master's count towards your PhD (so less coursework in the first two years, more time to focus on research!). This is the case at my university, but YMMV.
The second option is to score an internship with a professor, and then (by doing an amazing job) convince them to push for you to enter the PhD program. As a fair warning, mass emailing professors asking for internships will not work. They get several of these a day and they usually go unread. I would suggest targeting a few professors and reaching out to them in personal emails. This would take longer, but would probably yield better results.
Good luck!
Upvotes: 2 [selected_answer]
|
2019/05/09
| 746
| 3,146
|
<issue_start>username_0: I am a fairly new lecturer (within the last year) in the UK.
When I started my position, I did some negotiations and the uni was already happy to give me teaching relief for the probation period of 3 years (not completely, but pretty solid relief). I was told by colleagues in this negotiation phase that this was the only time I'd be able to negotiate until I won a grant.
Flash forward 9 months. I now have won a fairly large grant (about £200k). I'm excited about it. What kind of negotiation typically happens when such grants are won? I want to make sure I'm doing everything right in this court.<issue_comment>username_1: You want to ask some trusted people in your department whether such a grant would qualify as grounds to renegotiate your contract, as standards for this matter differ widely between universities and departments. Further, what exact grant you won matters much more than the sum - it's ultimately more about the reputation than about the money.
That said, my suspicion is that a £200k grant will not provide grounds for any serious renegotiation. While a very nice boost for a young lecturer, such grants are ultimately bread-and-butter for the university, not something that would trigger them to radically rethink your position and rank in the university. Usually, the only grants I see people occasionally use to renegotiate immediately are the top-level personal grants (ERC, or the respective national equivalents) - the type of grants that very few individuals in the entire university get, and which make a dean nervous that you might also get wooed by some other universities to leave and take your grant with you.
Upvotes: 5 [selected_answer]<issue_comment>username_2: The main factor is what the grant covers. If you applied for money to hire a postdoc, then that postdoc would not normally be expected to do any lecturing (under EPSRC rules, for example, and probably most other councils under the UKRI umbrella), so your teaching still needs to be done by somebody and the university is not getting any money towards that. You would then be expected to keep your normal teaching load. If the grant covers, say, 50% of your time for 2 years, then the university gets to keep half of your salary, and can use it to hire a fixed term lecturer. In that case you might be able to negotiate some teaching reduction, although not all universities honour these kinds of buy-outs equally scrupulously.
Even in the second example above, you should not necessarily expect to really have your teaching load halved. For example with grants from UK research councils, the grant actually only covers 85% of what you applied for, and the rest is covered by your institution.
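To make the arithmetic concrete, here is a toy calculation of such a buy-out. All numbers are hypothetical; the 85% figure is the one mentioned above, and the exact fraction varies by funder.

```python
# Toy illustration of a grant buy-out where the funder reimburses only
# part of the salary costs applied for (all figures are hypothetical).
salary = 50_000          # lecturer's annual salary
fte_bought_out = 0.50    # grant pays for 50% of the lecturer's time
funder_fraction = 0.85   # funder reimburses 85% of what was applied for

recovered = salary * fte_bought_out * funder_fraction
shortfall = salary * fte_bought_out - recovered

print(f"University recovers £{recovered:,.0f} per year")
print(f"University absorbs £{shortfall:,.0f} per year")
```

So even when half of the lecturer's time is "bought out", the institution still absorbs part of the cost, which is one reason the teaching reduction may be smaller than the buy-out suggests.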
As username_1 said, ultimately, it is totally OK for a new lecturer to be asking their colleagues carefully worded questions along these lines. If you do not start the conversation with an undue sense of entitlement, then your colleagues will happily help you evaluate the situation correctly, explain the norms at the institution (which can differ from place to place) and draw the right conclusions.
Upvotes: 3
|
2019/05/10
| 273
| 916
|
<issue_start>username_0: I got my paper reviews back and found there is an item **scholarship**:
[](https://i.stack.imgur.com/JZuQM.png)
Can anyone tell me what the scholarship score means in a paper review?<issue_comment>username_1: From the [dictionary](https://www.dictionary.com/browse/scholarship),
>
> *noun*
>
>
> 1. learning; knowledge acquired by study; the academic attainments of a scholar.
> 2. a sum of money or other aid granted to a student, because of merit, need, etc., to
> pursue his or her studies.
> 3. the position or status of such a student.
> 4. a foundation to provide financial assistance to students.
>
>
>
This is the first meaning - it's a measure of how much knowledge your paper adds to the world.
Upvotes: -1 <issue_comment>username_2: A guess, but it probably means good use of the pre-existing literature.
Upvotes: -1
|
2019/05/10
| 369
| 1,603
|
<issue_start>username_0: I applied for a university position and gave the names of three persons as references: A, B, and C. I am not very much in touch with A, B, or C, but I already have their permission to use their names as references.
Would the university I applied to let me know that they sent a request to those persons, and whether they answered or not? For example, would they state that they contacted A and C, and that only A sent a reference letter?<issue_comment>username_1: Normally no; the university contacts any or all of A, B, or C as they see fit.
They do not have to tell you who or when they contacted them.
They may contact A in the first round and B or C in the second - if you get that far...
Upvotes: 2 <issue_comment>username_2: No. The candidate will not be notified when, or if, the reference letter writers are contacted and asked to provide letters.
There are also some rare cases where we might ask for a reference letter from someone the candidate *did not list* as a reference but who we believe may provide an important perspective. This latter concept becomes even more important during the promotion and tenure process when it is often the case that some letters are explicitly required from people *not listed* by the candidate. If you move into academia, you should get used to the idea that external reference letter writers will have an important influence on your career trajectory, that you sometimes can pick them and you sometimes cannot, and that you will usually not know that they were asked to write a letter for you unless you (bravely) ask them.
Upvotes: 1
|
2019/05/10
| 1,036
| 4,473
|
<issue_start>username_0: In many cases it so happens that multiple workers are involved in a research problem under the supervision of a group leader. We will assume that the group leader is the corresponding author of the manuscript. Now, I have the following queries:
1. Who will decide who the first author will be? I am talking about a scenario where all the co-authors are PhD holders. If there is a dispute, who has to resolve it, and how?
2. Who decides the sequence of co-authors in the manuscript, first author or corresponding author? Is there any international code of ethics in this regard?
3. Again, we have to assume that it is practically impossible to quantify the contributions of respective co-authors in a well-funded, well-qualified research group. And qualification sometimes comes along with ego. How can the corresponding author decide that the second co-author has contributed more than the third co-author, and the ordering will not change (whosoever had decided based on point 2)?<issue_comment>username_1: The meaning of author order varies across fields. In some fields, authors are listed alphabetically. In others, the order reflects some sort of perceived contribution. Some people prefer to be the last author.
The only unified code, and it is unwritten at that, regarding author order, is to talk to your authors and be considerate to their feelings. It is academic misconduct to leave someone out as an author. Similarly, it is wrong to list someone who should not be an author. This means that it is up to everyone to decide. If your research group is so dysfunctional that this is hard, have those conversations at the start. In fact, I advise everyone to have those conversations at the start and during the process just to keep hard feelings from developing.
Upvotes: 4 <issue_comment>username_2: As far as I know, the only "international code of ethics" in this regard is that *all authors must agree on the author ordering, or the paper cannot be published*.
>
> Who will decide who the first author will be? I am talking about a scenario when all the co-authors are PhD holders. If there is a dispute, who has to resolve and how?
>
>
>
If there is a dispute, they have to negotiate until they find an ordering that everyone agrees with. Nobody has the right to put your name in a certain position without the consent of you and all the other authors. (It is irrelevant here who holds a PhD or not; the authors have the same rights no matter what kind of degrees they have or don't have.)
If the authors can't reach agreement, they can't publish the paper. A subset of the authors could decide instead to publish a new paper with the remaining authors' contributions removed or recreated from scratch.
Of course, the authors could agree that they are going to let a certain person decide (the corresponding author or whoever), and that they will follow that person's decision.
Commonly, the journal will require every author to sign a statement (or click a check box) that they approve all aspects of the submitted paper, including the author ordering. So if anyone disagrees with the ordering, they withhold consent and the paper doesn't proceed until agreement is reached. Or, in some cases, the corresponding author is asked to certify that all authors consent. It would be extremely unethical for them to certify this if the authors do not all agree to the author ordering.
>
> Who decides the sequence of co-authors in the manuscript, first author or corresponding author? Is there any international code of ethics in this regard?
>
>
>
Same answer.
>
> Again, we have to assume that it is practically impossible to quantify the contributions of respective co-authors in a well-funded, well-qualified research group. And qualification sometimes comes along with ego. How can the corresponding author decide that the second co-author has contributed more than the third co-author, and the ordering will not change (whosoever had decided based on point 2)?
>
>
>
As above, the corresponding author can't make any such decision unilaterally.
Upvotes: 3 <issue_comment>username_3: Since you don't specify the field, I'm not sure how helpful this will be to you, but recently there has been a movement in economics to encourage the publication of papers under a system of certified random author ordering. You can read more about this trend at <https://www.aeaweb.org/journals/policies/random-author-order>.
Upvotes: 1
|
2019/05/10
| 960
| 3,943
|
<issue_start>username_0: I wrote a paper with some coauthors who are more senior than me (I am a grad student) and it got published, although not in a top journal. I thought I understood it and followed the proof line by line, but I have been asked to give a talk about it, and now I realize that the paper is not very good. We claimed to improve a result of some other people, but actually one of our main theorems, which we spend lots of space proving, is a trivial consequence of theirs just worded differently. I'm worried that giving a talk about this paper will make everyone in the audience think I'm/we're stupid for not noticing this.

I was specifically invited by someone interested in the topic, so I can't even talk about anything else. What should I do? Pretending I don't know it's bad gives me more chance to get away with it, but seems dishonest. Owning up to the problem sounds like career suicide. And I don't have a good reason to cancel the talk, but I'm thinking of saying I'm sick and can't make it, because I really wish this paper didn't exist.
This may sound risky, but I don't believe it is. Give a technical description of the problem as you normally would and then give an historical tour through what you originally thought and what you now think and why.
Rather than being career suicide, you will, IMO, be thought refreshingly honest and open to the truth wherever it leads. If you "get away with it now" but don't later, things will be much worse. Much much worse is if a student in your talk raises his/her hand with ... well, you know.
Out with it my friend. Just take a deep breath and say it all. You can even say that you feel a bit silly now that you've gotten a deeper understanding. Noting wrong about a deep understanding, of course, even if it eludes you early.
And, of course, your more senior colleagues and the reviewers, if any, also missed the point. You are a hero, not a bum. The math is what it is.
---
Another option, of course, is to say that you have found an issue along with the consequence but then talk about another topic. But I think that would disappoint people. Seeing the flaw can, itself, be enlightening.
Upvotes: 4 <issue_comment>username_2: username_1's answer is quite good, but it's important to add:
>
> Check with your coauthors that you're actually right in your assessment.
>
>
>
I've definitely had the experience of mistakenly thinking something I've proved is a trivial consequence of something else, when in actuality there was a slightly subtle reason the "something else" didn't actually apply. (And, yes, I've also had the experience of realizing *correctly* that something I did was a trivial consequence of existing work - or indeed simply trivial on its own.)
Another important aspect of the above is:
>
> Don't surprise your coauthors with this observation.
>
>
>
Think of it from their point of view: how would you feel if one of your coauthors were to give a talk on your paper, and midway through unexpectedly announced that they'd discovered that one of your results was trivial?
Upvotes: 4 [selected_answer]<issue_comment>username_3: In my discipline, part of the purpose in giving talks is to thrash-out ideas which might not be particularly brilliant, in order to get feedback from your audience, so that you can build on the ideas or find better ideas. Academia is supposed to be collegial, which means that we should give and receive robust yet constructive criticism on work-in-progress (and I think you should treat your paper as work-in-progress with potential for future development, even though it has already been published) in order to advance scholarship now and in the future. In other words, a talk need not be a perfect and earth-shattering result.
Upvotes: -1
|
2019/05/10
| 1,002
| 4,245
|
<issue_start>username_0: **My friend told me this story that happened to him a few month ago during an interview for an assistant professor position in a north-american university:**
---
I was on a campus tour as a part of a full-day interview program and was accompanied by two members from the selection committee: the department head and one of the senior professors (in his mid-50's). Everything was normal and they were showing me the facilities each building has etc. Now, we are at the gym and out of nowhere the senior professors tells me while pointing at the climbing wall:
>
> This climbing wall is the best place to watch young girls, I like doing that when I come to the gym.
>
>
>
The department head heard him and shushed him right away. He also told him to lower his voice, otherwise someone would hear him and he might get in trouble (or something along those lines). Honestly, I didn't know how to react, so I just kept quiet and didn't comment. It still bothers me so much that I didn't respond to let him know how inappropriate his comment was.
---
My question has two parts:
1. Should he try to reach out to someone at the university (and who is this person?) to let them know what one of their professors *who deals daily with female students* said?
2. How should anyone react to such comments given that this person is on the selection committee? *I know this might be a Workplace question, but I felt it fits here given these circumstances.*<issue_comment>username_1: I'm going to just respond to the easier first half of the question. If this is in the US, every university has clearly designated "Title IX" officers whose job description is exactly to deal with this kind of complaint. The name and contact info of the Title IX officers should be easy to find via google. It seems unlikely to me that this remark will result in any serious action, given the typical response to much more serious allegations, but that office will know whether there have been other complaints and can keep this complaint in mind if there are further complaints in the future.
Upvotes: 2 <issue_comment>username_2: This comment is clearly inappropriate for a job interview situation, but in this situation the best response is to ignore it or change the subject.
The gym is a public place. In most locations, there is no right to privacy in public places. Anyone can watch anybody at the gym.
If you object to "watching young girls," I would point out that this is perfectly normal. According to Wikipedia, about 10 million people watch episodes of "The Bachelor," which is apparently a show about women of the same age you would see in a college gym. (I don't have a TV.) I'm pretty sure nearly all of those 10 million people also deal daily with females, without any difficulty.
The idea that watching certain people will lead to sexual misconduct is wrong and paranoid.
Upvotes: 3 <issue_comment>username_3: Although the comment was absolutely inappropriate, it wasn't directed toward your friend and had nothing to do with the selection process.
The department chair took notice of the comment in your friend's presence. We could hope the department chair will have taken appropriate action.
I think that's sufficient, especially since your friend is a candidate for a position. If your friend believes he or she must do something further, your friend could write to the department chair, saying something like, "I couldn't help but hear Professor X's comment. Have any students complained about his comments or demeanor? Should I report his remark, and to whom?" That puts the department chair on notice that you took notice, didn't like it, and are prepared to go further.
Upvotes: 3 <issue_comment>username_4: Interviewing for a job is a special skill. One must ask questions that elicit useful information and ensure that the candidate is talking most of the time, not the interviewer.
Neither thing tends to happen when academics conduct these interviews. They all believe they are fascinating and incredibly good at everything.
The inappropriateness you report is appalling, but it is just another corollary of academics doing all sorts of management chores they are neither trained for nor good at.
Upvotes: 0
|
2019/05/11
| 1,643
| 7,077
|
<issue_start>username_0: I am currently writing a proposal for a new research topic in my field of system biology. This proposal will determine what I will be working on for the next couple of years.
However, I am running into disagreement with my advisor on the viability of the research proposal. I think he is completely in the wrong on this one and I cannot convince him otherwise (after talking to him), and the research proposal is due soon.
Essentially, his vision is that a new method X (of a class of methods) might work "better" than exisiting method Y on our problem. So I should do research on method X for the next two-three years. I should put into my proposal that method X could work "better" than Y.
However, I am convinced that there is no evidence that method X will perform better than method Y on our problem.
My argument is that
1. Comparing method X and method Y is like comparing apples with oranges. Method Z is the one that should be compared with method Y. But it is already known that method Z does not yield benefits compared to method Y; in fact, they are the same, just with a different order of execution.
2. There has been no evidence that any method X has performed better than method Y on any related problems.
3. In fact, some instances of method X are the same as method Y. So the problem is ill-posed.
4. Other researchers are already aware of method X, but they do not use it. We can all guess why.
I think this line of research is fundamentally wrong, and I cannot convince myself to write up a research proposal on something I disagree with and imagine spending the rest of my research career on. I almost feel like he is trying to fail me, because I read his rationale for pursuing this problem (the benefits of method X) and his rationales were either wrong or impossible to evaluate at the present moment. Plus, he is definitely not an expert in method X in any sense.
I am lost as to what to do. The proposal is due in 5 days and I cannot bring myself to write a single word. To be honest it feels like I am writing utter bullshit. I do not have time to quickly check method X (because it is an entire class of methods with some variations) and to compare with method Y in order to come up with evidence that it is indeed better.
Is there anyone out there who has had these sorts of disagreements with their advisor on a research direction, and how did you solve it?<issue_comment>username_1: Talk to your advisor - either you submit a proposal or you don't.
Between you, if you both agree to submit, you need to work out a plan of X or Y or even a comparison of X to Y.
So four options:
1. No submission
2. X
3. Y
4. Some combination
Time is running out: until you two find an agreed path, you may well miss this proposal opportunity.
Upvotes: 1 <issue_comment>username_2: I have regularly had problems with my supervisor regarding scientific direction. I have always known that I write much better than I speak. About 6 months into my PhD I reached the conclusion that it's possible that I am not able to coherently convince my supervisor of my ideas and reasoning.
This has in fact been pointed out to me by my PhD advisors and also the examination board. In their words, I have an extremely unstructured, abstract and convoluted line of thought. Although I may reach the same conclusion, my process is not easy for other people to understand.
**So what did I do?**
I adopted a "show them before telling them" approach. Whenever I wanted to introduce new ideas to my supervisor, I would generate the supporting data, create a presentation and then speak to them. This mostly meant that my supervisor got the point.
In your case, that may not be possible because your application is due in 5 days.
In that case, I would suggest you write up an alternative draft containing your vision of the proposal. Do this, but also complete the proposal your advisor wants you to write. Before submission, send both drafts to him in a politely worded e-mail asking him to review them.
Upvotes: 3 <issue_comment>username_3: The best advice is to talk to the Program Officer. You didn't mention which agency you are submitting to. They have published goals about what they want to fund in addition to individual solicitations. Have you looked into any of this? Aside from any ethical decision about how to spend your time, your program officer may just say flat out that they aren't interested in that type of work, which can save you a lot of conversation and angst.
As a research administrator, I can tell you that hastily written proposals are not likely to get funded to begin with. There is a lot of administration involved on top of your own writing, and in general you need the administration to align with your project. If you don't know what your science is, you probably haven't done much with the administration, and that will cost you as well.
Upvotes: 0 <issue_comment>username_4: This is general advice and may or may not help in your current situation. Especially since time is short.
But mostly it is a warning that fighting with your advisor is not often a path to success. Ideally you want to graduate and make your own career untethered to your advisors and their ideas. You want to make your own way. But the realities of being a doctoral student are that the advisor has a lot of influence over and impact on your future - at least initially. Most places, they need to affirm the quality of your work, which means, generally, that they agree with it. Fighting with them doesn't get you there.
There are ways to avoid fights that may be open to you. One is to just go along with the advisor's view until you have the opportunity to substitute your own. Many do this successfully. Others cannot for various reasons. Another way is to find a more compatible advisor, even if it means changing universities, perhaps even starting over. This is an extreme step, of course, and causes disruption. But sometimes it needs to be done for ethical or even psychological reasons. If you are so strong in your views and *will not yield*, then you are probably better off finding a different path to success as it may not lead through this advisor. Whether yielding in the short term to enable long term success is at all attractive only you can say.
I'd suggest also, that making it a formal fight - going to higher authorities to impose your will over that of your advisor is probably the least safe option. You may win the battle, but be sabotaged for the rest of your career if the advisor also has a personality that *will not yield*. Try not to get into such a situation. It isn't likely to end well for anyone.
It is possible, for some students and in some institutions, to work completely independently of an advisor.
Finally, I suggest that you look for what seems to you to be the least disruptive path. You know the situation better than anyone here, but beware of the career destroying actions that might lie along the path.
Upvotes: 3 [selected_answer]
|
2019/05/11
| 937
| 4,165
|
<issue_start>username_0: 1. As a member of the teaching faculty in mathematics, do I need to change the difficulty level of assignments after assessing the average class performance in the first couple of lectures? I am not sure whether this is done across universities in graduate-level courses.
2. Or do I continue to maintain the rigour irrespective of whether students are following (enjoying) the lectures? Should teaching faculty worry about the student evaluations they receive at the end of the semester?
In the most common scenario, there will be students from weaker backgrounds, and we need to be responsible for them as well.<issue_comment>username_1: Just change the marking scheme.
Instead of awarding all points for the final correct answer, which is what some do (and which I found brutal - a 20-point question was graded as 0 because the final answer had the decimal point in the wrong place), try partial marking based on the relevant parts of the solution. I do this: X points for this step, Y points for that one, and you can make the final answer worth 30% or 50%, whatever is appropriate.
This helps the weaker students who may not complete the question in the allotted time as they get points for the progress they made.
That way you don't have to re-write the assessment questions, just a marking scheme.
Upvotes: -1 <issue_comment>username_2: I would suggest that you step back and ask yourself why you are giving these assignments. Some people give out assignments purely so that they can grade students and then they think about fairness of grading, etc.
But, if you consider that it is through reinforcement and feedback that students actually learn then the solution becomes clearer. If you consider that the human brain works by actual physical rewiring - connecting synapses - then you see that students need these exercises to make the material clearer in their minds and so that they can move the ideas from short to long term memory. This is discussed in the book *The Art of Changing the Brain* by <NAME>. It is pretty fundamental to learning theory in general.
Thus, the exercises should be there to enhance learning, primarily.
So ask yourself whether the exercises you use are enhancing learning or not. Or are they just a stumbling block that some students can't get over? If it is the latter, then you should use different exercises that better enable learning.
My goal was always to teach every student. If I had a reasonable number in a class and *if they were willing to do what it takes to learn*, then I could be successful. But for many students it takes a lot of practice to come up to speed, as well as a lot of feedback.
But my goal was never to just *deliver information* and use what you want of it. Wikipedia does that job pretty well.
---
However, for your weaker students - those who are willing to do the work, anyway, there are other options. If your current exercises are good in some ways and you don't want to drop them, you can provide additional exercises for those who want to try them. These might be chosen to better enable good performance on the current set. But you should also provide some mechanism for the students to get some feedback on these new exercises. From you is probably best, but sometimes peer-review might be sufficient. Or even set up study groups among the strugglers.
Upvotes: 3 [selected_answer]<issue_comment>username_3: No and yes. The time to assess the class performance is not after a few lectures but after a term. Students will naturally adapt and the quality of their work will rise or fall with the expectations (within reason) of the instructor.
Changing the marking scheme mid-stream is generally not a good idea. If the remaining students really do poorly overall, the instructor can always discreetly rescale the assignments to help the class before final submission.
In case of immediate doubt, ask the previous instructor (or another faculty member) for example assignments or comments on assignments to make sure your expectations are not *completely* out of line with previous years. It could just be that this is a habitually “hard” course where historical averages are low.
Upvotes: 1
|
2019/05/11
| 5,745
| 23,794
|
<issue_start>username_0: I am currently writing my PhD thesis and I was feeling guilty about the things and skills I could have attained during this period but didn't. I have worked hard on my research and have simultaneously pursued my hobbies of painting and drumming during my PhD. Not that I joined a club or a band; I just studied art books and YouTube videos and practiced in solitude.
A bit of information about me. I am not a brilliant student. Being average, I have struggled a lot during my PhD research as compared to my colleagues. This has led to spending a longer time on my research than my colleagues. It will take me a little over 5 years to defend. My colleagues and friends have taken somewhere between 4-5 years for their PhD.
At the moment, my only skills are doing research (owing to my PhD), making somewhat okay paintings, drumming a bit, still loving my hobbies, and being a bit confident about presenting and public speaking owing to the conferences I have been to.
But, looking back, I feel like I should have invested my free time in meeting new people, learning more about public speaking, should have taken leadership positions and should have invested my time in doing things that would have improved my CV. In short, doing things outside my comfort zone.
How can I cope with the regret about my failure to do things that would have helped me academically and in industry? I am 30 years of age (fyi).<issue_comment>username_1: While sometimes it seems that spending more time working is equivalent to accomplishing more (especially in academia, where there's always *that one colleague*), this is not true in general. In particular, for demanding intellectual work (as required in an academic environment) it is important to take breaks, vacations, and spend time doing other things to refresh one's creativity (see, e.g., Clark & Sousa, How to be a happy academic, or any of the numerous articles online).
I personally like to look at CVs / personal websites of successful academics, and it is quite common that they have some hobby which has nothing to do with their work which they spend much time on, like sports, chess, music, etc. So you don't need to feel guilty at all. It's very common that people struggle during their PhD studies, everyone has to figure out for themselves how to "make it work". If for you that meant spending time in solitude, that's fine.
>
> This has led to spending a longer time on my research than my
> colleagues. It will take me a little over 5 years to defend. My
> colleagues and friends have taken somewhere between 4-5 years for
> their PhD.
>
>
>
So it seems you didn't actually take that much longer. Another reason not to feel guilty.
>
> I should have invested my free time (...) doing things outside my comfort zone.
>
>
> (...) I am 30 years of age (fyi).
>
>
>
Why not start now? At 30, most of your life still lies ahead of you.
Upvotes: 5 <issue_comment>username_2: This feels like [Imposter Syndrome](https://en.wikipedia.org/wiki/Impostor_syndrome), of course. And I'll guess that you share these feelings with quite a large proportion of other recent graduates and finishing doctoral students.
There are two things going on here. One is that you know more than when you started and, if you have advanced at all, your standards are much higher. Looking back, you doubt that you met your current higher standards. But you were learning all along the way.
But the second point is also important. Perhaps your drumming and painting is what kept you sane enough through your studies to do a good job. Perhaps that is what gave you the mental breaks to let your mind work more efficiently along the way. Don't think of it as wasted time. Think of it as necessary relaxation that lets the brain recover.
I know one eminent computer scientist who has been a rock & roll guitarist since approximately forever. It is a necessary thing, not a waste of time and effort.
Upvotes: 4 <issue_comment>username_3: This is a classic issue experienced by many. As an enrolled student (whether undergraduate or postgraduate) at any half-decent institution, there are so many opportunities open to you, and nobody could possibly do everything that might be enriching. Then, you graduate and those opportunities are much harder to find (and cost a lot more money!). In various student-feedback surveys, I have commented that it would be good if more of the opportunities offered by Higher Education institutions could be made available not just to current students, but also to recent alumni, since it is the years just after graduating when you might have the time and motivation to really get a lot out of the opportunities you did not have time for as a student (obviously, a lot depends on the nature and extent of any professional and personal commitments).
As for a solution, all I can say is, "learning is for life, so be creative in making opportunities for yourself". It is usually possible to achieve something worthwhile even if you do not have much time and/or money, provided that you are patient and open-minded.
Upvotes: 2 <issue_comment>username_4: >
> I am not a brilliant student. Being average, I have struggled
>
>
>
Being self-critical is not uncommon. Balance critical self-judgements against a comparable list of accomplishments, however small.
>
> But, looking back, I feel like I should have ...
>
>
>
Feelings of remorse or regret are not uncommon, especially standing on the threshold of a milestone in life. Anyone with a realization of their own frailty will have them. Allow yourself the dignity to accept your choices as uniquely your own.
>
> How can I cope with the regret about my failure to do things that would have helped me academically and in industry?
>
>
>
This has two sides. First, the personal. Here, my immediate advice is to seek professional guidance. I imagine by now that you realize that you would never have gotten through your academic career without the professional advice that you had from your advisor to help you solve some of your more challenging problems. Correspondingly, you might then accept that sometimes you will also need to seek professional advice to solve your personal problems. We can spend forever in a forum discussing counseling methods, but in the end, you have to take this step on your own.
The second side is the professional. Here, my immediate advice is to talk with your advisor and with others at that same level whom you look up to as leaders. Ask for insights about how you can become better at networking. Ask whether any opportunities are still open to take the steps that you now regret not having taken sooner (attending conferences with a presentation or becoming active in leadership in a professional organization).
Finally, you should not see this point in time as a door that is closing on a world that you *could* have had. You should instead learn to see this as a door that is opening on a world that you *can* make as your own.
Upvotes: 3 <issue_comment>username_5: You shouldn't regret decisions to devote time to yourself instead of work. What you should regret is making foolish decisions to do something you hate when you could be doing something you enjoy. Time spent doing what you love is never wasted.
You say that you have spent some of your time on various art hobbies. However, it doesn't sound like you regret the art - even in the context of how you could have done better in your PhD, you don't really say you wish you hadn't done the art stuff. You just regret not having spent more time on work, as opposed to not having spent less time on art. Based on this, I would assume that you made your decision to spend time on art rationally. You surely understood at the time that doing it would in some way occupy time you could have spent on work - but you did anyway because the art seemed like a more valuable and meaningful thing to do. There's nothing wrong here - you should always do whatever is most worth doing; life is too short to waste time on things not worth doing.
Now if you had some sort of compulsive habit where you constantly played drums to the detriment of your work, and even though you realized and understood that the drums were a bad idea, and regretted doing it even as you did it, but still couldn't quit - that would be something to regret, because you'd be making a decision that you know is not in your interests. But that is not your situation.
You might say that at the time it seemed like a good idea to develop these hobbies, but in retrospect it no longer appears so. This, still, is not a cause for regretting the hobby: At best you might conclude that you have developed better judgement and wisdom as to what matters in life. But that comes with experience; it stands to reason that past you lacked the experience, and couldn't be faulted for not making wiser decisions, even if the decisions *were* unwise (which I contend they were not, in your case).
Also, it's not a given that you "could have" spent more time on work. People like to ignore morale and pretend you can simply will yourself to do anything at any time. But this is not true - from areas such as management or military organization we know that morale can have tremendous inertia, and can even act as a force in its own right. So if you could not muster the morale to do more work, and instead spent your time on a hobby, there's realistically nothing being lost. Following this logic, to say that "I could have accomplished more if I worked more" is a bit like saying "I could have flown away if I had wings" - it's counterfactual thinking. If anything, the hobby might have helped you do what work you did accomplish, and hence went beyond not detracting from your productivity but contributed to it.
Consider another ridiculous hypothetical: You spend 8 hours every day lying in bed doing nothing. What if you just stopped sleeping - you would have 50% more time every day in which you could be productive! But the problem is clear: If you could even endure sleep deprivation for any amount of time, your productivity would quickly drop to a tenth or less. So your net output would become not 150%, but 15% of what it was before. It is a very similar situation with hobbies: Although unlike sleep, being deprived of hobbies doesn't kill you, the mind needs rest. Most people cannot simply work without any respite for months on end. Forcefully preventing them from hobbies usually results in productivity tanking.
But also, I think it's worth recognizing that regret or no, you don't have a time machine. You cannot go back to that time and play drums less and research more. You won't be doing another PhD. You might give advice to other young PhD students - but their personality will be different from yours, and it will come down to the basic principle of making rational decisions in line with your goals for your future. So I would look more towards how I can do my best today and tomorrow, rather than how I could have done better yesterday.
Upvotes: 3 <issue_comment>username_6: My PhD studies were fantastic times. Great people to be around, great parties, met girlfriends, met my wife, saw a lot of movies, did I mention parties?, and did all sorts of crazy things.
And also did some research.
Many years later I fondly remember this time, my PhD in hand (not really useful in industry, besides having a cool CV). Some of the best years of my life.
I will never get the Nobel Prize. I will never be chief advisor to the president when it comes to matters of national science. I will never be on TV as the go-to guy when it comes to explaining science. And I will never have groundbreaking discoveries which will have my name on them.
But, man, how I loved these years. Now that I think of it, I should have learned to play the drums (seriously). In a band.
Please seize the day and stop worrying about insignificant things. In 20 years (not on your deathbed, just in 20 years) you will be telling your kids what a cool time it was. Also, you will be able to say, "Did I mention that I was in a rock band?"
Upvotes: 3 <issue_comment>username_7: >
> But, looking back, I feel like I should have invested my free time in meeting new people, learning more about public speaking, should have taken leadership positions...
>
>
>
First - maybe that's true. I mean, I kind of feel the same way about my own PhD. Not about public speaking maybe, but I certainly could have done a whole lot more to make connections and make a path for myself into the field I was in. I kind of regret not having done that. I'm not going to tell you that you made the exactly correct choices and had the perfect priorities in hindsight; maybe you could have arranged your life better, maybe not. At any rate, it's not clear from your description that you should have sacrificed your passions during those years to better "get ahead".
Second - the environment is sometimes also to "blame". Did your department help create opportunities for PhD candidates to meet and interact with academics from elsewhere? Did your adviser introduce you to people and groups, and encouraged you to foster such relations? Did the department/your advisor present opportunities for you to speak in public, or made it easier for you to gain confidence in doing so? Does your department make an effort to place young researchers, especially Ph.D. candidates, in leadership positions (if only of limited scope or with supervision)? I would guess the answer is "not so much" or "not really". So it may not be their fault, but it's partially their responsibility.
>
> and should have invested my time in doing things that would have improved my CV. In short, doing things outside my comfort zone.
>
>
>
It's not the CV that matters, it's how you will live your life. This is easier said than done, but - we need to strike a balance between doing things we like and are passionate about and doing things we need, or have to, or can't avoid. Ideally these overlap so much that we don't feel we're making any significant sacrifice - but for most people, that's not the case.
>
> How can I cope with the regret about my failure to do things that would have helped me academically and in industry?
>
>
>
1. Don't try to psychologically deny or repudiate your past self and past choices.
2. Going into the future, try consciously planning how to divide your time between different activities so that you can "psychologically defend" your decision to your future self; and so that you don't deny yourself your desires and wishes on one hand, and don't ignore what "needs to be done" on the other hand.
3. Be around people who will support items (1.) and (2.) and who can make you feel better about yourself, or inspire you to do things which make you feel better. Those could be friends, family members, colleagues and/or a psycho-therapist.
4. Engage in some ongoing physical activity, i.e. don't be stuck in your room/house/office all day. That helps both your physical and mental well-being.
Upvotes: 2 <issue_comment>username_8: There is no shame in pursuing your own dreams instead of being an obedient lapdog.
If anything, you should feel like a useful tool / lapdog for someone else if you don't dare to follow your own instincts, visions and dreams and instead just take whatever opportunities show up in front of your nose.
Upvotes: 1 <issue_comment>username_9: Your PhD studies (at least, in physics) are meant to train you to be a person capable of doing independent research.
If your PhD work is deemed good enough to get the degree, that also means that you acquired the skills needed to complete it. More than the specific skills you learned from it, it is important that you learn the attitude of a PhD, i.e., figure out how to make things work, don't wait for people to give you the information you need but get it yourself, make informed decisions, accept your limitations and seek out help when needed. Oh, and learn to say "I don't know" instead of inventing a stupid answer (I learned this skill during my bachelor studies and it served me well during my doctoral defense. I was told later that this impressed one of the members of the committee more than any other answer).
So, ask yourself the question: would you be able to make a study similar to your PhD thesis, without a supervisor? If the answer is yes, you are ready to have Dr. in front of your name and shouldn't have regrets. That being said, by now you should have realized that having a PhD doesn't make you a superhuman or super smart. It just made you more capable than when you started :-)
Upvotes: 1 <issue_comment>username_10: When people tell me I shouldn't regret something, it never makes sense to me; how could I decide not to regret?
What I've found useful is to classify events as 'over' and 'next'. When I feel myself struggling with a problem, I ask myself: is this event in the past? What's done is done. It's a reminder to fight battles in the present, rather than repeatedly revisiting battles over things that already occurred. We can accept them without approving of them.
Upvotes: 2 <issue_comment>username_11: Frankly, as a line manager in a large IT company, I meet many freshly finished students applying for job positions. I don't even look for what they can or cannot do. I fully expect them not to have any particular knowledge that would directly help them in the job (except for a general inclination towards IT, which is broad enough that even that is not a given). The degree tells me that they had the grit to get through, and the individual marks give me a somewhat more detailed picture (and maybe highlight points which I should dig at in an interview).
There are exceptions, of course, but without fail they are exceptions not because they paid more attention at the uni, but because they were "geeks" even before joining the formal education system - i.e., they hacked at computers from young ages and are mostly self-taught; uni simply gave them a bit more theoretical background on top of that. I myself was in that category, and absolutely *everything* I brought to my job was my own, nothing from the uni (except the degree).
I heard the same from, for example, engineering people - if a brand new engineer gets his first job, nobody expects him to engineer some critical part, building or bridge without intense coaching and review from experienced people. The real learning starts when they *leave* uni, not when they begin it.
This is not to say that university-level education is superfluous, far from it. I absolutely had many cases where some task was quite easy for me (compared to colleagues without an IT education) because of the broad background knowledge I received at my university. But the assumption that having your degree makes people expect you to be "complete" would be far-fetched, at least in my particular area. That may or may not hold true for other fields.
So: enjoy the end of your PhD, and then just move on. Regret never helps anybody. I'd say many if not most people feel like you do. Also, expect this to continue forever... it is more a part of your character than "reality". The sooner you recognize and reconcile with that, the better.
Upvotes: 2 <issue_comment>username_12: As they say, no one ever says "I wish I'd spent more time in the office" on their deathbed. Instead, they say "I wish I'd spent more time with my family/hobbies/traveling/etc." Do these people also wish they'd earned more and learned more? Quite possibly, but does that really matter? IDK.
My answer really starts here, though:
=====================================
>
> But, looking back, I feel like I should have invested my free time in meeting new people, learning more about public speaking, should have taken leadership positions and should have invested my time in doing things that would have improved my CV. In short, doing things outside my comfort zone.
>
>
>
What's preventing you from continuing to learn on your own? There are plenty of places to learn things online or IRL. There are even places online as resources for IRL learning. One of them is MeetUp.com, where you can get together with people with your own interests and pursue that interest. This includes computer sciences, talking over coffee, book clubs, electronics, aviation, public speaking, dog walking, whatever. Your local public library might have information on local clubs, too. Some of them might even meet at the library.
You specifically mention public speaking, so find a local Toastmasters club. Maybe find a club that's into your hobbies and make presentations to the club about the hobby. Use the skills you currently have to build up or reinforce the skills you don't have.
Better yet, find a local non-profit and become a Board member. This will help with ***all*** kinds of things on your regret list: public speaking, leadership activities, meeting people, improving your CV, and much more. From grant writing to budgeting, running committees to debating, conflict resolution to managing projects, being a Board member will likely teach you how to do a little bit of everything and test what you think you've already learned.
Your real question was "How can I cope with the regret about my failure to do things that would have helped me academically and in industry?" Well, do the things now that you didn't do before. Then your regret will be severely undermined because you are actively doing what you regret not doing. It's hard to regret not doing something 10 years ago when you did do it 2 years ago. It might not fix your feelings immediately, but it will help in the future.
As another Answer mentioned, the hobbies you pursued during your PhD probably kept you from burnout and likely allowed you to stay sane to complete your degree. This is very good and not to be regretted. If you had "done all the things" while in your degree plan without time to relax and enjoy yourself, you might very well have quit and then you'd have a major regret. I'm not saying this hypothetical situation reduces your current regret, but take some solace in the fact that you don't regret not having finished the degree. Sometimes seeing a worse side of things can make your current burden feel a little lighter.
"But did you die? No? Well there's that." That might be a little harsh, but sometimes you have to be happy you're still "on the right side of the grass." :-)
Good luck, happy trails, and hopefully you found something useful in all the myriad of answers here!
Upvotes: 1 <issue_comment>username_13: Keep it simple
==============
Regret and shame are strong words. These feelings should be reserved for grave circumstances. For instance, betraying a loyal friend. (This is not a criticism of the way you feel—it is just a perspective that I believe is wise to adopt. So try to assume this perspective and you may find that the feelings evaporate.)
We make mistakes all the time. Repeating a mistake that you aren't aware of is nothing to be ashamed of, nor regret. Becoming aware is fantastic; you have an opportunity to learn and change. But you don't need to "fix" everything about yourself. If you know anyone who is perfect, you don't know them very well. But I'm sure you know some people who are wonderful, without being anywhere near perfect. So just because there's something about you that could be improved, it doesn't necessarily mean that changing it should be your priority.
Moreover (as many have already said) the way you have spent the last few years may not be a mistake at all. In any case, embrace it, and embrace the future. Make the best choices you can, but don't expect unreasonable things of yourself.
Upvotes: 1
|
2019/05/11
| 871
| 3,872
|
<issue_start>username_0: I am transferring to a new university. And I found a new supervisor at another university. I have told this to my current advisor. But I am still staying at the current lab as the transfer procedure would take like two or three months.
During the two months, I have been collaborating with my new supervisor on the paper and using his experimental facilities remotely. The paper is now almost finished and then we are going to publish. I will consult with my current and future supervisor. But before that, I want to ask if there is a regulation about this.<issue_comment>username_1: There don't seem to be *regulations*, as such, about authorship. There are, however, conventions that vary by field. There is also scientific honesty to consider. If your current supervisor contributed then s/he is likely someone eligible for authorship, otherwise it would be only by "courtesy" which is, itself, a problematic and possibly unethical practice.
However, the conventions in some fields are so strong that a student better not dare to break them.
Note that in some other fields, only the student would be an author, and neither supervisor would be. But in those fields the supervisor probably doesn't provide a physical laboratory and all that goes with it. But acknowledgements are given for contributions in any case.
Consult someone in your own field that you can trust to let you know about the conventions and the strength with which they are "enforced."
However, also note, that if it is in accord with the conventions of your field, then including your current advisor, even with minimal contribution might be a valid *political* act if it advances your career in some way. Many would, of course, think this a terrible thing to do, but their fields likely have different conventions.
I assume that in the fields in which "authorship" is given away for little work, readers of papers understand the situation pretty well and are less likely to be misled about who does the work than you might suppose. Thus, in a certain sense, you can also consider such things to be moot.
Upvotes: 3 [selected_answer]<issue_comment>username_2: Everyone who makes a sufficient contribution to your paper for authorship should be an author on your paper (and no one else).
**It doesn't matter what institution they are at nor does anything else: everyone that contributes at a level appropriate for authorship is an author.**
Standards for what contribution constitutes a sufficient contribution for authorship vary by field, and your plan to consult both your new and old advisor is a good one because part of their mentorship is in helping you make decisions about what governs authorship in your field.
Upvotes: 3 <issue_comment>username_3: >
> During the two months, I have been collaborating with my new supervisor on the paper and using his experimental facilities remotely. The paper is now almost finished and then we are going to publish.
>
>
>
Assuming (as the above statement implies) that your current supervisor had no involvement commensurate with authorship, then this sounds like a clear-cut case that you should **not** add your current supervisor as an author. Adding him/her as an author would be unethical, because:
* he/she did not do the work;
* naming him/her as an author would imply his/her endorsement of the paper, which he/she may not have even read (and with which he/she may, in reality, disagree strongly or consider unsound); and
* he/she would be expected to be able to answer detailed questions about the paper, its methodology, and any underlying data (although a paper specifies a "corresponding author", a reader is entitled to expect **any** of the "authors" to be able to comment/present/lecture on it, and an author unable to do so would, quite rightly, suffer serious professional humiliation).
Upvotes: 1
|
2019/05/12
| 2,823
| 12,062
|
<issue_start>username_0: I'm a total layman, but sometimes I have really random specific questions, like what does science say about the transfer of learning, or what is the distribution of different sexual fetishes in the population, or what do we know about how English warbow training evolved, etc.
Honestly, I'm a total noob, sometimes I find interesting things in google scholar by using it kinda like google but usually, I don't. If I'm very lucky a science journalist has written something on it, but often they don't and even when they do they can totally mislead, especially the popular ones.
For posterity, the answer I have found is that intro textbooks are the best in the very likely event there is no good popular science write-up.
Though I was hoping for a faster way to find out what the consensus view is, especially for topics where I want to know what it is without having to wade into anything else or learn the how.<issue_comment>username_1: Good question! If you are a **total** layman, **popular scientific magazines or blogs** like *Scientific American* from the *Nature Publishing Group* or [scienceblogs.com](http://scienceblogs.com) are a first source for spotting which kinds of views are represented in the community. Articles therein are mostly written by current or former academics and scientists with an educational background in a scientific branch and contacts with universities and researchers, and they skim the most important articles in the primary literature or visit conferences in their branch. Don't rely on single articles; **always compare several sources**. The background needed to understand such articles is often lower, since these publishers target a broader audience.
From there you could dive deeper into the **scientific primary and secondary literature** via Google Scholar/Books by searching for and reading **review articles** that summarize the longer or recent past of a distinct scientific field. In the best case such *review articles* are written by several authors. In scientific fields like, for example, dark matter physics, you will not be able as a layman to get a picture of roughly what percentage believe in the current paradigm or an alternative theory. Searching on Google Scholar with `intitle:"name of theory/paradigm"` might give you some hint of how many researchers work on or favor alternative theories and open questions.
(Hand)**books** written by several leading scientists in a field are in general a reliable source, though they often do not cover the most recent developments in a distinct field. **Textbooks** will often require a solid background in distinct underlying theories (is there consensus on these?), even for academic graduates and interdisciplinary researchers in a field. I think a total layman cannot understand them, and textbooks are often written by experts on a distinct view/theory, not on competing theories; they are not the main venue for discussing alternative theories — that role falls to journals.
Concerning the life sciences, published **meta-studies** that analyse and evaluate the data of many smaller, previously published studies on a distinct scientific question, for example the "dying of bees", are a good first source. So here we have to distinguish between views/theories and data. But if meta-studies show that the amount of data is too low or contradicts other data, then there can be no consensus concerning a distinct question.
If all this doesn't help you, [**skeptics.stackexchange.com**](https://skeptics.stackexchange.com/) is a very good site to ask which theory/cause is currently favored by the majority of scientists or what the data favor. But like [Scholarpedia](http://www.scholarpedia.org/article/Main_Page) and [Wikipedia](https://en.wikipedia.org/wiki/Main_Page), you cannot be sure the answers or articles are written by scientists with an educational background in the related field. As a layman, though, I think it is rather important to know whether the majority agrees, whether there are ongoing discussions, whether the scientific community in a field is split, what the current paradigm is, and how much research is ongoing on alternative theories and open questions. Popular scientific magazines/blogs normally cover such questions. If you are interested in more details, asking on a related scientific site on Stack Exchange is another option; you will often get a discussion/answer from several scientists or students in a field.
---
I agree with the comments and want to explain that one major problem with finding a "consensus view" in the scientific literature is that it is basically the **job of scientists to try to falsify the current paradigm/theory/consensus, especially when there exists a strong consensus but the theory is incomplete or doesn't explain everything sufficiently**. But one also has to **distinguish here between theories and facts/data**. If you ask, for instance, whether dark matter can be the only add-on to explain the rotation velocity of stars in galaxies, then most astrophysicists would currently favor this explanation/view, though there are also astrophysicists who work on alternative theories/explanations. Those might also get attention (more than they deserve) in popular scientific magazines, and this is good, from my point of view, to foster the falsification of theories. You could also ask whether there is a consensus on the general relativity theory, and likely most astrophysicists would say it is currently the best theory we have before a unification of all physical theories. There would probably be a bigger consensus that space-time is a physical entity than that general relativity is the final theory because, again, there are few theories in physics that carry a "final" tag. The theory of evolution in biology may not be perfect or cover everything, especially genes and epigenetics, but mutation and selection is the paradigm I would say 99% of biologists agree with. For climate science, to my knowledge, most in the community (>90%) agree that the data in conjunction with theoretical simulations point to human-made global warming. I show these examples to explain to you that **there might be a strong consensus on data rather than on a theory concerning a distinct question**. In popular scientific magazines/blogs this difference can be more blurred than in scientific journals or books.
Minority views are and have to be covered in popular science, maybe even more strongly than their alternative theory/view is really represented in the community. So **if you really want to know whether the majority believes in one theory, whether there is a consensus, always check several of the above sources like Wikipedia, popular scientific magazines and blogs!** You can also run a poll among scientists on consensus on data and/or theory, and **Stack Exchange** is maybe the best place for this currently, if you choose **sites frequented by a high density of scientists, with a high expert/layman ratio, like mathoverflow.se or cstheory.se**; Stack Exchange sites like physics.se are too diluted by laymen for a poll aimed at a mainly scientific audience.
Upvotes: 5 <issue_comment>username_2: First off
>
> sometimes I find interesting things in google scholar by using it kinda like google but usually, I don't.
>
>
>
you should not be doing this, because it will mostly result in you not finding much, as you've experienced. This is not because you're a layman, **but rather because you are not asking a very specific question**. Scientists in general do not probe questions which are as broad as
>
> what is the distribution of different sexual fetishes in the
> population
>
>
>
This question is extremely broad and complicated because it talks about "different sexual fetishes" in the entire population. It needs to be more specific after taking into account a lot of different factors. Let's take for example fetish A, and the country of England. We can then start to formulate a very specific question.
>
> How has the portrayal of fetish A through long-duration televised media affected its perception in the millennial Indian population in England?
>
>
>
or a bit broader
>
> Exploring the positive and negative sentiments millennial Indians hold towards fetish A across England
>
>
>
I do not think this is a question you would ask.
You need to take a holistic approach towards exploring science. In no particular order,
* One of the best ways you can learn about science is by connecting with scientists and science communicators via twitter.
* Use Wikipedia as much as you can
* Become a part of citizen science projects
* Subscribe to science channels on youtube (my favourite one is Kurzgesagt)
* Subscribe to print publications like the Atlantic, which has an extremely good science section. I would suggest New Yorker, but personally I don't find it to be very consistent.
* As said before, ask your question on Skeptics.SE
* If you do keep on using google scholar, use the filter panel on the left to look for only Reviews and filter for articles published in the last five years.
Scientists will almost always have differing viewpoints regarding the existence, reasons behind, and median income relationship of fetish A. So when you do read a review, be sure to pay close attention to the authors. If you have read a review by the same author before, skip it. We scientists are humans, and we tend to push our own views in the reviews we write. A completely impartial review is hard to come by.
Upvotes: 3 <issue_comment>username_3: I like this question very much, and was curious to see the answers and hoped I might be able to apply some of them.
On reflection I realized that, unfortunately, this question cannot be sufficiently qualified such that it becomes answerable.
Academia and study are fundamentally an imprecise process of guesswork. Our brains are not 100.0% perfect at derivation, inference and system modeling. The majority of our most significant and useful discoveries were accidental and/or the result of arbitrarily being in the right place at the right time.
Not only are we using torches with all-but-flat batteries in them, we're really touchy about the correctness of the mental models we've worked on thus far. (Especially when research trusted for years abruptly gets thrown out...)
We cope with this by banding together and forming superstitious cliques of agreement about which is The Correct Answer (or, in some cases, The Correct Opinion) to a particular problem or domain. Those with the best marketing skills stay in the race as long as the perspectives and models they propose remain just beyond our abilities to fully reason about them.
I may well be wrong, but I ***think*** that you're not really asking for information, but are instead trying to find the social group you resonate the most with.
I don't fit in either. It's okay.
Upvotes: 0 <issue_comment>username_4: For a layman, listening to podcasts could be a nice shortcut. But not just any podcast. There are some podcasts out there in which scientists discuss their ideas and their challenges. I personally found it a great way to understand the consensus of their field.
Of course, since it's a conversational medium, there can't be much math/dirty work. Yet, it's not free of it either, meaning that you should have at least some prerequisite knowledge of the concepts to follow.
The main problem, maybe, is that not all fields have such podcasts. Additionally, I also find it hard to find the correct podcast. I list the ones I know, hoping that the recommendation system of the podcast app you use guides you to some show that you might be into.
* [Too lazy to read the paper](https://toolazy.buzzsprout.com/) by <NAME>.
* [Stimulating Brains](http://stimulatingbrains.org/) by <NAME>.
* [The joy of x](https://castbox.fm/va/2565854) by <NAME>.
Upvotes: 2 <issue_comment>username_5: If only there were a [network of Q&A websites](https://stackexchange.com/sites) where you could post such queries and get answers from experts, with a voting mechanism to establish consensus (or lack thereof). ; )
Upvotes: -1
|
2019/05/12
| 4,280
| 17,795
|
<issue_start>username_0: Many academic papers, particularly in mathematics and similar fields, use the phrase "it is easy to see that..." (e.g. in a mathematical proof). I never understood why this sentence is used. Such a sentence is inaccurate at best, since it is not easy for *everyone* to see; maybe it is easy for the author or some of the readers, but there are certainly readers to whom it is not easy.
Authors often try to shorten their paper as much as possible, so it is not clear why they would lengthen their paper by an inaccurate sentence.
Is there any good reason for an author to use this phrase?<issue_comment>username_1: Mathematical papers, especially, and many others, are not written for the complete novice with no understanding of the field. One tries to tailor the explanation of a new concept to the general level of understanding of the audience and tries to avoid being overly pedantic, giving every detail and tracing every argument back to Euclid.
College lecturers often use such phrases, knowing the level that their students should have attained and, thus, permitting the lecture to flow more smoothly and be less boring.
But writers do the same thing, envisioning the reader of the paper, or perhaps just the reviewers. If I say such a thing and the reviewer calls me on it, then I know I have to say more. But I trust that the reviewer can validly stand in place of the audience and will be able to *easily* fill such gaps.
When such a phrase is used (we can conclude X) it implies that X flows from the previous statement(s) and isn't just an unsupported statement inserted into the flow. It indicates a hopefully simple, but unstated, argument.
The alternative of including the complete argument yields longer, more pedantic, more boring papers.
The other alternative of starting every sentence with "Therefore,..." is also stilted and, eventually, boring. Some variation of phrasing makes the reading more pleasant. Writers, in general, like to have a repertoire of, more or less, equivalent phrases to help the flow.
On the other hand, it has happened that the statement is used incorrectly when it takes a lot of argument and deep insight to go from point A to point B. Sometimes the author just doesn't notice the width of the gap. But, for the most part, it seems to work out.
Upvotes: 6 <issue_comment>username_2: The author most often writes this to justify not including a proof that you can do yourself in a reasonable time. This assumes that you have the basic knowledge required in your field to understand the average paper.
If you think you're missing some parts and cannot see why it is easy, you usually can look up the "easy to see" (after eight pages of calculations) proofs in other manuscripts, textbooks, or on the internet.
If they included these proofs, the reviewers and editors would ask them why they prove something that is known from textbooks or "easy" to see. A paper should contribute something new and should not (and cannot) prove every detail that it builds upon.
For more stuff that is not well-known or more complicated, the authors should provide a citation and write "the derivation of the formula and the proof can be found in [42]".
But when you cite a text book, then people can ask why do you cite text book A and not text book B? The correct citation would be the original paper, but it will usually be much harder to understand the topic in the original paper than in a text book.
And there is, of course, some hand-waving when the author knows something to be true but would need more time to prove it himself and does not consider it to be worth the effort.
Upvotes: 3 <issue_comment>username_3: One common way that I interpret "It is easy to see X" is as an ever so slight shortening of "it easily follows that X" (which I prefer). This itself is a shortened version of "The preceding assertion implies X", or perhaps a pluralized version such as "The preceding assertions (or definitions, or assertions and definitions, or...) imply X".
There is information contained here, namely a logical implication, that would be lost if one simply wrote "X".
Upvotes: 4 <issue_comment>username_4: Papers are not written with exhaustive detail. There is always a balance between explicitly stating your reasoning and sticking to the topic at hand. Sometimes there are points that don't take great insight to understand, and would be easy (though perhaps lengthy) for the reader to derive on their own. This is the first function of "it is easy to see".
Another aspect is managing tone and audience expectation. Papers are written for an expert audience with the premise of pushing the bleeding edge of human knowledge. If the author of a paper states something obvious or well known, the reader may be confused: If it is emphasized in a modern paper, could it be that the author is not just a fool restating things everybody knows, but actually means to state something different? The competent reader could be confused by what the author meant to accomplish in stating something already well known or "easy to see". So the author preempts this, by acknowledging the point as something well-known and not novel, but mentioned for clarity and/or as a reminder, and not something to be taken as a novel or interesting claim. This is the second function of "it is easy to see".
Some people like to criticize statements like "it can be shown" ("then why don't you show it?" - because it would be a distracting digression) or "it's obvious" ("then why say it?" - to show that the author is aware of it). They assume it is an exercise in arrogance on the part of the author. However, written communication is not just the text. There is also subtext and context. Well-structured text has a central point and a coherent tone. Going on every possible tangent, digressing into topics of wildly different level, does not necessarily serve these qualities. As such, phrases like "it is easy to see" can serve an important point in enabling effective communication of novel findings to an advanced, technical audience that has less time available to read than there are papers of interest.
Upvotes: 3 <issue_comment>username_5: Several times during seminars I heard the following exchange, which sounds like a joke but actually isn't:
*Audience member:* Why is X true?
*Speaker:* Oh, it's obvious.
*Audience member:* OK, thanks! [sits down satisfied]
---
Especially in mathematics, it is often helpful to let the Reader know what level of difficulty and/or complexity each step is. The approach you (or at least I) would take differs significantly depending on this.
If a step is supposed to be easy and it's not quite my field, I'd take note that this sort of transition is standard in these parts and move on. Conversely, if the step is supposed to be easy and it is my field and it doesn't seem easy at all, then it's a strong indication that either the author is wrong (rare) or that my understanding is insufficient and I should think about this transition *until it seems easy* (not just until I find any way to justify it).
If a step is supposed to be moderate in difficulty, I might or might not work out the details myself as an exercise, but I will also be aware that some amount of work goes into making it possible - helpful when evaluating if an argument is plausible and also in identifying where the content of the paper really is (especially in papers that are heavy on definitions, it's often a nontrivial task to figure out which transitions are easy but unfamiliar and which are responsible for the actual progress).
Finally, if a step is said to be difficult then there obviously should be a reference or a proof. If it's a proof, then more likely than not this is the point where the progress is being made in the paper, so if I'm reading the paper this is the part I would study in the most detail. If it's a reference, I would make a mental note that it's a potentially strong tool to keep in my arsenal - also, I would know better than to attempt to reproduce the result myself.
Note that it's not always all that easy to determine which is which without the Author explicitly making a judgement. The short phrase *"By Thm. C in [42] we have X"* could expand into either of *"It is easy to see that X (see Thm C in [42] for details)"* and *"Because of the deep theorem of Smith (Thm C in [42]) we have X"*.
---
Having said all of the above, I want to add that personally, I dislike the phrase *"It is easy to see"*. I understand the sentiment, but if the paper is read by anyone other than the experts in the field (maybe undergrads or experts in another field) then chances are it's not going to be easy to see to all the Readers, and the Readers who don't find it easy might feel bad about it in one way or another. If I'm already taking the time to explain myself for not explaining a transition I try to give some more details: either a super-short sketch of a proof, or a phrase like *"It follows by an application of standard techniques that ..."* or *"A simple but mundane computation shows that..."*, etc.
Upvotes: 6 <issue_comment>username_6: My personal opinion is that "it is easy to see" should really mean that it is easy for a reader who is the target audience to see, and can be used when that part of the proof is not an important piece, such as if it is a routine or tedious but uninteresting calculation. For example, it is better to write "it is easy to see that algorithm A runs in O(n^3) time" if it is just a bunch of for-loops that obviously take O(n^3) time, than to give a long and tedious proof just for the sake of formal rigour.
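To make that concrete, here is a hypothetical "algorithm A" (the function and its name are my own illustration, not from any particular paper) whose O(n^3) running time is visible at a glance from its three nested loops:

```python
def algorithm_a(n):
    """Hypothetical 'algorithm A': three nested loops over n items.

    Each loop runs n times and the body does constant-time work, so the
    body executes n * n * n times. The O(n^3) bound is "easy to see"
    without a formal proof.
    """
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                count += 1  # constant-time body
    return count
```

For example, `algorithm_a(4)` performs 4 · 4 · 4 = 64 constant-time steps; spelling this out as a formal proof would add length without insight, which is exactly the situation where "it is easy to see" earns its keep.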
That said, it does happen that sometimes what one thinks is "easy to see" is not so easy for others to see, or perhaps even oneself after a few months of not looking at it. Hence having reviewers helps.
Upvotes: 3 <issue_comment>username_7: There are no good reasons for phrases like that one. If something is "easy to see" or is "obvious", then there is no point in emphasizing such a truism.
However, if no additional knowledge is needed and the following steps are quite straightforward, then one can use the advice from the *Mathematical writing* article in *The Princeton companion to applied mathematics*:
>
> The question of how much detail to give is related to the question of
> how formal to be, but it is not the same question. It is true that
> there is a tendency in informal mathematical writing to leave out
> details, but with even the most formal writing a decision has to be
> made about how much detail to give; it is just that in formal writing
> one probably wants to signal more carefully when details have been
> left out. This can be done in various ways. One can use expressions
> such as “It is an easy exercise to check that...,” or “The second case
> is similar,” which basically say to the reader, “I have decided not to
> spell out this part of the argument.” One can also give small hints,
> such as “By compactness,” or “An obvious inductive argument now
> shows that...,” or “Interchanging the order of summation and
> simplifying, we obtain....”
>
>
> If you do decide to leave out detail, it
> is a good idea to signal to the reader how difficult it would be to
> put that detail in. A mistake that some writers make is to give
> references to other papers for arguments that can easily be worked
> out by the reader, without saying that the particular result that is
> needed is easy. This is straightforwardly misleading; it suggests
> that the best thing to do is to go and look up the other paper when in
> fact the best thing to do is to work out the argument for oneself.
>
>
>
Also from *How to read and understand a paper* in the same book:
>
> In mathematical writing certain standard phrases are used that have
> particular meanings. “It follows that” or **“it is easy to see that”
> mean that the next statement can be proved without using any new ideas**
> and that giving the details would clutter the text. The detail may,
> however, be tedious. The shorter “hence,” “therefore,” or “so” imply a
> more straightforward conclusion. “It can be shown that” again implies
> that details are not felt to be worth including but is noncommittal
> about the difficulty of the proof.
>
>
>
Upvotes: 4 <issue_comment>username_8: >
> "You'll notice that ...."
>
>
>
Addressing the reader directly reads smoother, commands attention, and prevents overwriting.
That said, you'll want to be clear who your audience is at the outset of the publication. State explicitly who should be reading the text and the assumed knowledge level of the subject matter.
Upvotes: 1 <issue_comment>username_9: "It is easy to see" should be used sparingly. If one can say ***how*** it is easy to see why the statement is true, in a similar number of words, then this strategy is preferred. Alternative, more specific expressions than "it is easy to see" include
* After routine algebra, ...
* By theorem X, ...
* By definition of Y, ...
Note the phrase "It is easy to see that" is actually longer than all of the above.
Sometimes the reason why it is easy to see something cannot be explained in so few words. In that case, if it is truly easy to see for the target audience, using this phrase is useful to signal to the reader that if they don't follow, it's probably some trivial thing that they've messed up in their head, rather than a need to think deeply about the statement. But even then, the phrase should only be used when it can't be made more specific.
Upvotes: 3 <issue_comment>username_10: Yes, though it is sometimes abused.
It's an assertion that if you think something is true you've probably got it right. It also sets the tone of what you're about to have to think through. It's common in other walks of life the other way round: if you have something that you are asserting that is non-trivial and is going to take some working though, you give your audience a warning.
In maths, perhaps arrogantly (though unfortunately in my experience accurately), it is assumed that the statements in papers, talks, books, etc. are involved. Thus instead of caveating each sentence with "this is hard", the few that are easy are caveated instead (simply as a way to reduce verbosity).
The problem comes when the caveat is incorrect or is only correct after something highly non-trivial has become internalised.
In this case it alienates those readers. As I imagine you can guess, it not only hampers progress, it can also be seen as a bit of a middle finger. "If you don't get this you must not be cut out to be doing this sort of thing". Perhaps in some cases this is how it's meant, or it is just laziness. However, it's not always easy to know what will help your readers most, and "nothing deep happens here" is far more likely.
I don't like using "trivial" or "clear" because of the potential to be misread, but I have often wanted to express that sentiment.
Upvotes: 0 <issue_comment>username_11: Yes. It specifies a particular level of difficulty (not too easy and not too hard), thus managing the reader's expectations and directing their focus.
The phrase will not be used for completely trivial deductions that can be done in half a second. If the previous sentence concluded that 2x=2, nobody would write "it is therefore easy to see that x=1". It's easy to see that it's easy to see, so there's no value in pointing out that it's easy to see.
Likewise, if the deduction is difficult and requires hours to figure out, nobody would write "it's easy to see", because it's not, and saying it is will confuse the reader.
It's the middle ground where the deduction can take a few seconds or maybe a few minutes, where the phrase is useful. A priori, the reader does not know the difficulty of the deduction - is this a half-second thing and I'm too dense to figure it out? Is this a difficult deduction and the author is remiss for neglecting a proof, or maybe he's just making stuff up?
The statement "It's easy to see" signals that we're in the middle ground - No, you're not stupid for failing to recognize this immediately (if it were that easy, I wouldn't say anything). But yes, I'm confident that if you spend a minute you'll figure it out, so there's no need to encumber the paper with all the details.
So much power in such a simple phrase.
Upvotes: 3 <issue_comment>username_12: Yes, there are good reasons for the *general meaning of phrases like this*, but "*It is easy to see that...*" is a very poor choice of phrase for this meaning. Others have already suggested better phrases.
As username_1 expresses, advanced papers are not expected to reference, let alone prove, every conclusion used. The problem here is the passive voice, and, more importantly:
*It* is easy to see. *What* is easy to see?
I see two general phrase choices:
---------------------------------
1. **Longer**. If you want sentence flow to present a long thought, "From Y, we observe that with/from [few steps or concepts], X.", which carries the meaning more explicitly.
2. **Shorter**. What is being said is, "Y. Y => X." (Y being the conclusion the reader is expected to know, from the audience the text is written for.)
This way, you are at least specifying the subject of your sentence.
Now, if Y is something you learn in high school, you would look silly specifying it; you would look as if you're proud that you can still do high school math. You write papers for your peers.
It's a safe bet that one or more of your peers won't agree Y is obvious, but perhaps will say nothing in review to not "look dumb", when in fact s/he is your peer, just working in a different field.
Widening the audience slightly from that seems a reasonable place to be.
Upvotes: 0
|
2019/05/12
| 691
| 2,804
|
<issue_start>username_0: A student was subject to a clear Family Educational Rights and Privacy Act (FERPA) violation by a professor.
Does the student have any rights against the professor?
How should they proceed when they are the victim of a FERPA violation?<issue_comment>username_1: Such actions are a violation of ethics and maybe of law in the US. But it is the student that must seek redress. The university should have an office in which to discuss such things and to which a student can make a complaint. Encourage the student to explore such avenues. The individual should think about what would be fair redress. I would probably expect a public apology, though have doubts about whether it could be arranged.
Department heads and Deans can also be informed, but such things should be done in person, not by email.
Other, more public and radical, options exist, but it is probably best to explore the official ones first and to be aware of the potential negative blow-back consequences of making public claims even when it is warranted.
Such behavior doesn't belong in academia, of course.
Upvotes: 1 <issue_comment>username_2: In the US, you have the right to [file a FERPA complaint](https://www2.ed.gov/policy/gen/guid/fpco/ferpa/students.html). The instructions are posted on the Government website. If the Government decides that there was a FERPA violation, there are two possible outcomes:
* The university agrees to bring itself into compliance, or
* The university does not bring itself into compliance, and therefore loses its eligibility for federal funding.
The latter of these is an existential threat to the university; therefore, the first one is almost certain to happen. However, the university could discipline the professor as a way to demonstrate that they take being in compliance seriously.
Beyond this, I think the key question for you is **what is your goal?**
* Students [do not have the right to sue](http://www.mondaq.com/unitedstates/x/18289/Human+Resources/Students+Do+Not+Have+the+Right+to+Sue+for+Violations+of+FERPA) over FERPA violations, so a financial or other settlement for the student is highly unlikely.
* If you want some specific action (e.g., being allowed to complete your degree without interacting with this professor), you should request this through the usual channels (start with the department chair, then the dean). You could mention that you believe this is a FERPA violation, but I would avoid making threats.
* If you're just angry and want "justice," you could also complain to a dean, department chair, or even ombudsman, or could submit the FERPA complaint. It's hard for students to prevail against professors, however, particularly if this is the first complaint.
Upvotes: 3
|
2019/05/13
| 1,649
| 7,190
|
<issue_start>username_0: I have occasionally noticed that an author chooses to share publicly referee reports from a journal, usually because they are feeling aggrieved about a rejection. I also know that now and then people do share reports with (a few) close colleagues or friends.
What is generally considered good/bad conduct regarding sharing reports? I would hesitate to share my reports widely, mainly because I associate that behavior with cranks (as it seems to be cranks who publish them) and would worry that it might come across as sour grapes and unprofessional.
Are there any clearer guidelines or reasons why we should keep the reports private? Is it considered a breach of confidence of the anonymous referee? Speaking generally, the reports are obviously a vital part of our scientific process, so it is strange that they so rarely see the light of day. Are there arguments within e.g. the open science movement to make all reviews publicly available?<issue_comment>username_1: It is not common in my field as well to publish the referee reports. I don't think it is a breach of confidence of the anonymous referee; (s)he remains anonymous anyhow. My guess is that scientists are human too, and when they receive criticism of their work they want to get it over with as quick as possible and then forget about it...
There are arguments for publishing *all* referee reports: An article does not become "scientific" because it is written by someone with a PhD or someone employed by a university, but because we give enough information that anyone can reconstruct how we reached our conclusions. Referee reports are part of the process, so being open about them would fit.
However, I would consider it unprofessional to *selectively* share referee reports. That decreases the openness of the process, as now the reader starts to wonder why this report is shared and the others are kept secret.
I have refereed once for a journal ([BMC Medical Research Methodology](https://bmcmedresmethodol.biomedcentral.com/)) that had a policy of publishing the non-anonymous review reports with the article. I actually liked that process: there was interaction with the authors to make the article better. What is important for this question, is that that interaction is public so the reader can see the development of the article.
Upvotes: 3 <issue_comment>username_2: >
> What is generally considered good/bad conduct regarding sharing reports?
>
>
>
Journals generally dictate their own policies for sharing referee reports. In many cases, sharing the report with anyone other than the authors can be considered a breach of trust. Refer to the confidentiality clause located at the bottom of the email containing the referee comments. If one does exist, sharing the report publicly will put you in the wrong, and the journal may decline to review future articles you author.
Many journals may not have a confidentiality clause, but still it is your responsibility to check the For Authors section in the journal website before you share the reports publicly.
Therefore, **always make sure you are in the clear before sharing reports publicly**.
Having said this, it is worth outlining the three basic types of peer review:
* Open peer review: You know who is reviewing and they know you too. (Elife)
* Blind peer review: You don't know who is reviewing, but they know you. (Nature Comm.)
* Double blind peer review: You don't know the reviewers and vice versa (I cannot at this moment think of a journal)
I do not know of any Hitchhiker's guide to sharing peer review reports. But, if you are sharing the reports publicly, then in order of decreasing importance:
* If this information exists, remove reviewer names, affiliations and any information that would make the reviewer identifiable.
* Share the full report and quote it verbatim, this will put you in the clear if some misunderstandings arise later on.
* Include a confidentiality clause when sharing the report.
* Our world is very small; it is very likely that someone within your network is your reviewer. So, when sharing the report, do so from the standpoint of an individual who is receptive to the comments.
Be receptive to comments that are within reason, not the ones which say the author should travel to the other side of the universe, gather some pixie dust, de-constitute and reconstitute it and finally compare it to their pixie dust in three technical and 100 biological replicates.
>
> Are there any clearer guidelines or reasons why we should keep the reports private?
>
>
>
Look above regarding confidentiality clauses and journal statements for authors.
>
> Is it considered a breach of confidence of the anonymous referee?
>
>
>
If sharing the report is not a breach of trust from the journal's standing and information in the report does not clearly identify the reviewer, then no I do not think that is a breach of confidence. But, if I were the reviewer who asked you to gather up some pixie dust, I might get offended and consider it a breach of confidence.
>
> Speaking generally, the reports are obviously a vital part of our scientific process, so it is strange that they so rarely see the light of day. Are there arguments within e.g. the open science movement to make all reviews publicly available?
>
>
>
A warranted comment will be made regardless of how unjustified the authors may think it is. For example, if you missed your commas or used the wrong statistics, that will be pointed out, or can be used as a cause for rejection, in both closed and open reviews. You will get rejected because you don't come across as professional.
Open reviews protect authors from unwarranted comments such as gathering up pixie dust or doing 100 replicates of the same experiment.
At the same time, open reviews improve the article when peers from diverse fields provide their inputs, this leads to your growth as a scientist.
Also, I see a possibility for open review to hurt the peer-review process. When highly renowned authors submit articles for reviews containing very high stakes results, it makes a case for reviews to be closed. In an open review when the author has no peer of equal standing, reviewers may show restraint towards the author to protect their own interests, as the author may be a person who is reviewing their grant applications.
**This may be an unpopular view, but academia just like life is an unfair place.**
Upvotes: 1 <issue_comment>username_3: Your question is how should you behave. Not should reviewing be open.
I recommend not to share reviews especially adverse ones. It looks like sour grapes and it violates either explicit or implicit confidentiality.
Plus it just makes you look weak. A better response is simply to move papers to other journals, and quickly, as soon as you see a holdup situation. Don't get involved in long wrangling.
Try to write the papers as matter of fact as possible. Clear writing. Follow notice to authors instructions. Don't try to hide weakness or claim something sexy that isn't.
Upvotes: 1
|
2019/05/13
| 404
| 1,543
|
<issue_start>username_0: I want to follow a few journals and conferences. Is it possible to create alerts for such specific items in Google Scholar? Or is there another system available for this kind of task?<issue_comment>username_1: Many conferences and journals have RSS feeds that you can aggregate with an [RSS reader](https://en.wikipedia.org/wiki/News_aggregator). Usually they are linked on the conference / journal website (though sometimes you need to search for "rss" in the page source because it is hidden somewhere).
Example: the [arXiv math.AG](https://arxiv.org/list/math.AG/recent) site doesn't mention RSS anywhere, but the page source [does](http://export.arxiv.org/rss/math.AG).
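For instance, a feed's items can be pulled out with nothing but the Python standard library. A minimal sketch (the inline feed below is a stand-in so the example is self-contained; a real run would fetch the XML with `urllib.request` from a feed URL such as the arXiv one above):

```python
# Minimal sketch: extract item titles and links from an RSS 2.0 feed
# using only the Python standard library. In practice you would fetch
# the feed first, e.g.:
#   xml_text = urllib.request.urlopen(feed_url).read()
import xml.etree.ElementTree as ET

xml_text = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Journal</title>
    <item><title>Paper A</title><link>https://example.org/a</link></item>
    <item><title>Paper B</title><link>https://example.org/b</link></item>
  </channel>
</rss>"""

def rss_items(xml_text):
    """Return (title, link) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(xml_text)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in rss_items(xml_text):
    print(f"{title}: {link}")
```

Note that some feeds (arXiv's included) use RSS 1.0/RDF with XML namespaces, in which case the element lookups need namespace-qualified tags; a dedicated reader handles that for you.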
Upvotes: 1 <issue_comment>username_2: You can turn any Google Scholar search into a search alert that will send you periodic email updates. So if you have a search for a particular journal, something like:
>
> source:Biogeosciences
>
>
>
You can enter that into the form for creating a new alert at <https://scholar.google.com/scholar_alerts?view_op=create_alert_options&alert_params=hl%3Den&hl=en>
For a journal or conference name with spaces you should enclose it in quotes like:
>
> source:"ACS National Meeting"
>
>
>
You may not get the results you expect if the journal name is abbreviated or truncated in some way (or if it is not released in a way that is crawled by Google Scholar), but once you find the most common way that the name shows up in Google Scholar results, you should be good to go in most cases.
Upvotes: 0
|
2019/05/13
| 587
| 2,582
|
<issue_start>username_0: I've graduated from a Chemical Engineering department and I am planning to apply for a Master's degree. I wasn't involved in research that produced a publication when I was an undergrad. Is there any other option for academic writing that I can use to strengthen my application?<issue_comment>username_1: Absolutely. Peer-reviewed publications are the most valuable, but other writing can show your abilities and interests. For unpublished academic work, I recommend posting it at a pre-print server. In Chemical Engineering, this looks like a good one to reach your audience: <https://engrxiv.org/> . It's becoming more popular lately: <https://cen.acs.org/acs-news/publishing/Chemistry-preprints-pick-steam/97/i3>
You can also write about the field, or science, or science and society. The best pieces go to leading outlets like Scientific American, The Conversation, or as an opinion-editorial to major newspapers. That's hard to do, so you can also target smaller services. At the easier level, you can publish yourself, for example at Medium.com, Twitter.com, or Reddit.com. However, you're unlikely to get as many readers as using an existing news service.
Upvotes: 0 <issue_comment>username_2: When you apply to a Ph.D. or research-based Master's degree program, we often look to see how much prior experience you have with the academic research process. This includes successfully working on research alongside a faculty advisor, and when possible producing the oral conference presentations and co-authoring the scholarship (publications) that document the findings.
When faculty evaluate graduate school applicants, we are excited when we see someone who already has experience with the research process that is fundamental to graduate school and when our peers (applicant's former advisors) can recommend her based on her performance. Producing publications is an incredibly important part of the research process, so it is obviously best when someone has seen work through to that finish line. We also understand that this sometimes just doesn't happen for undergraduate researchers for a variety of reasons.
While it is also helpful for us to simply see examples of your writing, this is not really why we value the publications of an applicant. Don't be too worried if you have *not* yet worked on research leading to publications; you still may be considered a high-potential applicant. Be sure to impress with clear and cogent writing in your Statement of Purpose, on your resume, on your linked webpage/blog, etc. Good luck!
Upvotes: 2
|
2019/05/13
| 1,041
| 4,058
|
<issue_start>username_0: Unless I am somehow mistaken, if you have defended a PhD then you have a *doctorate* degree, which means you are a "Doctor". That remains the case even if you leave academia and go work in industry, travel the world, or whatever.
Does the same thing apply for professorship? In other words, is it so that once you receive professorship, you are indeed always a professor? Or is it connected to your employment as a professor?
And of course there is the middle ground, where you might have assistant or associate professorship and choose to leave academia? Do you then revert back to being a "<NAME>, PhD"? Or remain to be "<NAME>, Assoc Prof"?
As a follow up question: Does this vary from country to country as well, or is there a generally accepted rule of thumb?<issue_comment>username_1: In the United States, titles (such as Doctor and Professor) are not protected and anyone can use them. In my opinion, referring to yourself as "<NAME>, Assoc Prof" if you are not actually employed (anymore) as an associate professor would be a bit odd -- but it's not illegal.
In most other (not US or Canada) countries, the title "Professor" is reserved for full professors. An assistant or associate professor does not get to call themselves professor. Somebody who retires from a Professor position is often referred to as Professor [Emeritus](https://en.wikipedia.org/wiki/Emeritus), and so (in some sense) gets to keep their title.
Upvotes: 1 <issue_comment>username_2: In the US there are no legal rules. I think that few people would use *Associate Professor* as a title, but rather as a description of the post they currently hold. They would probably use Professor as a title, however. It is a *descriptive title* even before elevation to Full Professor. Students normally call them "prof" in many places. But the "title" is descriptive, not formal. It is a term of respect when used in this way.
However, if a person leaves academia for some other profession, it is unlikely that they would use professor and variations as a title. They might describe themselves as "former professor".
After retirement you might be "professor emeritus", again a description. On your tombstone you might be "Dr. Stone, Professor of English", the final description.
Other places are more formal about such things, but in the US, not so much.
---
I am actually "<NAME>", which I interpret to mean that I didn't unforgivably disgrace myself before I retired.
Upvotes: 2 <issue_comment>username_3: In The Netherlands, the title professor (prof.) is protected and is only allowed to be used by a hoogleraar (US equivalent: full professor). Once retired, they are allowed to use the title emeritus professor (em. prof. or prof. em.). This is not allowed if they stop before their retirement.
So:
>
> Does the same thing apply for professorship? In other words, is it so that once you receive professorship, you are indeed always a professor? Or is it connected to your employment as a professor?
>
>
>
Only when you are employed as a full professor or retired.
Read more at <https://en.wikipedia.org/wiki/Academic_ranks_in_the_Netherlands#Full_professors>
Upvotes: 2 <issue_comment>username_4: In the UK there are many "visiting professors" at a university who have never actually held a professorial post in that or any other university. There is nothing in English law that would forbid them to use the title "Professor" and some of them do use it and are given the title in official documents, for example, when they give evidence to a parliamentary committee.
But I have never heard of anyone who might once have been an assistant professor, but no higher, use the title later on. Frankly, it is just too junior a title to boast about.
England being England, however, beware of laughing at academics who boast that they are or were "Students" of Christchurch, Oxford. In that college, in that university 'student' has a technical meaning much more glamorous that one might think.
Upvotes: 1
|
2019/05/13
| 570
| 2,251
|
<issue_start>username_0: As scientists, researchers and scholars, how do you feel when you read a scientific document that contains a lot of conjunctive adverbs, like ***however***, ***in addition***, etc?
Does it sound non-scientific to you?
If you did not notice it in other papers, then do you use these adverbs in your scientific documents?<issue_comment>username_1: Doesn't it depend on the context? I don't understand why you think it might be un-scientific. "However, at temperatures above 120C, the solution is liable to explode catastrophically". That might be worth saying in a scientific document.
Other literary phrases have a similar effect when the intent is to inform the reader. They also tend to mentally "flag" important phrases so that they are less likely to be overlooked. The word "however" especially gives the reader an immediate hint that the next phrase or sentence is somehow different from what has come before. A contrast is being presented. It "primes" the reader to consider the distinction. Other such words also have a traditional "flagging" use, though different from that of "however".
The language is what it is. But each field has certain phrasing that is common there and the writing might seem awkward when it is abandoned. I'll guess, without evidence, that "however" is less common in math writing, than in, say, philosophy, where alternative arguments might need to be introduced. In mathematics, a paper will tend to drive inexorably toward a conclusion. However, in philosophy alternate views are often introduced so as to find a way to some truth. (You see what I did there?)
Upvotes: 3 <issue_comment>username_2: I think connecting words are fine. Some people say they are superfluous, but I disagree with this view, especially if stated too strongly. The benefit of connectors is in showing a clear path, leading the reader along. Connectors are not the only part of having a clear structure, or the most important, but they play a role.
See the last paragraph on page 6 of Katzoff Clarity in Technical Reporting:
<https://ocw.mit.edu/courses/media-arts-and-sciences/mas-111-introduction-to-doing-research-in-media-arts-and-sciences-spring-2011/readings/MITMAS_111S11_read_ses5.pdf>
Upvotes: 1
|
2019/05/13
| 643
| 2,723
|
<issue_start>username_0: Like so many others, I scrape data from [Google Scholar](https://scholar.google.com) as a part of my lit review process, so that I can have a structured data set for meta-analysis of the literature.
I noticed that for a couple of topics of interest, the number of articles per year seems to be increasing until 2017, then drops off sharply.
I wonder if it's really safe to assume that fewer articles were published in 2018?
Is it possible that this means that the data through 2017 is relatively "complete", whereas journals and authors for 2018 may still be in the process of being added to Google's index, thus the total number is under-reported?
Has anyone encountered this?<issue_comment>username_1: Google Scholar has its strong points (e.g. indexing of grey literature that is not available in any regular scholarly database), but data quality is not one of them. Of course, this is not because Google lacks the ability to create a high quality database; it is rather because publishers refuse to grant it permission to create a high-quality database that it distributes for free. Google's index is based on Google Scholar's web spider whose completeness depends on what is available from public websites (Google strictly respects websites permissions; it makes no attempt to index anything where the websites ask it not to do so with a robots.txt entry). I would not be surprised if some publishers restrict Google's permission to index details of some of their most recent publications.
With that perspective, then for any given topic, if there is a sharp dropoff during or after 2017 (its unclear which is the case the way you worded the question), I would not consider that evidence of anything. That is, it is not necessarily evidence that people suddenly stopped publishing on that topic; it is only evidence that Google's index no longer contains that topic, for whatever reason. I know that I've seen quite a few articles that have charts like that and make claims like that, but I don't consider such claims reliable. (And when I peer-review articles that make such claims, I tell the authors so.)
To make any concrete, serious claim about change in publishing patterns of topics, you would need a more rigorous and systematic database source (such as Web of Knowledge, Scopus, etc.) and at least a two-year lag to make sure that all data is complete.
Upvotes: 4 [selected_answer]<issue_comment>username_2: You might find this open access resource helpful: Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources (<http://dx.doi.org/10.1002/jrsm.1378>)
Upvotes: 2
|
2019/05/13
| 784
| 3,425
|
<issue_start>username_0: I am finishing my thesis (Physics) and need to add a link to all the codes I created to do the analysis presented in the manuscript.
The codes are not polished, and they are intended to be made public eventually, but not just yet. So I need a platform that allows a private project to be viewed through a link, accessible only to the review committee and other students. GitHub and GitLab do not offer this, and I can't find a site that does.
There are a lot of codes, and they are separated into folders to distinguish the different areas in which they were used. So whatever site I use needs to support a folder structure similar to GitHub's.
Does anyone know where I could find this sort of feature? Thanks!
PS: If you know the appropriate tags to use for this question, I would appreciate you letting me know which ones they are.
--------
Based on your requirements, it doesn't seem that you need the code to be directly accessible from a browser. So the traditional solution would be to create an archive (e.g. zip file) protected by a password, and make this file accessible for instance through your institution website or Google Drive, Dropbox etc. Then you give the link and the password to whoever you want and they can download and browse it on their own computer.
* Pro: protected and accessible through link
* Cons: not accessible from a browser
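As a sketch of Option 1, the archive itself can be built with Python's standard library while preserving the folder structure (paths below are hypothetical). Note that the stdlib `zipfile` module cannot *write* password-protected archives, so the encryption step still needs an external tool such as `zip -e` or 7-Zip:

```python
# Sketch: bundle a project directory into a zip archive, keeping the
# folder structure intact. Password protection must be added afterwards
# with an external tool (e.g. `zip -e` or 7-Zip), since the standard
# library's zipfile module can only *read* encrypted archives.
import os
import zipfile

def archive_project(src_dir, zip_path):
    """Recursively add every file under src_dir to zip_path."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                # store paths relative to src_dir so the archive is tidy
                zf.write(full, os.path.relpath(full, src_dir))

# archive_project("thesis_codes", "thesis_codes.zip")  # hypothetical paths
```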
Option 2
--------
As an academic, you can have [private Github repositories for free](https://help.github.com/en/articles/applying-for-an-educator-or-researcher-discount) (probably also on other platforms). The people who need to access your code will need to have an account themselves, and you will have to explicitly grant access to them in the Github interface.
* Pro: protected and accessible from a browser
* Cons: requires people to create an account if they don't have one already
Option 3
--------
Since your code is meant to be made public eventually, you could simply create a public repository and give the link to the people who need to access it. It doesn't really matter that the code is not polished or even finished, Github and other platforms are designed for projects under active development anyway (and you can mention "work in progress" in the README file to make this clear). Additionally it's quite unlikely that anybody would find your code among the millions of repositories, unless you advertise it in a paper or webpage.
* Pro: accessible from a browser, no need for an account
* Cons: accessible to anyone who finds it
Option 4
--------
I hesitated mentioning this one since it's kind of hacky, but for the sake of completeness: [Overleaf](https://www.overleaf.com/) is a platform meant for sharing LaTeX documents. Like GitHub, it is backed by git, so as far as I know you can actually create a directory structure and push any file there. Then you can share the repository as a link, which contains a long random key, so it is practically impossible for somebody to find it by chance.
* Pro: private, accessible from a link, no account required for people to access it
* Cons: kind of a hack
Upvotes: 2 <issue_comment>username_2: If you just need to share a single file of code, and the file directory of the project does not matter, www.pastebin.com allows you to publish text files onto the Internet and share them with people by giving them links.
Upvotes: -1
|
2019/05/13
| 2,034
| 8,775
|
<issue_start>username_0: Basically, I coded several assignments and a friend turned in code which looks almost identical. I didn't give him my code, and, as far as I know, he didn't even have any way to access it - but it happened somehow. Anyway, the professor said that he can't prove that I let him have the code, so he won't/can't fail me for the class. However, he'll still give me 0s on the assignments he suspects cheating on.
Is there anything I can do? I already went over the code with the professor one on one, and proved with little doubt that I coded it myself. If the suspected cheating can't be proved, does he have the authority to give 0s on those assignments? Would an ombudsman be able to change the situation? I don't want to push this too hard and end up digging myself into a hole where an authority may decide to fail me (if that's possible), but I also proved that I coded those assignments and deserve a grade.
The professor said that if he figures out how he got my code (supposedly if he sees that I didn't willingly hand it over to him), then he'd give me the points back. But since the case is already going to be dismissed, is that a decision that he can make or something I can fight back?
I'm sure this all varies based on different universities and their policies, but any advice would be appreciated, thanks.
EDIT: I was mistaken to say that the case is going to be dismissed - Its most likely going to be though.
EDIT2: Between advice here and what I've found out on my end, I think I know what my options are and what I can do.<issue_comment>username_1: It seems that you are being treated unfairly, but it is a situation that can only be judged and handled locally. Your university probably has appeal processes and you can always go to the department head.
In programming as in mathematics there is often only one clear way to do something and if different students do "the expected" thing, then their programs come out similar - occasionally very similar.
"Suspicion" of cheating should never be the final determinant.
But no one here can help you. Seek a solution locally.
Upvotes: 6 <issue_comment>username_2: If you did the assignment by yourself, then try asking your teacher to take the total points from the assignment/s he suspects you cheated on and add those points to your next quiz/test that covers those assignments. This way you have a honest opportunity to earn those points back and prove that you know the material covered in those assignments.
I know some teachers will do this if a student simply didn't turn in an assignment because they were sick, perhaps this same solution can be used for your situation.
Upvotes: 2 <issue_comment>username_3: You say you've already talked to your professor 1-on-1 and showed that you indeed coded the assignment yourself. If so, I suggest getting your professor to also talk to your friend 1-on-1. If your friend can also show that he coded the assignment himself, it would be sensible to give both of you the points. If your friend can't, then you have a good case for why he should be getting the zeros, and not you.
Upvotes: 4 <issue_comment>username_4: I have heard about professors threatening all kinds of stuff because of alleged cheating. In our university, such cases may not be judged by the professors or teachers of the course (and doing so, especially making threats can get them into trouble). Instead, they are required to present it to a commission consisting of a few higher-ups in the department.
I suggest you look up how this works at your school or university. I imagine you can ask your mentor or adviser (or just another professor / teacher you trust) what the procedure is there.
It shouldn't be a problem to ask what the procedure is, and if you like what you hear you can always try to make your case. Though I'd first informally explain the situation to someone you trust, e.g. a mentor or adviser. If you are willing to make a case, you can probably tell the teacher beforehand; if they don't think they have a case, they might drop it altogether because they don't want to involve others (especially if they have little proof).
Once you do involve other people (teachers / some commission), make sure to make your case as tight as possible. If you can prove that you wrote it (e.g. a cloud service that shows when you saved, GitHub commits, or chat logs showing you working on it) then that can help you convince those who are judging the case.
Upvotes: 6 [selected_answer]<issue_comment>username_5: I have had a similar issue before, where I coded something that was extremely similar to something somebody else made. When confronted by the teacher, we went back and forth for a bit until I turned the project over to him and told him to ask me anything about the code and why I made the decisions that I did.
If you wrote the code yourself, you will be able to explain in detail your thought process and what your code is about. This should be enough proof that even if you didn't write the code yourself, you still have enough knowledge about the problem/solution for the teacher to accept it.
Upvotes: 2 <issue_comment>username_6: Our policy is that there cannot be a penalty for academic misconduct without handing in a report of the incident. Instructors can do an "Instructor Resolution Form", meaning the instructor and student agree on the penalty and nature of the incident, or an "Instructor Warning", which means just about the same thing, with less penalty. Both are reportable events, and enter into discussions of penalty for future findings of academic dishonesty.
Other than this, if the students don't agree, then they are free to request a hearing.
I guess my point is that raising a stink about this brings on the possibility that the incident is handled through more official channels, where you may be exonerated or not. I suspect if you're in the US that your school policies are similar to mine (because of all the liability concerns), and your instructor may be misusing the policy *in your favor*, as the suspected incident is not being reported.
FWIW, if the situation is as I suspect it is, I would discourage your professor's decision, and would recommend handling the incident officially.
Upvotes: 2 <issue_comment>username_7: It could be as simple as checking the creation dates of the files in question. While these *can* be spoofed, it's likely that someone who was so lazy as to copy the files in the first place would also be too lazy to think of and then change the dates.
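As a rough illustration (the file name in the comment is hypothetical), modification timestamps can be read programmatically. Note that a true "creation" time is not portably available on all platforms; `st_mtime` (last modification) is what most systems preserve, and, as said above, it can be spoofed:

```python
import datetime
import os

def last_modified(path):
    """Return a file's last-modification time as a datetime.

    os.stat exposes st_mtime (last modification time); a true
    creation time is not portably available, and both can be spoofed.
    """
    return datetime.datetime.fromtimestamp(os.stat(path).st_mtime)

# Hypothetical usage:
# print(last_modified("assignment1.py"))
```

Comparing these timestamps across the two submissions (and against the assignment's release date) is cheap evidence, but for the reasons above it should only ever be supporting evidence.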
Upvotes: 1 <issue_comment>username_8: This is a lower division computer science class, and the professor thinks that grading policy is acceptable? Seriously? Lower division computer science homework is convergent, almost the same way as a sheet of math problems is convergent. Expect multiple people to turn in very similar work. There are documented cases of identical work turned in by different students.
This grading policy is simply wrong, and I recall having to argue a few of these myself. Fortunately, my reputation preceded me and there was no doubt I was perfectly capable of the work and had no reason to cheat. I would argue the policy the first day of every class that had it in the syllabus on purpose, because I was a target and the electronic turn-in mechanisms were vulnerable. My professors continued to hold the policy, but not for my homework.
And oh yes, I reported a definite case of another student helping himself to my homework off the network servers. Unfortunately I couldn't tell who it was.
Thankfully I never had to go to the dean about it.
Only later did I learn about the convergence of lower division computer science homework. Well, I should have known in the first class I took. All perfect grades should look the same except for comments. I only escaped because my word choices are really unique. If I didn't bother to comment ...
Upvotes: 1 <issue_comment>username_9: ### Talk to your student union representative
A Professor is in a position of power; an individual student, to the university system as a whole, is a passing concern, and will often be ignored, even if s/he has been mistreated. However, the student *body* is large and stable; and if it is organized, is often able to exert counter-pressure if necessary.
Your student union representative should have some experience with similar situations, or at least easy access to people who have it; and they should have both the ability and the venue to bring up such issues - or at the very least the option of public protest.
Also, they will know your university or department's rules, guidelines and procedures, and will thus be able to give you better advice than us even for acting individually.
Upvotes: 1
|
2019/05/13
| 470
| 2,092
|
<issue_start>username_0: I've recently sent an article to a conference which is supposed to send the review result by June 1. I'm also due to participate in an interview, and acceptance of my paper at this conference would have a positive impact on the interview's outcome.
I want to know: is it OK to send an email to this conference and ask them to respond earlier (for example, 1 week earlier)? If so, what is the best way to express my request? Will it have a negative impact on the result of the review?<issue_comment>username_1: You can ask, of course. It is almost never wrong to ask. But I doubt that the conference committee will have any way to accommodate you. The one thing they could do is do a quick analysis and reject if they thought it was an especially poor paper. Don't fear that they would do that for the wrong reasons, but it would give you a quick answer. You might also get encouragement from a quick look, but almost certainly not quick acceptance.
The problem is that the committee has little control over the reviewers and their schedules.
Again, it isn't that your review would be prejudiced, but just that it is easier, in most cases, to recognize an inappropriate paper than a good one. The good ones are usually discussed in committee while the schedule is being filled out and there are more "acceptable" papers than will fit the schedule.
Upvotes: 0 <issue_comment>username_2: This is probably not a reasonable request. You can ask, but I doubt it will make a difference.
Firstly, the program committee is already working to make decisions as quickly as possible. If they could expedite the process, they would. But there are constraints on the time of reviewers.
The thing you need to note is that conferences typically make decisions for all papers together. It is generally not the case that papers are decided on one by one. Thus they cannot expedite your decision ahead of other papers. (And if they could, why would they?) This may simply be because they have a fixed number of papers that they can accept.
Upvotes: 3 [selected_answer]
|
2019/05/14
| 1,130
| 4,795
|
<issue_start>username_0: My long-term goal is to work at a research institute. While discussing my career plans with different mentors, I find that there is some confusion about whether a position at a government-funded research institute is still considered "academic." Although my question is mostly semantic in nature, I imagine there are some implications with how to market myself (i.e., what to include in a CV or how to describe myself on LinkedIn). Since many people with posts at research institutes either come from universities or go on to become professors, I think this question is relevant for this forum.<issue_comment>username_1: I don't have statistics, but I'd guess that if you asked the question of individuals working in such institutes it would depend on exactly how you asked the question. I would think that the quick answer would be no, but a more reflective answer, when provided with some "definition" of academic might be yes, but would depend on that definition.
I'm also going to guess that only a small proportion of members here are from such institutes but not associated with a university.
Of course, if a person moves from an institute to a university they would then likely self-describe as an academic.
But if you just ask "what are you", you'd be more likely to hear "researcher" than "academic".
But, as you say, it is mostly semantic, given that many academics do just about the same sort of thing as institute based researchers.
The situation is complicated a bit, of course, since many such institutes are associated with universities - some very closely. It might be further complicated in situations in which industry researchers have some "training" duties, even serving as committee members for graduate students.
Upvotes: 5 <issue_comment>username_2: **As background**, I work for a US Federal government agency. I am physically located at an agency center, but hold *courtesy* adjunct appointments at two universities where I serve on graduate committees, mentor undergraduates, collaborate with faculty, and attend seminars. Additionally, my agency has [cooperative research units](https://www.coopunits.org/Headquarters/) (or coops for short) located at universities.
We have a fair number of cross-overs where we hire faculty to join us as researchers or have our researchers leave to join university faculty. This is true both at the post-doc level and more advanced levels. Personally, when I started as a post doc with my agency, I did not know if I would become permanent staff (I was lucky and did, in my case).
**Answer to your title question:** I consider myself part of a broader academic community, but more tangentially involved rather than an academic. So, *no*, I am not an *academic*. However, my colleagues who are part of the coop units generally consider themselves to be academics because they are fully embedded in universities and must hold academic professor positions as part of their jobs (and they get both \*.gov and \*.edu email addresses).
**Answer to your underlying question about how to market yourself:** I am a research scientist. Many parts of my job are similar to a research-focused academic's (e.g., I plan and conduct studies, I apply for funding, I publish, etc). I would sell yourself as a *researcher*. That's the role you want to fill. For example, at a research institute, you would be conducting research. Or, at a university, you want them to hire you to conduct research. Last and pragmatically, if you're not able to get in with a research institute or university, industry hires *researchers*, but industry almost never hires *academics*.
Upvotes: 5 [selected_answer]<issue_comment>username_3: I am not sure that debating [the meaning of words](https://www.etymonline.com/word/Academe) is very useful in itself (the exact connotations will even differ between languages!), but perhaps there is some value in mentioning this:
Research and teaching are not really separable.
Wherever there is research being done, you will find students (usually called "graduate students"). Many of these institutes award PhD degrees (or collaborate with a university so that students from the institute can get a degree).
Many believe that *it is a core responsibility of researchers to pass on knowledge* (not merely to generate it). Look up any person on Wikipedia who famously contributed to knowledge, and the summary box will typically have a "Notable Students" section. Most researchers in the institutes where I worked do teach in a classroom setting occasionally, even if this is not a job requirement.
The concept of classroom teaching is a modern thing. Historically, words like *academia* were not exclusively associated with this particular method of passing on knowledge.
Upvotes: 0
|
2019/05/14
| 3,319
| 14,601
|
<issue_start>username_0: I started my PhD a few months ago.
Seriously, I don't know how I got accepted, but here I am. I am not a computer scientist, but come from a different STEM background. My knowledge in machine learning in general is pretty limited (for now, trying to catch up) and publishing something seems so far out of reach that it's overwhelming. I don't even know where to start. I'm reading papers on semantic segmentation and similar topics (which is the direction my topic goes in), and while I understand most of them technically, I completely lack the intuition to ask good questions about them, let alone identify some gap in which I can dive into to publish something. I read them and think "And now what?"
I don't dare to talk with my supervisor about this. She will think I am completely stupid and not suitable for the position.
Can someone suggest a pathway for me to somehow get the ball rolling and systematically work towards successfully graduating with a PhD? I'm really willing to put in work, it's just that right now I have the feeling I'm stumbling around in the dark and it's not productive at all.<issue_comment>username_1: **Talk to your supervisor**. Your supervisor accepted you, and therefore she must think that you had something to offer her research group. Don't phrase it like you did here. Instead, come to her with solutions and not problems. For instance, you need to learn to ask good questions. That's great. So why don't you ask her to recommend a good dataset that is common to use so that you can start exploring different methods?
Ultimately, you are in a PhD program and one of the biggest things you need to learn is how to read the literature to discover the open problems. This requires you to read far and wide. A big thing that will probably help you is to try to replicate results. You aren't doing this for publications, but to better understand what was done and why, and what the limitations were. Take a recent paper and try to recreate it. If you can't, go to a less recent paper that that one cites and so on. Build your way up.
For what it's worth, I think that a lot of PhD students come in with a general understanding of the field but not much of a deep understanding. As an anecdote, in the beginning of my PhD I'd find a paper that was written years before I started and think "Oh no! They've solved all of the problems I thought were important!" I'd rush to my supervisor to show him the brilliant research, and he'd show me the limitations and how they were being addressed. It seemed to me at the time that these papers solved their problems, but by spending time speaking to him I realized that that was not true - perhaps the data set was particularly simple, perhaps the researchers didn't try to expand their results too far, perhaps the domain in which the algorithm worked was so restricted that it was of no practical use.
Eventually, I learned how to do that on my own, first by looking for common obvious problems, then by thinking more subtly as I got better - and as I started doing my own research and making my own mistakes.
Upvotes: 8 [selected_answer]<issue_comment>username_2: What you experience is quite common, even for people who have better-fitting backgrounds. The way to handle a strange (as in unknown) field is to implement what has been done before. Start with common knowledge items. Try to run kNN on the iris dataset. Make sure you get the same result as others. Apply the same approach to other datasets and observe the results. While you are doing this you will get the intuition about what you are doing.
Finally, move to more recent published work. And at the end of the frontier, you are bound to find gaps that need filling. It may take time, but if you are determined enough you will be able to get there.
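The "kNN on the iris dataset" exercise mentioned above can be sketched roughly as follows (assuming scikit-learn is installed; the split size and choice of k here are arbitrary):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load the classic iris dataset and hold out a test set.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Fit a 5-nearest-neighbours classifier and check its accuracy.
model = KNeighborsClassifier(n_neighbors=5)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```

Comparing the accuracy you get against what others report, then varying k or switching datasets, is exactly the kind of replication that builds intuition.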
Upvotes: 2 <issue_comment>username_3: First of all, I know exactly where you're coming from. I also started a PhD in machine learning, whilst coming from a different field entirely. I knew next to nothing about computer science; I did not even have any math background to speak of. I have never before and never again felt as stupid and incompetent as I did in the first year of my PhD. I am now about to graduate with several high impact publications, so stuff can work out just fine, even if you feel like this.
So here are the things I would do (and did):
1. Really do talk to your supervisor, reluctant though you may be. I can almost guarantee she thinks your struggle is completely normal and expected. She is in the ideal position to give you essential papers to read and tell you what you should understand about those. She can also help you formulate your first research questions.
2. When you do read papers, focus on understanding the concepts, rather than all the details. Especially at the beginning trying to understand everything can become overwhelming quickly if you don't have the right background. Also try to think for yourself what the limits of the research are and what next steps would be. That kind of thinking just takes practice and experience. Both will come with time.
3. Practice the practical things. This can also greatly help your understanding. In machine learning, for almost every problem and method there's an example dataset/tutorial. In the beginning just working through this and playing with parameters can really give you a feel of how an algorithm works.
But most of all, relax and give yourself some time. Yes, you will need to work hard to get to know the field, but don't panic. Understanding will come. And use your supervisor, that's what she's there for. Every supervisor knows new PhD students take some investment and time to become useful. And even in the very unlikely event she would not want to help you out with this, that's better to find out now than a year or more in. Good luck!
Upvotes: 5 <issue_comment>username_4: Apart from all the great answers here I have one additional suggestion and some encouraging words.
A PhD is as much an education as it is a job, perhaps even more so the former. Therefore it is also quite common to follow courses, both on general skills needed for a career in academia and on the specific topic you are working in.
Therefore I would propose to try and find a high quality course on machine learning to familiarize yourself with the field and gain skills to start working with. One that takes 1 to a few weeks at most. I myself have taken such courses on other topics which helped me a lot. If those are not an option, explore online courses.
I have been in a similar situation of feeling utterly lost during my PhD and also afterwards. It wasn't until I taught myself how to program and accomplished developing my own software project that I finally felt I had something to contribute to science and became more confident about my skills. The longer you work in this field, the better you realize that the learning never ends and also that nearly every scientist has a piece of insecurity inside them. They are only human after all.
Working in academia can be overwhelming and you have to accept that you can never obtain all available knowledge. Especially in the early years of your career this sensation can feel paralyzing and your supervisor hopefully knows or recognizes that feeling too. You will have to find your own way how to not let that stop you and as your learn more and more by trial and error as well as small successes you will become more confident. With this gain in knowledge, experience and confidence you will also start having your own ideas and opinions and spot gaps in current knowledge more easily, but these things take time.
The most important thing now is to work on obtaining enough knowledge to understand what it is you are supposed to do, and formulate for yourself why you are doing it, why it is relevant. The latter can help you feel more confident and motivated to achieve your goals. This will also help you determine whether the goals set are realistic or an insurmountable mountain.
I do not know how skilled your supervisor is in this field. Sometimes it happens that projects are set up with goals that require skills which are outside the range of expertise of the PI, and they possibly underestimated how challenging the task is. So if this is the case for your supervisor, it could be that she did not purely overestimate your abilities, but also (or instead) underestimated the difficulty of the research project for someone with little or no experience with the techniques (machine learning in this case).
A healthy dialogue with your supervisor is therefore crucial to identify obstacles of all the varieties I described above early on, in order to take timely steps to fill the gaps in knowledge, identify limitations, evaluate how realistic the goals are and, if needed, adjust the course of the project towards a successful compromise if possible.
Upvotes: 2 <issue_comment>username_5: It might be the most important skill in a PhD to:
(i) understand what you do not understand,
(ii) find out if you need to understand it, and then
(iii) find out how to understand it.
For example: (i) if I am not understanding anything about the papers I am reading (happens all the time), I know I will have to dedicate months to grasp the basic keywords properly. This usually requires me to tackle such concepts from many different angles in basic frameworks, and to take pauses and come back to them, to let my brain "absorb" such concepts (ii)-(iii). For me, talking to people without fearing my own ignorance is really a conditio sine qua non (ii)-(iii), especially if I need a tailored explanation. If I feel new to the entire subject, I will put my best effort into talking to people to understand the way one thinks about the subject, at a more philosophical level.
(I did not address any machine learning aspect, as I interpreted the question as a more general PhD concern)
Upvotes: 3 <issue_comment>username_6: Find a simple machine learning package with a tutorial. I am not going to suggest one as that would be out-of-scope, but obviously Google is going to be your friend in that respect. It doesn't have to be the final one you will be using; it is just to get the ball rolling, so probably your emphasis should be on finding something with a good set of samples.
Install it. Follow the tutorials. Play around with it. You should get some idea then of how machine learning actually works. Try and link up the practical stuff with the theory that you have read.
Move on from there. Maybe try another package or more complex exercises. Try and steer things in the direction of your chosen project.
Upvotes: 2 <issue_comment>username_7: The short answer: If you want to "do it yourself", I'd recommend a learning pathway from an online learning platform such as Udemy.com (my current favorite).
The specific one you do isn't the important thing; what IS important is that you begin creating project**s** of your own. You will feel competent very quickly after making even 1 fully functional project from the ground up. Optimizing it and making others will multiply the effect. If this is your full time focus, in the span of 1 week you can feel like a completely different person in this field.
Second, and more important, is what most responders have been saying: talk to your professor. You don't have to present answers to your own problems, but you *do* have to be able to articulate what the problem is and any thoughts that might lead to a solution. You could even present suggestions from this thread!
Don't fret. Anyone that knows something had to learn it at some point. Kudos for taking that dive into your field of interest.
Upvotes: 2 <issue_comment>username_8: Maybe I don't have much to add on top of previous commentators, but I am in your shoes at the moment.
I am learning how to learn.
First you have to identify an area in machine learning to focus on (machine learning core or machine learning applications).
Focusing on one area does not mean ignoring the other area completely; you still have to have some knowledge about it.
Then get a bit deeper into that area.
Read survey papers (they will help a lot) instead of reading generally.
If you have someone to ask (like your supervisor), that is a big advantage.
I am sure you will be OK with time. Just don't give up.
Upvotes: 1 <issue_comment>username_9: What an irresistible question. I hope you've got enough encouragement to go on learning from the other posts. But as a scientist, you will have residual doubt that a few examples don't make a rule. I personally would hate the patronizing tone.
So, I would like to nudge you to think about what is "the purpose" of your PhD, as it might influence your strategy going about it. The most important variable in this respect, I claim, is whether you are determined to pursue an academic career, in which case take note of the rules of this game. If it's more of a personal quest, more power to you.
To answer how to "systematically work towards successfully graduating with a PhD" to the point, consider the counter question: What are the major determinants? Of course, there is coursework and study, etc., but ultimately it is your supervisor's call. So, unless codified elsewhere, you could try to talk to your supervisor specifically about this.
If I were your (possibly old-fashioned) supervisor, I'd say "demonstrate the ability to conduct your own research". This is where some of the other answers come in. At least that's more than my supervisor had told me.
Godspeed. Many of us envy your station.
Upvotes: 2 <issue_comment>username_10: Just wanted to add to the many fine answers and comments: as a brand new Ph.D. candidate (many years ago), the director had us all sitting very informally and assured us that "none of you are a mistake" and that everybody there had been chosen for their unique background and contributions they could make.
Much later on, I found out about "imposter syndrome," and I think that is a very real issue that many students face, and must overcome. So much of completing a doctoral program is getting out of your comfort zone to address work and move towards completing little goals that will later add up to bigger work products.
I highly recommend you talk to your adviser, and seek out a mentor: somebody who has a Ph.D. and knows what you are going through, but isn't in your "chain of command," so you can talk about these issues and not worry about how you are seen inside the program.
Upvotes: 2
|
2019/05/14
| 343
| 1,425
|
<issue_start>username_0: I recently submitted my thesis for viva and have since noticed that by transferring from one computer to another endnote has changed my references. The end result of this is that there are 3 extra references at the beginning of the list (which throws out all the other references by 3). Is there anything I can do about this? Are my supervisors allowed to contact the examiners and make them aware? Or do I need to suck it up and face it head on at viva.<issue_comment>username_1: Whether anyone will notice/care and what, if any, the consequences are, is completely up to your institution/the examiners.
However, if your supervisors are reasonable people, it's best to ask them. Maybe they know what can or should be done.
Upvotes: 0 <issue_comment>username_2: Don't worry, it's a very minor issue. In the worst case it will be part of some minor corrections you'll have to do after the viva.
It's important that you don't contact the examiners yourself, as this might be against the rules. Instead talk to your supervisor, they will know what can be done in compliance with the institution rules; and even if they don't they will know who to ask at least.
As far as I know it's usually fine for the supervisor to contact the examiners, but rules may differ by institution, and since the PhD viva is an official examination it's crucial to do things by the book.
Upvotes: 2 [selected_answer]
|
2019/05/14
| 511
| 2,059
|
<issue_start>username_0: I'm writing an article containing one main theorem, with the other results essentially corollaries of that theorem. The largest part of the article is thus spend proving this main theorem.
It feels natural to me to write the proof as one long "story", most of which is told outside of "proof environments". So something like this
>
> bla bla, consider this and that construction. Notice that bla bla. But we saw before that such and such, and conclude:
>
>
> Proposition 5: (concisely write down the conclusion from the arguments laid out above)
>
>
> *(there is no proof environment here, the "proof" was essentially given in the block of text above the proposition)* Keeping this mind, we now observe that bla bla...............
>
>
>
So this style results in a section without any actual proof environments, and the propositions/lemmas in the section are essentially "milestones" in the argument.
Is this style frowned upon, or considered annoying/difficult to read? To me it feels more natural than the more common style of mostly just a string of lemmas leading to a theorem.
|
2019/05/15
| 481
| 1,794
|
<issue_start>username_0: I know it's normal in a lot of European countries, but I don't know which way the Swedes roll.<issue_comment>username_1: Swedish Wikipedia does not mention a photo: <https://sv.wikipedia.org/wiki/Curriculum_vitae>
This website mentions that a picture is not required in the cover letter and does not mention the issue at all in CV: <http://www.yourlivingcity.com/stockholm/work-money/living-sweden-swedish-cv/>
No pictures at <https://www.cv-mallar.se/> or <https://akademssr.se/dokument/cv-exempel>
From nearby countries, photos are not assumed in academic settings in Denmark, Finland, or Norway (based on personal experience). They do seem to be more prevalent in German-speaking countries, which do have close cultural ties to the Nordic countries.
Overall, it seems that a picture is not required, but, according to a comment from xLeitix, it would not be unusual to have one.
Upvotes: 3 <issue_comment>username_2: No, it is not common in Sweden to add a photo to your CV. I have even heard that it is not advised. The reasoning I have heard was to keep the CV bare-bones and down to the facts. They should evaluate you based on what's on paper, rather than how you look. It goes a bit hand in hand with the general Swedish approach of neutrality/objectivity.
In Denmark, on the other hand, it is almost expected to have a photo on your CV. The reasoning there, albeit anecdotal, is that when a hiring manager or a recruiter has your CV they know who to look for in a room/career fair etc. You sort of establish a first contact, of sorts.
That being said, in academic context, it does not really matter much. Most academic labs recruit internationally (at least on paper) and would not expect you to follow the local traditions in job applications.
Upvotes: 1
|
2019/05/15
| 4,269
| 17,264
|
<issue_start>username_0: In the commercial world, Windows utterly dominates. As of time of writing its market share is [somewhere around 85%](https://www.netmarketshare.com/operating-system-market-share.aspx?options=%7B%22filter%22%3A%7B%22%24and%22%3A%5B%7B%22deviceType%22%3A%7B%22%24in%22%3A%5B%22Desktop%2Flaptop%22%5D%7D%7D%5D%7D%2C%22dateLabel%22%3A%22Trend%22%2C%22attributes%22%3A%22share%22%2C%22group%22%3A%22platform%22%2C%22sort%22%3A%7B%22share%22%3A-1%7D%2C%22id%22%3A%22platformsDesktop%22%2C%22dateInterval%22%3A%22Monthly%22%2C%22dateStart%22%3A%222018-05%22%2C%22dateEnd%22%3A%222019-04%22%2C%22segments%22%3A%22-1000%22%7D).
I don't have similar statistics for academia, but my personal observation is that most professors seem to prefer Macs, especially on personal devices. Linux is also a lot more common than in the public at large. Windows computers are still around, but much less common - maybe around 1/3 of computers run Windows, a substantial fraction of which seem to be because they're preinstalled with Windows.
Why the preference for Macs / Linux?<issue_comment>username_1: It's not entirely accurate to state that Windows 'dominates' in the commercial world. While that's generally true for desktop and laptop computers, the [vast majority of web (and other) servers are Linux-based](https://en.wikipedia.org/wiki/Usage_share_of_operating_systems), 100% of the [TOP500](https://en.wikipedia.org/wiki/TOP500), a list of the 500 most powerful computer systems in the world, run Linux, and Linux, especially if you include Android, is almost certainly *the* most prevalent operating system (kernel) in the world in terms of number of devices.
A lot of academic institutions, especially in more computing-related fields, are big supporters of open-source/FLOSS communities, and Linux is obviously a very big underlying part of such communities.
And, most key in my experience, for academic/scientific environments, Mac and Linux have enormous computational advantages over Windows: the underlying Unix architecture model and the inherent support of very useful shells like `bash`, very useful device handles (e.g. `/dev/random`, or device I/O handles), etc. These things make the development of custom code, especially for scientific, computational, or instrument-interactive purposes, a lot easier than on Windows.
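For instance, the device handles mentioned above are directly usable from any POSIX shell. A minimal sketch (the byte count is arbitrary, and `/dev/urandom` is the non-blocking sibling of `/dev/random`):

```shell
# Read 8 random bytes from the kernel's entropy device and hex-encode them.
rand_hex=$(head -c 8 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "$rand_hex"
```

The same one-liner works unchanged on Linux and macOS; on Windows it needs extra tooling such as WSL or Cygwin.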
Even if you're not using custom code, I'd venture to guess that absolutely *any* computing cluster worth its salt is being run on some kind of Linux. If you want to just use that cluster, it's far less hassle to SSH in with a Mac or Linux computer with SSH built-in, than having to use something like Putty and fiddle around on Windows.
In all, science is very heavily dependent on computing. And high-level computing is very dependent on Linux. In my view this is the biggest reason that very compatible Unix-like systems like Mac and Linux are far more prevalent.
Upvotes: 8 [selected_answer]<issue_comment>username_2: >
> In the commercial world, Windows utterly dominates.
>
>
>
If you're referring to researchers in a commercial setting - that's really not the case, or at least - you don't have any evidence to support this. In my experience, research staff at commercial outfits also tends to prefer Unix-based operating systems.
I would say the reason is that these systems are a lot more inviting and amenable to software development; this has at least the following consequences:
* A lot more variety of niche applications, libraries and tools which researchers need to use or just find useful.
* More amenable to customization, modification and adaptation - which researchers are more likely to want to do.
* Easier for a researcher him/herself to code something.
Also, GNU/Linux is free and you can just copy it - and (almost) all software you can get on Linux; with Windows, not only do these things cost money - in an organizational setting at least - but they require the hassle of managing licenses. Ugh.
>
> I don't have similar statistics for academia, but my personal observation is that most professors seem to prefer Macs, especially on personal devices.
>
>
>
Maybe that's true in the US, and even there I wouldn't be sure. Anyway, I would speculate that is more of a combination of vanity/fashionability, and the fact that Macs have very good hardware and well-put-together desktop environments with few bugs and usability issues. But - they are also quite expensive, so it's more likely for richer people to have them.
Upvotes: 4 <issue_comment>username_3: It's not just Linux and macOS, it's UNIX (and derivatives) in general. There are a lot of reasons, but a few things worth noting that might not have been covered yet:
* [BSD](https://en.wikipedia.org/wiki/Berkeley_Software_Distribution) (incl. the [BSD Sockets API](https://en.wikipedia.org/wiki/Berkeley_sockets)), was originally developed at [University of California, Berkeley](https://en.wikipedia.org/wiki/University_of_California,_Berkeley). BSD subsystems are [utilized by macOS](https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/KernelProgramming/BSD/BSD.html), Darwin, FreeBSD, OpenBSD, et al.
* [GNU](https://en.wikipedia.org/wiki/GNU) (typically utilized by most Linux systems) is also heavily rooted in academia. Founder [<NAME>](https://en.wikipedia.org/wiki/Richard_Stallman) has something like 20 honorary doctorates and professorships.
* [Mach](https://en.wikipedia.org/wiki/Mach_(kernel)) (utilized by macOS, iOS, XNU, Darwin, GNU Mach, Hurd, et al.) was originally developed at [Carnegie Mellon University](https://en.wikipedia.org/wiki/Carnegie_Mellon_University).
* Non-proprietary, free and open-source systems, software, projects, etc. (like Linux) aren't funded by high volume sales (like Microsoft Windows), so their development largely depends on contributions of knowledge and labour from postgrad research teams, and the academic & intellectual communities at large.
* These kinds of operating systems are particularly well suited for [distributed](https://en.wikipedia.org/wiki/Distributed_computing) & [parallel](https://en.wikipedia.org/wiki/Parallel_computing) computing, [supercomputing](https://en.wikipedia.org/wiki/Supercomputer), rapid development, prototyping, experimentation, [computer science](https://en.wikipedia.org/wiki/Computer_science), and science & engineering in general. When people accustomed to well-funded, proprietary products like macOS and Windows 10 (i.e. most people) try a Linux distribution for the first time, often their first impression is that it's ugly, unpolished, even "un-finished" or incomplete — like it's still in development.
And they're not wrong. It's constantly being developed, and kind of open-ended; you can add or remove anything you don't need or want. You can adapt it to your needs or to suit any specific purpose, and you're not stuck managing a giant, stubborn monolith that forces you to adapt to it, instead of the other way around. It's kind of like leasing a mansion vs owning & living in a workshop; they're suited to different purposes.
Academics often choose macOS when they're looking for a bit of a compromise between the dichotomy. It's nice and clean, and simple, and easy, and uniform, and respectable, so you can bring your mom or your girlfriend or your grandma, but you can peel back the shiny stuff anytime you want to get back to work in that familiar UNIX-style environment.
Upvotes: 5 <issue_comment>username_4: Windows reboots itself without permission. This is very inconvenient if your computer is doing something at the time. Mac/Linux do not have this problem, so they are preferred.
Some specialised hardware only comes with Windows drivers, so Windows is required. Occasionally, hardware requires a different operating system.
In practice I use all three.
Upvotes: 4 <issue_comment>username_5: In my experience, younger people go for flashy, fashionable, stuff. Macs certainly fit the bill there, and Apple marketing specifically on that, turning their products into fashion items, does a lot for that.
Older people in sysadmin jobs are often almost religiously fanatical in their support for Linux. They picked this up 15-20 years ago during the operating-system wars; it has little basis in reality, but it sticks to this day. That's not to say there aren't good reasons to use Linux on occasion, but it's certainly not the one-size-fits-all perfect solution these people think and claim (and push on those who depend on them for computer hardware and software support). This causes a prevalence of Linux workstations in places where these people end up in the IT departments, which by now is more likely to be universities than companies, as most companies prefer more pragmatic employees.
Personally, I use Macs, Windows machines, and Linux machines, the latter mostly (almost exclusively) as servers. In the server world, transitioning from HP, Sun, and IBM (among other) Unix platforms to Linux-based ones is a major cost saving while retaining a Unix-style platform, easing the transition; that is the original reason most servers now run Linux. On workstations, it's much more a matter of personal preference (myself, I still consider Linux clunky on the desktop, though less so than in the past), and there are religious fanatics supporting different operating systems. In academia those gravitate towards Linux, in the graphics industry towards Macs, and what few are left in industry usually towards Windows (after the demise of OS/2, which had its own fanatical fanbase in the 1990s).
In the corporate world, many companies default to Windows because it's easy to use for the end user and Microsoft has a pretty good support infrastructure, giving you a single place for all your problems if you can't solve them yourself.
Upvotes: 3 <issue_comment>username_6: I am a Mac user on the client side, but my scripts run on a Linux-based server. Because the ways of working are similar, I don't need to learn two sets of logic. I try to avoid Windows as much as possible.
My main reasons for Linux:
- free, available to anyone at any time; this enables free research, free science, free development, free sharing
- many free and open source projects shared by universities
- open, yes - you can adjust or create your own Linux if you need something special, or find out how things are made if something doesn’t work as expected
- scalable, secure, social
My main reason for Macs:
- user friendly (for Mac users haha)
- hardware and operating system tuned to work together, which results in more effectiveness, efficiency, and fewer failures; many startups and educational institutions decide on Macs because, over a period of 3-5 years, the total cost is lower than for Windows-based systems even though the purchase cost is higher
- easy to port Linux stuff to macOS and get the benefits of Linux
- secure, stable, beautiful
Upvotes: 2 <issue_comment>username_7: I am an academic and have more than 10 years of experience in the academia, including my doctoral studies.
I use Windows as the main operating system because most programs are made for Windows and because of subjective preferences.
Within Windows, I use a Linux virtual machine (VM) to run scientific software that is only or mainly developed to run on Linux. Linux also has a powerful command line and scripting, with which amazing things can be done. Additionally, many computing clusters run Linux, and they are useful for large calculations.
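To illustrate the kind of one-liner that Linux scripting makes trivial, here is a small sketch (the sample text is made up) that finds the most frequent word in a string using only standard tools:

```shell
# Classic Unix pipeline: one word per line, sort, count duplicates,
# sort by count descending, keep the winner.
text="the cat sat on the mat the end"
top_word=$(printf '%s\n' $text | sort | uniq -c | sort -rn | head -n 1 | awk '{print $2}')
echo "$top_word"   # prints "the"
```

Note that `$text` is intentionally left unquoted so the shell's word splitting turns it into one argument per word.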
I used Mac in the past but now don't see any advantages in doing so over my current combined Windows/Linux setup.
Upvotes: 3 <issue_comment>username_8: While @tjt263's answer sums up most reasons, I specifically want to point out that this seems to be the case mainly in STEM-related fields. I work in a STEM working group as well, and this is definitely the case here. In my brother's group, which is civil engineering, they all use Windows. My girlfriend works in a humanities-related group, and they all use Windows.
The biggest reason for Linux I have observed is: it is just simpler. My group is related to machine learning, deep learning, computer vision, etc. Sometimes it seems almost everyone in this field is producing "throw-away code": just publish a paper and that's it for the code. Since there is no end product involved, it does not matter if it runs only on one machine, or more generally, only on Linux. So most people just use Linux because the libraries they need can be installed with a simple command on the command line; they do not even care (or know) where anything is installed... it just works.
Also you have to keep in mind that Linux is free while Windows is not.
Upvotes: 3 <issue_comment>username_9: I would characterize the ostensible preference for Windows outside of academia as mainly **inertia.**
In corporate environments, the choice of OS is often dictated by application compatibility. If you have a corporate intranet which was designed to only work on Internet Explorer, you are stuck on Windows for practical reasons -- replacing it would be an expensive undertaking and bring little value, so you just live with the consequences of that previous decision.
Of course, that's just an example -- more common is probably the use of an array of enterprise applications which were designed for Windows, or work better on Windows.
Add to this the cost of hardware and compatibility concerns. From a cost-of-ownership perspective, the availability of commodity hardware *with supported drivers* is still a strong selling point for Windows, coupled with the fact that this is what you can expect current and future employees to be comfortable with.
Microsoft has some propaganda which attempts to argue that the TCO (Total Cost of Ownership) is favorable on Windows, but I find these claims dubious at best; and certainly, if you factor in user satisfaction, many users who have a say in how their hardware budget is spent definitely swing the other way.
Ultimately, for many users, the deciding factor is whether they need strict Office compatibility or not in their day-to-day work. If your organization heavily uses one or more of the Microsoft so-called productivity apps, the free offerings are often unacceptable, even if they nominally manage to open and save files in these formats.
Where these popular applications are not crucial, it seems that you can be more productive if you can avoid them.
In academia, then, central use cases outside of proper research are article composition and review (where typically popular word processors are less than ideal, and some disciplines heavily favor e.g. TeX), email, and web-oriented activities, where fortunately free and open source offerings are generally roughly on par with commercial alternatives. Whether you then prefer (or can afford) Mac or Linux is down to personal preference and budget.
Finally, of course, you might have an important research application which happens to be primarily targeted for Linux platforms (or occasionally, as the case may be, Mac, though this is probably no longer very common).
Upvotes: 3 <issue_comment>username_10: There is a lot of talk here on software availability. Let's talk about software developing. The whole thing is *quite* multifaceted though.
While Windows has a quite dominant share in applications, for a long time it was harder / more costly to start writing software for Windows. Nowadays the cost / availability factor is quite relaxed. However, some architectural obstacles remain. It is just easier to obtain and use (open source) libraries under a UNIX-like system, which both Mac OS X and Linux are.
Linux thrives on the *controllability* of everything, on typical young-adult rebellion, and on experienced people on the server side. Macs have the image of "just working", even if it is deteriorating. However, from the developer's view, it's much easier to turn a Mac into a typical programmer's workhorse than a Windows machine. There are not many more extra steps than with Linux.
With Macs, the looks, the availability of drivers (like, 10 years ago there was a real driver hunt under Linux, if you wanted to support, say, both sound and suspend mode in your laptop), combined with relative ease of programming usage speak for them.
Of course, there *are* some special software products that work only on Windows or, conversely, only on Linux and nowhere else. So sometimes you need to adapt. This seems to be rather field-specific.
Next point: Apple has screwed up many things, but the beamer presentation works flawlessly, if you still have your adapter. I have seen many academics struggling with their Windows and Linux laptops at conferences.
Surprisingly, cost is a lesser factor. But if you compare a sanely configured Mac with a proper business notebook (say, by Dell), the difference is low.
A further point: less telemetry and better CUDA runtimes under Linux. I know multiple people (including myself) who switched their workstations to Linux in the middle of the Windows 10 rollout.
---
**tl;dr:** Programmers often prefer UNIX-like systems; performance, eye candy, a proper image, and similar prices for durable hardware round out the deal.
Upvotes: 2
|
2019/05/15
| 245
| 1,060
|
<issue_start>username_0: It has been a month since my paper was accepted and published. The paper is not yet assigned to any issue in the journal, but it is published and available via springer.com. The paper is in the Google Scholar list of papers for the corresponding author, but it is not yet shown in my list of papers. When I search for it, no result is returned.
What is the problem?<issue_comment>username_1: Google Scholar takes quite some time to update its database. In any case, you can always add your articles manually.
Upvotes: 1 <issue_comment>username_2: Google Scholar's automated indexing algorithm makes mistakes. It can overlook some of your papers and it may also list some papers in your profile that are not yours.
To add a paper, press the "+" button and select "Add articles" to pick from a list of possible matches Google found, or press "Add article manually" to enter the title, authors, journal, etc., yourself.
To delete a paper that's not yours, click the checkbox beside it and then click "Delete".
Upvotes: 3 [selected_answer]
|
2019/05/15
| 3,410
| 14,533
|
<issue_start>username_0: I am considering applying to CS PhD programs at some of the top research universities in the United States.
While I cannot speak for other fields, I know that in my field (artificial intelligence), there has been a major push to increase the diversity of practitioners in the field. Here are a few articles about this diversity initiative:
1. Article about lack of diversity in the field of AI:
<https://www.technologyreview.com/s/610192/were-in-a-diversity-crisis-black-in-ais-founder-on-whats-poisoning-the-algorithms-in-our>
2. Tweet thread by CS admissions chair at Cornell discussing diversity initiatives: <https://twitter.com/davidbindel/status/992165303747461122?lang=en>
3. Tweet thread by PhD candidate at University of Toronto (a top school in AI) clarifying the importance of diversity in the field: <https://twitter.com/leeclemnet/status/1040030107887435776>
There are many other initiatives, such as workshops at NeurIPS (typically regarded as one of the top 3 conferences in AI) exclusively for members of underrepresented groups. I'm not sure what exactly the workshops entail, but here are the links to the relevant organizations: [Black in AI](https://blackinai.github.io/), [Women in Machine Learning](https://wimlworkshop.org/), [LatinX in AI](https://www.latinxinai.org/).
The impression I get is that a lot of the talk around diversity tends to revolve around some key physical identifiers such as ethnicity and sex/gender; although perhaps I have not looked broadly enough at the full scope of diversity initiatives in the field and there may be initiatives that revolve around things like socioeconomic background, geographic background and national origin.
I can understand that when individuals of a particular ethnicity (such as African-Americans) and individuals of particular sex (such as women) are particularly underrepresented in the higher echelons of the field, people begin asking questions as to why that is; since, after all, ethnicity and sex should not be predictors of success in the field yet they clearly are.
**Question**
I'm a student studying at a university that is ranked in the top-10 globally on both the Times Higher Education and QS rankings, and am planning on submitting some PhD (CS) applications to some of the top unis in the US in the upcoming fall. Given the preoccupation with diversity by practitioners in my field (in both industry and academia), I'd like to know to what extent my ethnicity will factor into the admissions decisions of committees in the US.<issue_comment>username_1: Most (not all) US universities are in favor of diversity in the student body. However, this view is highly controversial in some circles and is actively attacked in the courts.
However, the purpose of the US notion of Affirmative Action was to try to ameliorate the effects of poor schools provided to minority and poor students along with active discrimination faced by such students as well as that faced by women in many fields. As such the rules are intended to benefit those who have grown up in the US but have been disadvantaged in many ways along their path to higher education.
The dilemma is that we have a belief in this country that everyone should be treated equally and haven't really found universally accepted ways to account for the fact that it isn't a reality.
But if you are a citizen of an African country rather than an American of (partial) African descent, then the rules, such as they still exist, won't really apply to you. The effect at the doctoral level is small in any case. The desire for a diverse student body is balanced by the desire to choose the "best" (most prepared) candidates. So, don't expect any strong bias in your favor. Hopefully you won't find any against you.
My advice is to assume that there will be no effect at all and to stress your qualification in any applications. Why can you be expected to be a success in your future studies? What is it in your knowledge and work ethic that makes you an excellent candidate? Every student needs to make that case of course. I think you are safest to assume it is the same for yourself.
I'll note that diversity of students based on their country of citizenship is also valuable to a university, as it often represents a diversity of viewpoint based on history and different educational systems. Universities do tend to value that when all else is equal (even if approximately). This can be especially valuable when questions of ethics (and similar) arise, even in a STEM field.
So, a big part of the desire for diversity in US colleges and universities is a desire for *diversity of viewpoint*, not just an ethnic (or gender) diversity. It might be hard to untangle the two in practice, of course.
Upvotes: 2 <issue_comment>username_2: If there is one thing that academia is particularly bad at, it is *transparency in decision making*. Why do some people get admitted and not others? Why are some people hired and not others? Why are some people promoted and not others? From the bottom to the top of academic hierarchy, you'll be hard-pressed to ever get a straight answer, and answers you do get are always suspect because the nature of human decision-making is itself complex and opaque. There are plenty of reasons for this, from self-protection from criticism and gaming of the system, to rapidly changing criteria based on past experience and argument, to the fact that so much of the decision making is based on subjective ratings provided by individuals (often down to individual faculty members deciding if they want to work with a student).
To cut to the heart of your question, while it is not any sort of prescribed standard at the graduate level, you can safely assume race, ethnicity, national origin, and gender will be in some way be "taken into account" by some committees and individual professors. In many states this is actually illegal, explicitly forbidden by law - you can still assume it will sometimes happen anyway. On the other hand, some people will simply ignore demographic factors to the extent they are physically able, regardless of what they are told to do. As Universities have a history of avoiding transparency in admissions, you are extremely unlikely to ever know when it was considered, how much effect it had, whether it was positive or negative, and whether it ended up materially effecting the decision to admit you or not. You cannot know, they will basically never tell you, they will tend to go out of their way to avoid writing it down, and it is common for many people in the process to not even know with any certainty what factors ultimately "mattered".
However, from what I've seen on the inside and the outside of academia and from talking to fellow academics, you can safely assume at the graduate level that other factors matter way, way more. It is simply too high-stakes a decision for most schools to be altruistic, and they tend to steadfastly refuse to intentionally admit people who are utterly unqualified simply because of some demographic fact about them. The vast majority of Universities will only admit you if they think you will do well there, because having a bunch of under-performing students who are radically unqualified to pursue research and teaching is an albatross around the neck that few graduate schools will willingly tolerate. This is even more true at the PhD level, because they aren't cashing in on tuition (which is heavily marked up for international students, because money), so performance is king.
From talking to people working on diversity initiatives, I'll also give you another important insight: the diversity initiatives you hear about are not actually a preoccupation of the majority of faculty and staff, for good or ill. According to people working on actually making these initiatives successful, they report that the vast majority of people mostly want to ignore the issue and focus on other things, and so they work so very hard to make the initiatives public and heavily seen because otherwise they feel they will be unable to make any changes and they will be crushed by the majority weight of indifference mixed in with a heavy dose of active hostility to their efforts. Higher ed can look monolithic from the outside - but on the inside it is highly factious, and what appears as consensus is often a thin facade.
Because of all the above, the only advice I think makes any sense is treat the process as any other student should: apply broadly, see what offers you get, and talk with prospective advisors about what they are interested in. You are welcome to ask them if they were involved in your admission decision and then ask them why they wanted to admit you; I did that, and while it can feel a little awkward at first, a little bit of humor makes it easier and most professors when questioned directly, sincerely, and politely gave very direct answers. Some even said what parts of my profile they thought were a concern, and what other parts they thought would be a strength. Everyone gave wildly different answers, there was no consistency - no two people see the same person the same way.
Once you've got all your offers and finished talking with people, you can make your decision then on what seems like the best opportunity for you.
Upvotes: 6 [selected_answer]<issue_comment>username_3: In addition to underrepresented racial and gender groups, some other diversity goals that many universities often care about are geographic diversity (not everyone coming from NY and CA) and admitting more first-generation college students.
I want to first echo what everyone else said, which is that schools don't admit people who they don't think will succeed, and if you're admitted they think you're a good fit. Second, the idea that if you put race and gender considerations aside then the default would be some kind of "pure meritocracy" is a fiction. There's a huge amount of randomness in students life experience and opportunities, in how good your letter writers are in writing letters, in who happened to be in the admissions committee that year and which letter writers they know. Schools also might want to balance between fields, and so if AI happens to get too many applicants at some school some year it might be harder for you than for someone in a different subfield. (And at the undergraduate level, there's a huge advantage given to legacies and athletes in obscure sports, both of which skew very rich and white.) At the graduate level probably the biggest of all, is there's a huge advantage given at state schools to American citizens and permanent residents (whose tuition waivers are significantly cheaper for the department to pay). If you're not a permanent resident, the disadvantage that causes is going to wildly outweigh any advantages that diversity considerations could possibly give.
Finally, I want to say that there are lots of simple practical reasons that diversity in hiring and admissions is practically important for departments. One of the most important ways that we are evaluated by the school is in attracting and retaining majors in our degree. On the whole, students are less likely to pick a major if they don't see any faculty or TFs with similar backgrounds to them. The African-American faculty I know are inundated with students who want to meet with them so that they can talk to a faculty person who understands their experience. Departments who don't have enough women faculty or don't have any African-American faculty are unable to do as good of a job of meeting the practical needs of their students.
Upvotes: 3 <issue_comment>username_4: I'll answer as far as how I perceived this working in the PhD program I graduated from, which was not in CS/AI.
A) No one was admitted who was unlikely to succeed in the program. No one was admitted who didn't belong here on merit in order to fill some sort of quota. **No token admissions.**
B) In every applying class, there were more applicants that the program wanted to admit than the program had funding to accommodate. Some of those students could be supported directly by a lab and join even if the program was out of money, but they would not have a year to rotate and test out different labs for a good fit.
C) Some additional funding was available to students from under-represented groups, however, only available to US citizens/permanent residents. Funding sources for international students were more limited, and they typically came either with funding from their home country or joined a lab directly that funded them.
---
In summary, the main way that ethnicity could matter was by funding for a student to join the program as a fellow/trainee rather than directly funded by a lab, which would give an opportunity to rotate in labs for a year that they might not have had otherwise. This was only even relevant for students who were already desired by the program on merit. As an international student this wouldn't have applied to you. Other programs may have different approaches that apply to international as well as domestic students.
Upvotes: 3 <issue_comment>username_5: I agree with username_2's answer, but wanted to add one thing. Everyone else seems to be assuming that if there is any bias in the admissions process, it would be *in favor* of underrepresented applicants.
This is the opposite of the conventional wisdom. In fact, it's well-documented in certain settings, e.g. applying to a job in the US, that when people make decisions about who is "qualified" or not, they are likely to be biased *against* women and underrepresented minorities. (This bias is often shown just based on the applicant's name, or other biographic details -- not a picture.)
I don't know to what extent this subconscious bias applies to academics at top research universities, but from personal experience I am guessing it really does, for some professors. You can safely assume:
1. **Some professors reading your application will show subconscious bias *against* your application** based on your name, country of origin, or other demographic details (disfavoring women and URRM applicants)
2. **Some other professors reading your application may consciously favor your application** because they wish to increase diversity in the department, and admit more URRM or women applicants.
3. **Either way, it is unlikely to be the biggest factor in the decision** (see username_2's answer).
Upvotes: 3
2019/05/15
<issue_start>username_0: I am interviewing for a part-time (50%) program-lead/lecturer position at a university in Germany. The position is to set up and lecture on a new master's degree program. I will be finding more out at the interview, but I suspect that very little groundwork has been laid yet and the job of this position for the next year will be to create the program basically from scratch and recruit a cohort of students and teach at least one class on it when it starts in 2020.
I like the idea of the position, but I think that it is probably not really a 50% time position (i.e. it will take more than 20 hours a week to do this job). I have already asked the professor who listed the position about the chances of it increasing to 100% time; he said that would only happen if there was external funding, but he also noted that I would have to negotiate my step on the pay scale with HR directly. This implies to me that any negotiations about the position will be with HR.
Anyone familiar with German academic postings, what is the culture around negotiating positions? Obviously I've applied to the job, but I also know that I think this is a job that will need more than 20 hours a week to do. I would prefer to negotiate to at least 66% time, but alternatively I would accept the position so long as I was placed at step 3 or 4 on the scale (which I've read is quite hard to get done).
Does anyone have any thoughts on how HR will respond to asking to increase the part-time position from 50% to 66%?<issue_comment>username_1: First of all, you might want to remove and rewrite your comment a bit: when I google the phrasing, I can find the exact job offer even though you removed the names. But you are right, it is indeed a good "proper" university, even one of the better-known ones. (Which I guess in Germany counts much more than any specific ranking.)
I am not sure if this is a satisfactory answer or just an overly long comment, but since nobody else has answered yet and I already asked for clarification, I'll write down my thoughts anyway. Some of my answer might not fit, since my experience is only with German universities in the STEM field, where money may be less tight than in your area. Personally I find their offer insultingly low. I have been paid more as a lowly PhD student with far fewer responsibilities and in a place with a lower cost of living. (Have a look at the rent offers before you decide; the place you are going is a beautiful city, but definitely on the expensive side.)
The short answer is: you can try, but I am not sure you will achieve much. The problem is due to how budgets are generally handled in Germany, which is usually in multiples of positions. Basically, whoever is hiring you (in this case the professor or his department) has a certain (not necessarily whole) number of positions and decides how much of that they are giving you. Then they send the contract off to HR, which determines what level you are set at (this depends only on how many years you have worked in a similar position, and they indeed sometimes have an extremely narrow definition of "similar"). Neither side can really influence what the other decides, and mostly they do not care to.
So to increase the percentage you need to ask the professor, who apparently already said no, presumably since he/she is out of positions. If they really want to hire you, they might divert more from elsewhere, but be aware that that might come with added responsibilities. Apart from that you could try asking for money from less tight budgets such as increased travel money. And of course you can apply for grants and external funding, but that generally takes a while.
Upvotes: 1 <issue_comment>username_2: Given that the position as you describe in the comments "offers the possibility to enroll in [their] PhD program or to complete a habilitation", this should be interpreted as a typical **qualification position** in the German academic system. The concept of these positions is that you are an employee of the university, with some tasks in teaching, administration, and/or research, but you get time to work on your next qualification in an academic career. There is a dedicated federal law (informal name "Wissenschaftszeitvertragsgesetz", officially "Gesetz über befristete Arbeitsverträge in der Wissenschaft") that governs these positions.
In some fields (yes, usually the less well funded ones), it is quite common that only the "tasks" part is funded, but you have to do the "qualification" part unpaid, especially when working towards a PhD. Nowadays there is indeed a recommendation to fund all such positions at at least 65%, but given budget constraints etc. you cannot always count on that. Nevertheless, with this type of position, the university needs to make sure that you have sufficient time to work on your qualification, so in principle the setup should be such that you need no more than the ~20 h/week to do the administrative/teaching part. Depending on the culture of the research team you're joining, you may still be expected to spend most or all of the "qualification time" there, so things may get a bit hard to keep apart.
The HR department will **not** negotiate with you about the part-time ratio nor about the salary group (such as E13 or E14) for this position. That is decided by the person that is filling the position, taking their budgetary constraints into account.
What the HR department will indeed do is determine the salary level (step 1, 2, 3, ...) that you are rated at. However, step 2 typically already requires at least some years of experience in a very similar position, and step 3 or 4 is indeed next to impossible, as you already figured out.
My recommendation would be to only take this type of position if you are more interested in the qualification part than in the teaching / administrative part. In the specific case you describe here it worries me a bit that the qualification part seems to be hardly described in the posting, which may indicate that the university "abuses" a qualification position to get some core teaching / administrative tasks done cheaply. At the interview, definitely ask about the research part, i.e., what topics you could work on, or come with some ideas of your own, and if they don't seem interested in that it's a big warning sign.
If this is for doing a PhD and you're in principle qualified to do that in Germany, then it should not be too difficult to find a position where the funded part is a small teaching support role or even working on a funded research project that can be connected to your PhD topic. Any of these would be much more preferable for doing a PhD than the type of position you describe here.
Upvotes: 3 [selected_answer]<issue_comment>username_3: From your description I suspect that the university received extra funding for just half a (probably E13) position. That means HR cannot increase that; the money just isn't there. Apparently one of the profs is willing to look for additional funding, which is your best bet for increasing the hours. However, additional funding also means additional tasks. With HR you can probably only negotiate minor steps in your pay scale. That is important, but is not your question.
Upvotes: 2
2019/05/15
<issue_start>username_0: The issue has crept up on me slowly over the last several years. I am increasingly aware of the massive debt that many of my students are taking on, debt which is far beyond the sort of debt that I incurred as an undergraduate in the 1980s. Because of this, in recent semesters I have found it somewhat difficult to fail students. Instead of simply asking myself "does this student deserve to fail this class?", I find myself asking "does this student deserve to have their life ruined?" In many cases (e.g. students who are already on academic probation) this is not much of an exaggeration. It is a very bad situation to find yourself in your early 20s with no college degree but $30,000 in debt. In some cases, I am aware that a decision of mine might be a contributing cause of a student ending up in just such a situation. I can no longer regard a failing grade as a relatively minor matter (like a speeding ticket).
How do professors reconcile their de jure role as guardians of academic integrity with their de facto role of being (at least in part) responsible for their students' economic future?<issue_comment>username_1: Ultimately you aren't responsible for the behavior of your students nor for their bad decisions. You aren't responsible, either, for how they react to a failure. For some students, as I have seen, a failure can be a wake-up that gets them onto a better path.
You certainly aren't responsible for the terrible way that we finance higher education in the US as long as you are willing to pay taxes for the common good.
I'm assuming, of course, that you are responsive to their needs and that you try to do what you can to help them before the failure occurs, but sometimes you just have to call it what it is. It may help them change majors. It may help them find a career path that they would enjoy more. Lots of things are possible, but all outside your control.
But when you do fail students it is helpful, when possible, to advise them about their options. Simply continuing on without some change in behavior or attitude is likely to just get them deeper into debt, both educationally and financially.
Be honest, but be helpful.
I'll also note that it is possible to design a system in which it is hard to fail for a student willing to work. For me this meant the possibility of a student repeating work for a better grade. Grades weren't given as gifts, but on demonstration that the important lessons were actually learned, even if not at the first trial.
Upvotes: 6 <issue_comment>username_2: **You are responsible for teaching the students to the best of your ability, and to judge their capacities to use what they have learned**. That judgment is made based on their grades. So you have several things to think about here.
1. **Are you teaching the best you can?** Teaching does not mean "downloading facts", as I'm sure you're aware. It means "transferring knowledge, skills, and attitudes". That "transfer" part is the important bit — transfer means that the student is able to reproduce and use what they've learned. Is your teaching enhancing this transfer? This is a tough nut to crack — how do you know? Are you planning your assessments so that you can really tease out the nuances and to see which students really understand, or are they just assessments because you need to assign a grade somehow? Your institution might have a center for teaching how to teach, and if you feel that you aren't teaching your best class then start there. Otherwise, lots of books and resources exist, which I'm sure we can all provide.
2. **Are you assessing fairly?** Fairly doesn't mean easily. It means that you are creating assessments that actually test understanding and that a student with reasonable ability will be able to succeed at. It also means to understand their context. It's easy to make a "really good" assessment that everyone fails because they also have three projects and two midterms in their other courses. Are your expectations clearly communicated, and are you ensuring that you only assess what you've asked for? (that doesn't mean that you can't expect students to go above and beyond, just that you need to tell them you expect them to)
3. **Are you assessing accurately?** I'm distinguishing this from "fair", but you can treat "fair" and "accurate" as two sides of the same coin. Accurate means that your assessments are set up so that appropriate weight is given to appropriate topics, and that your tests actually enable students to display their understanding and capacities, rather than whether they memorized the example or found the answer on stack exchange. Creating fair assessments is challenging, but there is a lot of research and resources available.
4. **Are you giving every student the chance to seek help?** I often find that if students are slipping through the cracks, setting up a regular meeting with them to keep them on track can do wonders. However, I am in a job in which I'm required to work with students like this, so it's easy for me to do. If you are a busy research professor who is teaching two courses per semester while juggling other things, it's a lot harder. Ultimately, the final exam is not when a student should find out they failed the course. They should know that they are on a bad path long before then, and should have opportunities to get on track.
If you are doing these things, then **you** are not causing them financial ruin. It's similarly not fair to say that the students are causing this — you don't know their context and can't make the judgment. Perhaps they went to a bad high school that just didn't prepare them, or perhaps they are always on the train to another city because their parents are sick and they can't attend classes. It is not your responsibility to help them in this way unless you are capable of providing everyone the same help. Which brings me to the most unfortunate reality of post-secondary education:
**Not everyone can make it**. For whatever reason, some students simply will not demonstrate that their abilities are up to the standard that has been set. Notice the wording I used there — I didn't say that they don't have those abilities, but that they will not **demonstrate** that they have those abilities. Provided you are assessing them fairly/accurately, teaching the best you can, and giving the help they pay for, then you are providing them with every opportunity to demonstrate those abilities. If they are unable to do so, then it would be unethical to let them pass regardless of the reason.
Upvotes: 8 [selected_answer]<issue_comment>username_3: Take the example of a medical student. Do you want to pass someone who does not have the necessary knowledge to treat patients correctly? It is your duty to make sure that only the ones who know what they are doing will pass. This may be less strict in other subjects but the principle is the same.
--- EDIT ---
Another example where this becomes clear would be an airplane engineer or pilot who does not have the necessary knowledge (thanks to Mike's comment below!).
Upvotes: 7 <issue_comment>username_4: You're not failing the students. Assuming you performed your teaching job well, and you're grading them fairly, the students are failing themselves. They didn't study well enough to pass the class, or maybe they just don't have a talent for this material. Giving someone a passing grade when they haven't earned it is not fair to them, and it dilutes the value of passing grades for all the other students. If they need to use what you're teaching them in their career, they're not going to be as successful. The passing grade you gave them doesn't actually make them competent.
When you go to college, you're not buying a degree. No matter how much debt you get yourself into, you're not entitled to the diploma, you still have to do the work and pass the classes. And as a teacher, it's your responsibility to make sure they've done that before passing them.
Upvotes: 5 <issue_comment>username_5: Your compassion is admirable but is not unique. Other answers have provided more detailed answers about grading, content prep and presentation, students 'earning' their grade and that is all true.
Responsibility vs Accountability
--------------------------------
With my answer I want to add to the discussion the point that, while the economic situation of a student may seem dire, it is still the student's responsibility and not yours. Them making you aware of their situation (even innocently) can have the effect of shifting that responsibility unfairly onto one who is not accountable for it (you).
It's Not Life or Death, Until It Is
-----------------------------------
When I was in college there was a professor who would respond to students complaining about difficult tests or mounds of homework by recalling how different it was in the 1970s, when a male student failing out of college became eligible for the draft, with a likely outcome of being sent to Vietnam, which carried with it a very real risk to his health or lifespan. He wasn't saying that he failed people in order to send them to be drafted; he was saying the motivation of students facing such risks was much higher in those days. It wasn't him that decided who was born male, who would be drafted, who would be given a gun and sent out into the jungle.
Guilt, party of 2
-----------------
Did the student confer with you before choosing their major? Did they ask your opinion about which school to attend? Did they check with you before taking out those loans? You are neither responsible nor accountable for their choices.
Advice
------
While I would not ask you to stop feeling compassion, you should not allow their predicament to turn into guilt on your part. Should you turn your back on these students? Absolutely not, but what then?
Either add something to your syllabus, or say something in your first lecture, to the effect of: "I know some of you may be relying on financial aid (loans, scholarships) which carries an eligibility component that may be affected by how well you do in this class. If that applies to you, I would suggest you get with me early in the semester so I can do what I can to point you to resources to improve your chances of success in this class."
In effect, you are saying "I can point you to water but I can't make you drink". That is the most that can reasonably be asked of you, since you can't do the work for them or make them attend class, etc.
Upvotes: 3 <issue_comment>username_6: There are two sides of this issue, the way I see it.
Firstly, from your personal point of view, I do not think that you should bear this, indeed extremely high, responsibility of choosing between academic integrity and the economic ruin of your students. But, as you have identified, this is happening, and I believe it is something you should do something about, as it is directly affecting your life in ways it shouldn't. If I think the government is not properly regulating a field, say food standards, then I complain about it, I vote more carefully, I may become an activist on the matter, or ultimately maybe even go into politics with the goal of improving those regulations. And that's what I believe you should do.
Secondly, as a professor, you are part of the leading elite of society. This privileged position comes with a heavy responsibility for leading the entire society, a responsibility you voluntarily accepted when you chose your career path. Of course you share this responsibility with the whole of society, but as a shaper of the future generation, your share is simply larger than that of most people.
You are not just the de jure guardian of academic integrity; you are a model, a leader, a shaper of society. Given the weight of your position, I find it your duty to always consider all your roles with every decision. Decide in such a manner that you can peacefully place your head on the pillow every night, knowing that you did right by the people who offered you this privilege. I can't, and I believe nobody can, tell you which side you should err on. Your decisions are extremely personal, they make you who you are, and each situation is different and may warrant a different approach.
I would like to take the opportunity to comment a bit on what seems to be the common thinking:
* *as long as you teach well, it's not your problem, they are failing themselves by taking too much debt and not studying hard enough. You are paid to guard academic integrity (e.g. giving grades) and not to empathize with other people*
The way I see it, this approach just ditches your responsibility. It even sounds like an excuse, and that's because it is the kind of excuse other guardians have made, at great moral cost, but in exchange for significant privileges.
* *what if the student will become a doctor and later kill people*
This is simply one particular case in which you have to weigh all the factors before making a good decision. I would say the risk of the student killing other people in the future is almost always something that should tip the scale towards failing the student. Still, even in this situation, if we consider the extreme case that I knew a student would be drafted if I failed them, which would lead to an almost certain death, I would just let them pass, even at significant cost to me, because I am a staunch opponent of the death penalty.
Ultimately, just because you sought and received the privilege and responsibility of a professor, it doesn't mean it has to be like that for your entire life. You are free to pursue your own happiness, even if that means giving up the responsibility and the associated privileges, but just judging by the question you asked here... that would be a pity.
In conclusion, being in your situation is hard, it will never be easy, but it really doesn't have to be this hard and it's up to you to change it.
Upvotes: 2 <issue_comment>username_7: Something other answers did not yet address - by giving unfairly good grades to undeserving students, ***you are dramatically, and unfairly, penalizing good students***.
Students - in large part - choose to attend a university based on its academic reputation, perhaps paying a premium.
If your lax grading methods graduate unfit students instead of failing them, this will materially affect successful students' reputations as graduates of your school, since employers or graduate schools would have no way of knowing whether someone graduated from your program because they were a good student, OR because you took pity on them. So everyone will be tarred with the bad reputation, and that would negatively affect their lives and careers.
Upvotes: 4 <issue_comment>username_8: The other answers are great, but I want to stress one thing not really mentioned in other answers: **if you are concerned, talk to them before they fail**. Don't single them out for help because again it won't be fair, but still: tell them explicitly that if they continue performing as they have, they are going to fail the class. Point them to the various resources that are available (e.g. undergraduate tutors, office hours). If they still fail, as the proverb goes, you can lead a horse to water, but you can't make it drink.
Upvotes: 3 <issue_comment>username_9: By giving students grades they didn't earn, don't deserve, you devalue the entire educational institution and the titles and diplomas it hands out.
These students, now holding titles and diplomas that by any standard they should not have, enter the marketplace and get jobs based on those titles and diplomas.
They WILL fail in those jobs, hopefully before millions upon millions of dollars are lost, or, worse, lives.
When that happens, their coworkers and employers will start to question whether the degree they presented when applying and brag about over lunch is really worth anything at all. That may well lead to that company no longer hiring from the university that handed out that degree.
I've seen it happen (though not with a university, this was a post-grad certification system for professionals). The job performance of employees with certification from a specific company was overall so bad that the company I worked with started refusing to hire anyone who had said certification unless it was backed by years of field experience even for junior jobs, and then we did solid reference checks with former employers just to make sure as well as grill them deeply on the topics that should have been covered by that certification.
There's a reason it's tough to get a degree, and that reason is to separate the few who earn it from the many who (for whatever reason) don't.
You have a responsibility to that few to not discredit their achievements and effort by diluting the degree through reduction of standards.
Upvotes: 2 <issue_comment>username_10: I generally design my exams so that even a student with relatively limited understanding of the course can pass. I would really only consider a student who has achieved the highest grade (> 80%) to have a good understanding of the material, and I often see just-passing exams (50-60%) where I think that the student really had no idea, but somehow managed to scrape together enough marks to get over the line.
Armed with that viewpoint, when I see a student who fails, I think they must have really deserved it! So I don't feel too bad about failing them; they need to put in some effort to fail my course!
Upvotes: 0 <issue_comment>username_11: Here in Germany universities are quite strict. After my daughter failed the very last maths exam three times, she was out.
However, strict does not mean heartless or planless. Her uni had an arrangement with the tech unis; one of them agreed to accept the exams she had already passed and enrol her in a similar course, for the third year. She was not alone in this.
This way all that time and investment was not lost.
You could organise a similar scheme with nearby polys or tech colleges to take up your students who didn't quite manage but are still worthy of education. So everybody wins.
Upvotes: 2 <issue_comment>username_12: I think you are trying to assume too much responsibility.
The way you grade your students (assuming no malicious intent on your part) is not what is (possibly) causing their ruin. What might be causing it (besides, perhaps, some bad decisions of their own) is the system which forces them to go into debt or does not provide them with sufficient opportunities to study properly (e.g. by having them go to bad schools, or forcing them to work long hours in a part-time job).
You are not directly responsible for any of this.
Indirectly, you can try to change the system. I can think of a couple of ways to do it:
1. You can support (or even start) political initiatives which aim to change what you consider to be unfair.
2. If you think your job is part of the problem, you could quit, and look for a different job which would have less of a bad impact (e.g. a worse paid job at a place with lower fees).
3. You could do something to actively sabotage the system.
I think your suggestion (passing students who don't really merit a passing grade) falls under the sabotage category. Indeed, by passing students regardless of their merit, you undermine the whole system --- the more teachers at your institution do that, the less value a passing grade will have. At the logical conclusion, the whole higher education system would collapse, and something better could come up from the ruins. Maybe (but probably not).
Either way, in the interim, this would harm all the *other* students (who do merit a passing grade) by diminishing their accomplishments, other people (when the "passed" ones get a job they are not competent for, perhaps instead of someone more qualified), and possibly the ones you wanted to help (by pushing them to an ill-chosen career path).
I think ideally, the first solution I have mentioned should be the best, and sabotage should only be used as a last resort.
Upvotes: 1 <issue_comment>username_13: All things in life can cause you economic ruin these days. House mortgages, car loans, hacking, frauds.
It is basically a society of debt-based servitude where the banks are our overlords. You are likely just one well-meaning person in this huge mess and likely cannot affect this fact. As far as I know, you also cannot affect what these students' future employers will base their decisions on.
It is not your moral obligation (nor is it even within your power) to try and steer where these students end up. It is the privilege of these debt overlords to do.
Your job is to teach and to grade. Do your job, peasant.
Upvotes: -1 <issue_comment>username_14: The degree that you'll help them get will **only benefit the students who will use it to get a job they are not truly qualified for**. If a student gets a job in a different area, they'd do just fine without the degree. If they eventually become good at what you were supposed to teach them in the first place (by getting education elsewhere or getting work experience in a lower position), that's also doable without a degree.
A degree is essentially a certificate which says: don't keep this person mopping floors for three years, they already know what they need to know, and are ready to take responsibilities. Do you think this can be said of a student who is nevertheless going to fail your class?
And again: failing a degree is not a death sentence. Nobody's ruined and doomed for life because they have college debt and no degree at age 25. Someone who ends up in debt for their entire life typically hasn't just picked a wrong class in college: usually there's a pattern of bad decisions throughout their whole life.
Upvotes: 3 <issue_comment>username_15: You are on the front lines of the defense of the value of your institution's degree. If you water down the degree, then potential employers of your students are not guaranteed quality, and those students who deserve passing grades will bear the result.
You do have the responsibility of making sure you're teaching to your best capability, as others have pointed out.
Upvotes: 0 <issue_comment>username_16: Being put on academic probation was one of the best things that ever happened to me. I switched to community college, took art, music, and math classes, learned who I **really** was instead of who I **thought** I was, and eventually graduated with a Bachelor's and have had a lovely career doing things that are truly meaningful to me.
The bad grades I "earned" in my first two years at a university were trying to tell me several things: I was not at the right school. I was not in the right major. I was not in the right place in my life mentally and emotionally. But the grades alone didn't help me understand. The letter that said I was on probation was what I needed in order to understand that I was doing the wrong things. If professors had just passed me because they felt for me, I would probably be far less happy with my life right now.
I would argue that students are paying you to give them the feedback they need on their work and views; they are not paying you for credits and/or a degree.
Upvotes: 4 <issue_comment>username_17: Some of the discussion surrounding this question demonstrates that people are viewing it through two different implicit lenses, and I'd like to make them explicit.
* **Considered as an act in isolation** A single decision to give a student a "kindly" mark has quite diffuse negative consequences. As long as we assume that the rest of the student's record is accurate, the harms to students who have legitimately passed the course, to the institution awarding the credential, and to the organizations that will later trust that credential as a mark of suitability are all very modest. Down in the weeds, really.
* **Considered as a pattern** If we assume that "give the poor kid a break" is a social norm that could be applied over and over again to a single student, then the picture is different. Some subset of students will be awarded credentials that don't mean what they say on the label, their subsequent failure in work environments will drag down the reputation of the institution and with it the value of degrees *earned* by students who didn't get a bunch of soft passes. And the damage starts even earlier than that, because professors in second year and later courses will have classes coming in less prepared than they should be and will have to give time over to remedial explanations and hand-holding to the detriment of progress in their course.
Several times during my career I've agonized over the kind of decision facing John, and I'm hugely sympathetic. I feel for students who have gotten themselves into a financial bind they don't have the wherewithal to haul themselves out of. But by the time a student is standing in my office explaining that if they don't pass my class they won't graduate and, and, and ... they have already had years of warning signs. That can't be on one professor's head.
More than once I discreetly (no names) sounded out some of my colleagues about a student, only to come away reasonably convinced that the subject had already been given a break; probably more than once.
---
As an aside, I'm convinced that a significant amount of blame for the binds the students get into in the US lies with a financing system that "just grew that way" through a series of short-sighted and frankly stupid decisions made by politicians who were personally isolated from the consequences. Both major parties have been vastly wrongheaded in their own ways, but the damage is worse because of the ways they have compromised; the old joke about bipartisanship leading to stupid-evil legislation applies.
Upvotes: 3 <issue_comment>username_18: >
> How do professors reconcile their de jure role as guardians of academic integrity with their de facto role of being (at least in part) responsible for their students' economic future?
>
>
>
This is an interesting question, and the premise in the part I quoted is good. The distinction that should be made when considering the whole of the question is that professors may have such a guardian role, but not in their grading duties.
Indeed, professors (and many people in the academic system) have some power to minimise the impact of failing students (on those students and society) but this should not be done by passing failing students.
---
Instead, there are many ways to ensure it won't come to that (or at least minimise the failing rates).
The best way to do that (as a professor of a course) is to make sure students are well prepared. While you may not want to set high entry requirements, you can provide a lot of information on what you expect students to know before starting your course.
For example, for a more advanced course, you should:
* Refer to **previous courses** that you will rely on. Make sure you explain which topics will be used. Try linking to the course page of that previous course (if at all possible) so students can look through old slides and exams to get a sense of what's required for your course.
* Explain what **other skills** are needed. Will you be using some obscure programming language? Provide such information beforehand so students might do some research on that themselves when they aren't that busy with other courses.
* Make sure this **information** is known. Put it on your course page, make sure mentors and advisers recommending the course to students know it too, so they better prepare students before they enter your course.
---
On a broader scale there are also things you can do. You aren't the only one with a conscience: discuss this with others in your department and hold a brainstorm every two or three months (including academic staff, but maybe also invite students). Discuss why you have to fail students and what can be done to reduce failing rates **without dumbing down** your exams and courses.
Make sure the results of those sessions are passed on to relevant bodies in or outside your organisation. Inform student representatives, relevant departments in your university and maybe even national organisations (when it comes to freshmen coming into your university) of the results. What can they do to make sure everyone has a better experience?
Upvotes: 1 <issue_comment>username_19: Going to college in a country with high tuition fees is a risk. A potentially very huge risk.
I think it is important to establish a scheme that makes sure that weak students notice this immediately. This means that in the very first year, it should become clear to the students whether they will make it or not.
Upvotes: 1 <issue_comment>username_20: As (most of) the other answers say, you should not do anything in your capacity as grader.
However, you are (presumably) not only a university employee, but also a human! And you could try to help as a human.
For example, are you involved in (university) politics? Try to change the system so that poorer students receive more financial help. Can you volunteer for groups helping students in your neighborhood/family (e. g. watching kids of studying parents)? This would help a lot. Can you donate money to student unions or other things they need?
(For some of these suggestions (e. g. watching kids), be careful not to help your own students to avoid conflicts of interests. But helping other students is certainly ok.)
Upvotes: 1 <issue_comment>username_21: Here's an added perspective: If you start passing students for non-academic reasons (or really even just start spending mental energy considering that), there is *no lower bound* to that concern. Wherever you decide to set your threshold, there will always be students further down academically. There will always be some students to be "ruined", and the more your institution lowers standards, the further down your population will drift over time, in like response to the lowered expectations.
This is notably coming from my position as a faculty member at a (large, urban, northeast US) community college. Nationally, [community colleges only have a 22% graduation rate after 3 years](https://www.communitycollegereview.com/blog/the-catch-22-of-community-college-graduation-rates) (ours is somewhat higher than that). I've routinely witnessed courses from the remedial level up with 50%-60% failure rates. (In contrast, 4-year colleges only have 60% graduation rates after six years.) Many of our students cannot read or write at an elementary level, cannot handle the simplest elementary arithmetic, have emotional/intellectual disabilities, etc. There is no threshold we could possibly set that would serve to pass all or most of these students.
Example 1: I've had students ask whether they were guaranteed a passing grade as long as they physically attended every class session, and were incredulous when the answer was "no". (Apparently that's fairly common in some courses now.)
Example 2: A few years ago we had a university-wide remedial algebra exam for all students (e.g., at the 8th-9th grade level). I attended a central planning meeting where someone asserted something like, "Our goal was to make an exam that no one could possibly fail. We have not succeeded, because 50% of the students are still failing." (This was given as a positive argument for further reducing the standard of the exam.) Ultimately this was found to be an impossible endeavor, so the college has now abandoned the exam and the basic-algebra requirement entirely.
Example 3: In light of budget and enrollment pressures, among the university's new endeavors is to more widely expand advertising and enrollment to even more severely learning-disabled and intellectually-disabled prospective students.
The OP identifies a keenly-felt and significant problem. But granted my perspective, I might suggest that there is *no solution* to this problem. No matter where you set the threshold or cutoff, there are more (many more) students lined up further down (arbitrarily further down) the skill and intellectual ladder hoping for the same judgement. It's a cycle that has no hypothetical end.
Upvotes: 2
|
2019/05/15
| 1,341
| 5,659
|
<issue_start>username_0: Abstract: A professor in my department, who seems to be a climate change denier, is offering me funding for a project that is NOT related to climate change. Should I take it?
For my stipend at my department, I have to teach a lot. This implies 12-15 hours of work per week for the next year. It is a lot of time I could put into research/writing papers instead.
I have been offered funding to conduct research under a professor. The research question sounds interesting, is unrelated to climate science, and is probably going to lead to publications along with relieving me of teaching duties. However, the professor offering this funding says that sea levels are not rising at an accelerated rate.
Perhaps one extreme way of looking at this would be to call him a climate change denier. However, he did claim that he does not know about other indicators of climate change; he only disputed the claim, made by climate change proponents, that sea levels are rising at an accelerated rate.
He said that NOAA data suggests this. However, on doing a simple google search, I found this article on the NOAA website- <https://oceanservice.noaa.gov/facts/sealevel.html>
To quote from the article, "Yes, sea level is indeed rising at an increasing rate. Global sea level has been rising over the past century, and the rate has increased over recent decades."
Hence, his source of information also seems to be spurious. Should I accept funding from him? It seems to look good for my life and career in general. But should I compromise on my beliefs and take funding from him?<issue_comment>username_1: A lot of people you will come across have ideas that you would consider screwy if you only knew of them. But you don't.
Unless the idea repels you in some way, note that if the money isn't, itself, tainted and you aren't asked to do something you consider wrong, then there is no real reason to turn it down.
If the money was coming from a Coal Baron, you might want to refuse it. Likewise if you think you are being compromised in some way.
But hopefully the work you do will have some positive impact on the world in spite of any ideas of the professor.
Upvotes: 3 <issue_comment>username_2: It is ultimately up to you. However, if you refuse to collaborate with people who have different political, religious, moral, etc. standards or opinions than you, you'll pretty soon find that you are missing too many exciting opportunities. The disagreement over climate change is a relatively mild one as far as such things are concerned. You do not need to change your position, and you can even try to convince your professor that you are right when you have a chat over a cup of coffee, but I wouldn't take the issue too far. After all, from his perspective, he offers a collaboration to a proponent of some crazy politicized hype, and he extends his hand first. So I would meet an open hand with an open hand in this case, but, as I said, nobody but you can make the choice.
Upvotes: 4 <issue_comment>username_3: In addition to the project being unrelated to climate change, he isn't someone who has any influence over policy except by voting. Like most culture-war issues, climate-change denialism isn't about what it claims to be about, which is why he doesn't care what the facts are and wouldn't be able to articulate what he *would* regard as convincing evidence of anthropogenic climate change.
This is a matter of identity: he imagines that he is an independent-minded skeptic who doesn't subscribe to the liberal agenda. There's no way to make a dent in that by refusing his funding and it will only reinforce propaganda about the intolerant left. So take his money and if you want try to reason with him if he brings the subject up. But this is almost certainly going to be a losing battle.
Upvotes: 2 <issue_comment>username_4: Maybe you should consider the possibility that his claim about sea levels is true.
He is a math professor who makes this claim and says he does not know about other indicators of climate change. Based on that, I would guess that he knows a lot more about sea levels than you, and a lot more than can be found from a simple google search and a single short article on the NOAA website. Maybe he has looked at the actual data.
If you want to find out, you could ask the professor, or look for proper scientific articles about whether the rise is accelerating, or get the data and see for yourself.
Upvotes: 2 <issue_comment>username_5: You say that the research is "unrelated to climate science" but I'd want to distinguish between two rather different scenarios:
1. The professor's professional research program is entirely unrelated to his climate change positions. For example, let's say this professor does Number Theory.
2. The professor's professional research program touches on issues related to climate change, but the particular project you're working on doesn't relate to this. For example, this professor works on fluid dynamics including applications to atmospheric science, but this project is about the mathematical side.
In case 1 I think you should completely ignore this person's non-professional non-expert opinions, and go ahead and collaborate (assuming those non-professional opinions are along the lines of being a flat earther or ghost hunting, and aren't something like working for ISIS or having a confederate flag tattoo).
In case 2 I think you should avoid this collaboration because this professor's poor professional judgement may reflect poorly on you if people see that you're collaborating with someone who also publishes junk.
Upvotes: 3
|
2019/05/16
| 534
| 2,446
|
<issue_start>username_0: I recently submitted an abstract to a conference and got accepted. However, the full paper to be included in the proceedings has been rejected by just one reviewer with a comment of "manuscript of limited scientific validity". I still can't understand the reason and if this is fair to be judged by just one reviewer. Should I ask to reexamine it?
The topic is regarding the development of a numerical code where I compare the results with the theoretically expected ones. Therefore I do not solve a problem but show that the numerical code works,<issue_comment>username_1: It appears that the reviewer considered your manuscript not to be scientifically valid, i.e. they think the scientific method was not correctly applied or something to that effect.
"Limited" here may be used as a way to soften the message, or it may mean that the reviewer found some parts of the content to be scientifically valid and other parts to be invalid.
Conferences often use a light version of peer review with only one reviewer. Whether or not it is fair may be debatable, but there are often several conferences to choose from in a subject matter.
You can certainly ask this paper to be reviewed by another reviewer. It will be up to the editors (or ultimately the conference organizers) to decide this.
Upvotes: 2 <issue_comment>username_2: You write:
>
> The topic is regarding the development of a numerical code where I compare the results with the theoretically expected ones. Therefore I do not solve a problem but show that the numerical code works,
>
>
>
The question then becomes from the perspective of a reviewer: *What is it about the paper that is novel?* If I understand the quote above correctly, then (i) the numerical method you use is not new, (ii) there already exists some theoretical analysis of the method, and (iii) the problem you are solving is also already known. So this sounds like you implemented a code that can solve this problem and produced numerical results. But this is not new in itself -- other people have likely done this before, and unless your paper is about the *technical details* of this implementation and how they differ from how other people have implemented these algorithms, there really *isn't* anything new in your paper.
As a consequence, it seems entirely reasonable to me that your paper was rejected: Just doing what others have done before is not science.
Upvotes: 2
|