2019/10/02
<issue_start>username_0: I am very torn about whether to take the plunge to do a Master's degree. I graduated a few years ago with a BSc in Maths, which I didn't particularly enjoy, only the statistics and programming courses. I then got a job at a big technology company in their R&D department doing research and applications of data science. I thoroughly enjoy the work I do but I am also interested in developing my skills further. I want to gain a deeper understanding of artificial intelligence and am also interested in spatial data science. I am not sure whether I should continue in this role or at a different company and learn on the job, or invest some time and money into full-time education. I am also not sure whether in the future with only an undergraduate degree I would be able to progress in my field. I am also concerned about age, whether it's wise to go back to university at 25.<issue_comment>username_1: Tell them at the start that there will be several requests and give a hint about how many, if possible. This lets them prepare a general response that can be sent to several places. But it is a bit better if the letter is tailored to the recipient, so... For each individual request, tell the letter writers what you think it might be good to emphasize, provided that you think it is important for that recipient. This lets the letter writer modify the basic form a bit to suit you better. Some professors, and maybe others, ask that you write the general draft yourself and send it to them. This lets you include the things you think are important in general. It is probably better that letters come from individuals who know you, rather than from offices like HR. Upvotes: 0 <issue_comment>username_2: Let your supervisors know well in advance how many places you're applying to and, crucially, how they will need to submit their reference. Some institutions will email your referees requesting their reference, others may have an online system they will need to log in to, etc. Try to compile all this information beforehand in a single email/document and give it to your supervisors. You could also include the application deadline and some information about each institution, if this will help them tailor their reference letter. Then, let them know once you've submitted the application, so they will be expecting the email requesting the reference. I would not leave this to HR; if the letters don't get sent it may be tricky to chase up. However, if your supervisor forgets to send one, I'm sure it would be very easy to drop them a reminder email or have a chat over lunch. Good luck with the applications! Upvotes: 0 <issue_comment>username_3: > > If I apply to 5 different grad programs, will my supervisors have to > email the same letter to 5 different institutions? > > > Unless they write customized letters for each institution, then **yes.** Or more likely, they'll have to email their letter to one institution and fill out four different forms on four different web sites. At least one of those web sites will require setting up a new account with a secure password (containing at least one upper case letter, one lower case letter, one digit, and one special character, but no spaces, hyphens, asterisks, or emoji) and two-factor authentication. Four of the institutions will email instructions for submitting the letters to the email address you provide; one will assume that you've already given the submission instructions to each of your references. 
One of the reference web sites will require a recent version of Java; another will silently fail if an ad-blocker is active. (I'm exaggerating, but only a little.) Some of your references *may* be able to delegate the submission process to someone in HR, or to a clerical assistant, but everyone needs to agree on the precise protocol well in advance, so that you can include the correct contact information in your applications. As username_1 suggests, be sure that the actual letters are written by people who know you personally, who have the technical expertise to judge your suitability for graduate study, not by a random person in HR or by a clerical assistant. I also agree with username_1 that letters that are customized to each institution are stronger, **provided the customization is truly substantive**. If there are clear significant differences in emphasis at different institutions—for example, one department that focuses on biology, versus another that focuses on manufacturing—then you should communicate those differences to your letter-writers. (That said, the vast majority of recommendation letters I read for successful graduate school and faculty applications are *not* customized, so customization may not be *necessary*.) Finally, **do not, under any circumstances, write the letter yourself.** You do not know how to write a strong recommendation letter. You certainly do not know how to write strong recommendation letters in three different voices, all different from your own, which are consistent with previous letters "written" in those voices. I think even *drafting* the letter yourself is dangerous. If one of your references needs more information about your background (that isn't already adequately described in your statement of purpose and your CV, which of course you've already shared with them) to write a strong letter, talk to them face to face. Upvotes: 4 [selected_answer]
2019/10/02
<issue_start>username_0: Let’s take a very hypothetical situation. Suppose that a scientific journal (let’s use Scientific Reports for the sake of the example) recently published a paper that seems to be completely off track. By completely, I mean that the phenomenon described in the paper can be completely explained as a technical bias that has been ignored by the authors. Some journals have an official means to address such concerns. For example, Nature has Matters Arising and Replies formats which can be used to engage in post-publication criticism. The thing is, Scientific Reports does not seem to have such a mechanism. I was wondering about the polite and efficient way to address such an issue.<issue_comment>username_1: If it is obvious to you that the paper has a strong bias, then either (i) you are wrong in your opinion, given that apparently all reviewers and editors missed the bias, or (ii) the bias will be apparent to everyone (except the reviewers and editors). In the first of these cases, there is nothing for you to do except do some introspection on how you could be so wrong in your assessment. Talking to colleagues about the issue might help. In the second of these cases, there is also nothing for you to do: The issue is obvious to every potential reader, so it's not necessary to point it out. (You *could*, of course, try to point it out, but why rock a boat that everyone is already seeing as sinking.) Upvotes: -1 <issue_comment>username_2: You have to live with the fact that academia is an imperfect system like any other, and that scientific knowledge evolves. Seeing the bigger picture, in the long term journals that don't allow or implement criticism/replies on their papers will likely vanish. Your (very hypothetical) premise that you have to find a way to correct wrong/bad content (in your opinion) in every journal is simply mistaken. Technically, this is the same answer as Wolfgang gave, but from a broader perspective. You can also see it like this: *Scientific Reports* doesn't believe that, in the long term, negative comments on its articles do more to prevent the publication of falsehoods than they do at other journals, or it sees them as just creating "editorial noise/extra work" (unrewarding extra work probably being the main reason not to offer this option). Do you remember Facebook offering a downvote button? Now it's gone. Or look at Stack Exchange: upvote +10, downvote -2 karma. When you have scientific branches like biomed/psychology in which over 70% of studies cannot be reproduced, it's questionable whether article replies have much value *overall and in the long term*, as this evolution is driven mainly by affirmation and reproduction by other groups rather than by scientific discussions in a journal; the system for creating true new scientific knowledge works differently there, less efficiently and less theoretically, than in branches like mathematics or physics, which is not surprising if you have a bit of interdisciplinary methodological knowledge. Upvotes: 3 [selected_answer]
2019/10/03
<issue_start>username_0: I'm curious what happens if a department runs out of space - e.g. they are expanding, but they don't have enough rooms for their faculty, postdocs, visitors, etc. It's presumably possible to squeeze a bit, put many PhD students into the same room for example, but this would be a short-term solution. The obvious thing to do seems to be to petition the dean to allocate more space to the department, but this seems very nontrivial. For example, say three departments currently occupy 3 floors each in a 9-floor building. If one department needs more room, the clearest way to do it is to give them another floor, but that would force the other two departments to contract and/or shift some of their personnel to a different building, which might negatively impact their performance. Similar things must surely have happened multiple times in the past. What usually happens?<issue_comment>username_1: There is no "usually". There isn't a worldwide academic rulebook that states what happens when a department runs out of space. This is extremely dependent on the university, the department, who runs each, the current power balance between the different departments and the university, the university's budget, the culture of the research field and the country... Academia isn't different from the rest of the world; how would you even start to answer a question as vague as "what happens when a company's department runs out of space"?! Upvotes: 3 <issue_comment>username_2: As someone whose department had this exact problem in the past, I can say that things tend to get a little hairy. One issue is that different groups usually have their own offices, and while most groups may have a lack of desk space, some have extra. The group leaders with extra space tend to be very territorial about it for various reasons (feelings of being taken advantage of, personal conflicts with other group leaders, anticipating new arrivals but not having anywhere to put them, etc.). The interim solution was to put up new desks in really awkward spaces (in corridors, under staircases) and to split groups across multiple offices whenever group leaders could come to an agreement. Eventually the department managed to secure additional offices in another building and squeezed a few groups out. I was part of one of the outcast groups, and as you pointed out it was in fact very inconvenient, as we had to use the old department building for experimental work. Over the course of two years or so, though, a few professors retired and their groups dissolved, freeing up some space in the department. Upvotes: 2
2019/10/03
<issue_start>username_0: I'm a lecturer at a small university. In a class I taught last year, one grad student disagreed with his grade and complained as high up in the university as possible. The result was that the grade stayed the same. I see this student on campus from time to time, and he usually ignores me, which is totally fine. However, this week I saw him on campus and as he walked by he said something insulting. Nothing awful but certainly disrespectful (along the lines of "you suck"). The student was also disrespectful during class. I'm thinking about discussing this with the chair of his department. Is this an overreaction? What would an appropriate response be (if any)?<issue_comment>username_1: I'd probably document it (make a dated note to yourself someplace noting what occurred and exactly what was said, while you still remember it) and then ignore it, unless it becomes a pattern. The purpose of documentation is to start a record in case the behavior turns into something more regular. The exception I would make is if you have any reason to believe this student is abusive/insulting to anyone vulnerable or deserving more protection. For example, if the commentary is racist, sexist, homophobic, etc. If it's more of a "(expletive) you" I'd let it go. Upvotes: 6 <issue_comment>username_2: Normally you can simply ignore such things. People like that mostly harm themselves, and other students are unlikely to support them in their bad behavior. I was once publicly, but anonymously, insulted and didn't respond. My students jumped in to explain to the miscreant why they were wrong. Problem solved. However, you have a right to respect in the classroom. If they behave incorrectly you can ask them to leave, especially if it is disruptive. I think that you would get wide support for ending disruptions, both with the other students and with administration. But, yes, as [username_1](https://academia.stackexchange.com/a/137961/75368) says, document it. Upvotes: 4 <issue_comment>username_3: Unless he is actively stalking you outside of classes, I would just ignore his comment on the campus. The problem will go away when he leaves college (either voluntarily or involuntarily), or when he grows up, whichever happens first. On the other hand, if his behaviour is unacceptable in your classes, you are in a position of authority and you need to maintain that authority for the benefit of all the students. If he does it again, I would give him a public verbal reprimand, and remind him (and the rest of the class, of course) of what powers you have - I assume that includes the right to remove him from the lecture room either temporarily or permanently. If he is bone-headed enough to escalate the situation, let nature take its course - evidently he didn't win his first battle, and the details of that will already be on the record somewhere. Upvotes: 3 <issue_comment>username_4: I'd definitely document it like Bryan said. Depending on how I imagine the comment, it could be kinda scary, in an unhinged way. If it wasn't that scary, though, allow me to project a bit onto this guy. I've been in classes before where I felt I didn't have a fair chance to do well, and been mad at the professor about it. If this guy has been harboring resentment and every time you pass each other, you awkwardly ignore each other, I could see that resentment growing, especially if the student is having a rough go of it in life (which, being a grad student, he probably is). In fact, I've had that experience too. 
There's a professor whose class I disliked my first year who figured out that a rather criticism-saturated student evaluation must have come from me. (I was *really* trying to be constructive.) He didn't take it well, and to this day still never acknowledges my existence when we see each other around, despite working in my research area, and even living on my street. I'm not really mad about the class anymore, but this still annoys me. *You know me*, man-- can't you at least give me the nod? My point is, you might be able to de-escalate things just by acknowledging him. If you make some friendly small talk and show him that you didn't take it personally, it's possible that he will cool off and not make things so tense. Of course, it's also possible that my reading is totally off and he will just be rude, so be prepared for that possibility. But it may be worth a shot before going to his chair. Upvotes: 4 [selected_answer]
2019/10/03
<issue_start>username_0: I applied for a faculty position at a state research school (not so research-intensive) in June. Since the position requires both a clinical degree and a research degree, their application pool is not likely to be large. Then, in mid-August, I emailed a search committee chair to update my CV with a new publication, and the person said he would be in touch soon... At first, I got optimistic when I read his reply, but decided not to think much about it. In late August, they re-posted the position on their website... It was the same posting, so my application materials were still there... I thought I would get at least a phone interview opportunity but haven't heard from them since I sent my updated CV. It's likely that they think I am not good enough for their position or do not meet their requirements...? Or an internal candidate? Or a preferred candidate? I really want to get this position...and I don't know what to do at this point. My spouse is also in academia, and this might be my only chance to live with him... Is there a chance that I hear from them this month or November? Or should I just stop hoping and try to find jobs out of state...<issue_comment>username_1: > > I really want to get this position...and I don't know what to do at this point. > > > There is not much you *can* do at this point. The search committee has your application materials, and will get back to you if the position is relevant. > > My spouse is also in academia, and this might be my only chance to live with him... > > > This might be in your favor. Did your spouse recently get hired? Could they try and negotiate for a spousal hire for you? It is possible that a position could be negotiated for you if your spouse's faculty wants them to stay. It may not be the most ideal position for you, but it will be something. After all, it is more than reasonable that your spouse leaves if you aren't able to get your career started at your current location. If they want to avoid that, they need to help you figure it out. > > Is there a chance that I hear from them this month or November? Or should I just stop hoping and try to find jobs out of state... > > > You might never hear from them, or you might hear from them tomorrow. We can't answer that. However, you should most definitely keep looking for alternatives, and assume you have nothing from them until they send an offer letter for you to sign. If I were in your position I'd look for positions wherever I could. I would also explore how open my spouse is to relocating if need be. Good luck! Upvotes: 2 <issue_comment>username_2: At a minimum you need to ask whether your application is still active. Don't assume that it is, though it may be. But reposting the position may mean a lot of things, including the possibility that none of the current application pool is desirable and they want to start over. If your application is no longer active, you may just need to submit it again. There may be other reasons for the reposting, including a simple mistake or an administrative requirement. But if you don't ask, you may never know. Upvotes: 0
2019/10/03
<issue_start>username_0: Because of Brexit issues with my large grant, I'm considering a move from a permanent research+teaching post at a university to one of Europe's large research institutions. I'm not particularly familiar with these entities and I'm finding it difficult to gauge the long-term implications of this move. Can anyone tell me: What are the key differences between basing research at a university and basing research at a (large, European) research institution? What might the long-term effects on one's career be due to making the switch, e.g. from not teaching or supervising PhDs? For the record, I'm not in STEM; I'm a social scientist who does a lot of humanities/arts work.<issue_comment>username_1: > > What are the key differences between...research at a university and...research at a...research institution? > > > At a university, researchers may have teaching duties (as in the OP's case), whereas researchers may not have such duties at a research institute (albeit, institutes may vary); hence, **a research institution (without teaching) provides a more focused research environment, compared with a university (with teaching)**. > > What might the long-term effects on one's career be due to making the switch, e.g. from not teaching or supervising PhDs? > > > As above, not teaching facilitates a research focus. (Moving to a teaching-focused institute later might be more difficult. Moving to an institute with teaching, less so, assuming they focus on your research results rather than your teaching.) As discussed in comments, **you can supervise PhD students**. Upvotes: -1 <issue_comment>username_2: I don't think that a generic answer would be valid for every continental European country, as university careers are already quite different from country to country. As you are mentioning CSIC, I am going to focus on Spain. **Short answer:** Making your career at CSIC or any other Spanish research institution means that you have the opportunity of focusing on your research in a high-quality institution. It also means that the possibility of changing to a university (again, in Spain) decreases the longer you stay there, virtually becoming zero after some five years. **Explanation:** Spanish academia requires that you go through a system of pre-qualification (*acreditación*) for any of the four current faculty positions. Other countries such as France (*Qualification*) and Italy also have a similar system, but I believe that the particularity of the Spanish case is that you need to fulfil a lot of different requirements in teaching, research, knowledge transfer, etc. You can check the requirements in this [link](http://www.aneca.es/Programas-de-evaluacion/Evaluacion-de-profesorado/ACADEMIA) (sorry, no English version). Being able to fulfil these requirements is relatively straightforward for the first faculty position (*profesor ayudante doctor*) but it becomes increasingly difficult for the next ones **if you are not employed at a university**. It has often been said that a Nobel laureate not previously employed at a university would not be able to become tenured in Spain regardless of the quality of their research. As you mention being currently in a research + teaching position in the UK, your previous teaching + supervising experience **might** allow you to become qualified (*acreditado*) for faculty positions. But both Spanish universities and the body that evaluates research (ANECA) are extremely picky with the certificates that you need to bring from your home university. 
It is possible that some are deemed not valid because they are not *signed and stamped* by the right person, etc. Upvotes: 3 [selected_answer]
2019/10/03
<issue_start>username_0: My question is closely related to this one: [is-it-appropriate-to-acknowledge-stackexchange-in-my-msc-thesis](https://academia.stackexchange.com/questions/23231/is-it-appropriate-to-acknowledge-stackexchange-in-my-msc-thesis); however, in my case, <NAME> is the **pseudonymous person (or persons)** who developed Bitcoin. I believe it would be great to acknowledge Satoshi's genius and the importance of his invention to my current research, but I am not sure whether it is appropriate (in an academic context) to thank the pseudonymous person(s).<issue_comment>username_1: I think it would be appropriate to thank him in a thesis, but not in a paper. ============================================================================= There is a big cultural difference in how acknowledgments are used in theses and papers. In the common use (at least in my field, but I think this is general), acknowledgments in a paper are reserved for: * people that actually helped on the topic of the paper, via direct personal interaction, for instance "We thank X for useful discussions" or "We thank Y for providing us the reference [5] and an alternative proof of Theorem 5" * funding agencies * host institutions for a visit, such as "This research was conducted when the first author was on a sabbatical leave at the University of W; she thanks UW for its support". Other acknowledgments (such as general help, emotional support, parents and partners, etc.) would definitely look out of place. You can put these in your thesis, but in a paper you'd better stick to business and keep it as brief as possible. The acknowledgments section of a paper is not intended as a soapbox (unlike that of a thesis). There are exceptions, but they will raise eyebrows. Upvotes: 5 [selected_answer]<issue_comment>username_2: Why don't you just cite [their original bitcoin paper](https://bitcoin.org/bitcoin.pdf) in your introduction? I don't know the history of your field, but if that was really a fundamental result it seems completely reasonable to cite it. Whether the author is anonymous or not doesn't change that. According to Google Scholar the paper has over 7000 citations, so you would have lots of company. In general I would say a citation that acknowledges the importance of the work is a much stronger form of thanks than a mention in the acknowledgements section. Upvotes: 5 <issue_comment>username_3: Nobody acknowledges a scientist who leads, or whose work led to, a field if they had no direct interaction with the paper. So no. I would find it inappropriate even in a thesis. You have other sections to highlight the importance and merits of someone, and you do so in a scientific and pertinent way, by referencing his/her work, for instance. Upvotes: 2
2019/10/03
<issue_start>username_0: I have enrolled as a PhD student at one of the UK's universities. I have received an email containing a list of different professors and secretaries in the department. I am a little worried since I don't know how to greet them and I don't know what to say. Could anyone please help me?<issue_comment>username_1: "Good morning Dr Jones" or "Good afternoon Professor Smith" usually works fine. And, based on comments, Mr/Mrs Smith or the first names as applicable (and that can be the faculty/professors as well). However, making the effort to talk to those who look after the infrastructure at any level does pay dividends... Even when equipment is being got rid of by another department, you can get to hear of it first and "appropriate" it... :) Upvotes: 1 <issue_comment>username_2: I disagree slightly with SolarMike's answer. The academic hierarchy in the UK is very informal. It is fine to use first names for everyone, be they professor, admin staff or cleaner. A simple "Hi Bob" is fine in emails or in person. I would argue that it would actually be a bit strange to address someone by their title and last name. If you did so, I expect they would quickly tell you it's fine to address them by first name. Upvotes: 3 [selected_answer]
2019/10/03
<issue_start>username_0: I'm a college student attending a very prestigious university. I did independent research this summer, and I recently submitted my conference paper to an international conference. The acceptance letter was very fast (2-3 days), and after doing much research, I realized that the organization is on Beall's list. The DOI number of the conference paper already came out and it seems that my paper is going to be published in a conference proceeding (it's going to be on Google Scholar as well). I don't want to hurt my career by going to a predatory conference, but at the same time, I think it's going to be a valuable experience for me to attend a non-undergraduate conference, as it is the first time for me. Does anyone have any suggestions about what I should do?<issue_comment>username_1: Don't go. You write that it's a valuable experience to attend a non-undergraduate conference, and it would indeed be the case, but you won't be attending anything like a non-undergraduate conference. [Example story](https://www.technologynetworks.com/tn/articles/inside-a-fake-conference-a-journey-into-predatory-science-321619), and that actually reads like one of the "better" predatory conferences, since I've read about other predatory conferences where keynote speakers don't show up and attendees talk about completely unrelated stuff (e.g. a number theory talk at a food science conference). This kind of event won't help your CV either, because if someone checks the conference out they'll discover that there's no quality control, meaning the quality of whatever you presented is now suspect. Since you gain nothing by going, is it still worth going? Possibly, but any reason for going would be much less tied to the conference than to other things, e.g. perhaps you know someone who lives near the conference venue and have been looking for a reason to visit. If you don't have these reasons, you might as well save on the cost of travel. Upvotes: 4 [selected_answer]<issue_comment>username_2: The first step is to get a professor in the area of your research at your university to help you. This does sound like a scam conference. It sounds as if you found the conference yourself, without guidance. The tougher issue is the paper withdrawal. It probably is worth some effort to do this. If what you sent in is weak, having it out there might embarrass you later. If what you sent was strong, publishing in this venue might block you from publishing in a better outlet or presenting at a good conference. At most legitimate conferences, writing to say you cannot attend (didn't your cousin just move her wedding day?) will stop the publication. If you have not sent them any money yet, perhaps withholding money will stop them publishing. This is not something older professors dealt with during their undergrad years. Browse this site and you will see that you are not the only one caught in this sort of scam. I like to think graduate admissions committees are getting used to seeing situations like this. Upvotes: 2
2019/10/03
<issue_start>username_0: The title has my primary question. My research professor, whom I have known since the beginning of the calendar year, has quite consistently canceled on meetings last minute, most often due to personal reasons or last-minute department meetings. Despite the difficulties this causes with my own and my research peers' schedules, it's somewhat understandable. But there's rarely any follow-up for makeup meetings; if there is, it never moves past us giving our research professor our alternative meeting availabilities. As an undergraduate, I wonder if this is simply to be expected at some level when doing research with a senior professor - is it? I enjoy the subject matter of the research quite a bit, so I would prefer not to leave the team. Any advice is welcome on how to navigate this situation, if it is to be navigated at all, or instead accepted as is.<issue_comment>username_1: Some level of disruption is normal. The more intense the environment, the more disruption is likely to occur. As you note, much of this is unavoidable. Things happen in our personal lives that must be attended to (illness of a child, for example) and last-minute meetings are outside the advisor's control. But, you don't need to just go idle when there is disruption. If you have a good working plan with team members then you can probably carry on when a meeting with the advisor gets cancelled at the last minute. Have a five-minute "conference" on how you can make immediate progress. Then do that. Or... Make a list of questions that need to be answered before you can continue. Things like that. When you do get to talk to them, ask for advice they might have on how to continue effectively when meetings get cancelled. You ask if this is normal. Some of it, yes. Some of it is also exacerbated by the fact that it is a senior professor who has lots of time-consuming and constraining demands. I hope you aren't their lowest priority, of course. But learning can occur in any case if you just figure out how to keep the team moving forward. Upvotes: 6 [selected_answer]<issue_comment>username_2: No, it's not normal. You deserve respect like everyone else. Canceling on you repeatedly at the last minute is disrespectful. "Personal reasons" can happen once in a while, sure. (Although at some point you have to wonder what's real and what's an excuse.) However, department meetings are never planned at the last minute. We are all extremely busy, and if you plan something last minute, the answer you will get from 50%+ of potential participants is "I can't, I teach/have another meeting planned/am not even in the country that day". Most of my colleagues and I try to adhere very strictly to already-planned meetings, and those who don't very quickly acquire a reputation for being unreliable. If a professor has scheduled a meeting with you and decides to go to a department meeting instead at the last minute, it's insulting. There are at least two possible reasons for the professor's behavior. 1. The first is that he has abysmal organizational skills. This can happen. Talking about it can sometimes help, but you have to be careful about how you phrase this. It's possible that the professor does not even realize that canceling last minute is terrible for your own schedule; some people are just self-centered and have a lot of trouble putting themselves in another's shoes. 2. 
The second is that you are so low on the professor's priorities, and the professor values you so little, that he doesn't care about inflicting this kind of behavior on you. This is not the kind of person you want to work with. Being busy and senior is not an excuse for being an asshole. But you are now reaching the point where you have to choose your battles. Do you want to complain to the professor and risk retaliation? Do you want to switch advisors and also possibly face retaliation or bad will, or work on something that is less interesting to you? Etc. There is a huge power imbalance between an undergrad student and a professor, so unfortunately I have to advise you to tread carefully. You can adopt palliative tactics such as the ones described in username_1's answer. But overall, no, it's not acceptable for someone to do this to you, and in an ideal world you would not put up with it. But we don't live in an ideal world. Academia has a bigwig personality complex, and many (not the majority fortunately) believe that they live on a superior plane of existence, wayyy above commoners, students, and underlings. Upvotes: 5 <issue_comment>username_3: No, it’s not normal, and your professor’s behavior is disrespectful and unprofessional. The fact that you are an undergraduate is not an excuse and does not justify treating you this way. I would suggest looking for a different research project to work on under another professor who treats you with the respect due to a junior colleague and a fellow human being. Upvotes: 3 <issue_comment>username_4: You should address your concerns to the appropriate representative body and ask them to raise them with the faculty. **Important** You need to establish clearly if this pattern of behavior is typical or atypical for people with similar responsibilities in the institute. Ask other groups for their experience. If at all possible try to get some stats together to support your case. If there is no (student ?) representation body that can deal with this then, *as a group* you need to formally contact the head of the department and assert the need for this issue to be addressed without delay. It's for them to work out a strategy to do that. The initial suggestion will (at a guess) be that you "talk to the professor". This of course is just a way to avoid responsibility. You have no authority to negotiate with this individual and they seem not to be able, for whatever reasons, to handle their workload. It's the department which needs to shoulder this responsibility. If the department refuses to act then you may need (again ideally via a representation group) to address this matter to a higher authority within the institute. No, it's not reasonable for this to *consistently* happen. The purpose of that institute is to teach and research. Mentoring is not an optional extra, it's a core function. The issues here are simple : * Meetings cancelled at last minute because other last minute things crop up This (if true) is departmental chaos in operation and while you might expect the occasional meeting to be cancelled for these reasons, they're not running a crisis management center for the police, they're running (or not running well) an institution where student requirements *ought* to be an integral part of the working day. There should be time allotted to this and that allocation should be respected. Exceptions happen, consistent issues should not. 
* Meetings cancelled for personal reasons Well this happens, but if it's happening a lot then, again, it's a departmental issue to properly cover the work. It's quite unfair to students (and others) to have someone whose personal life consistently interferes with core working responsibilities. Again it's a case of occasional being normal and a high frequency of these issues being a real problem that needs a real response from the department. * But there's rarely any follow-up for makeup meetings; if there is, it never moves past us giving our research professor our alternative meeting availabilities. This is another sign of either departmental chaos or individual chaos. There's a clear issue here that needs to be addressed. When you were taken on it was to mentor your development. If this is consistently not being done properly then it's a departmental level failure (at least). **Very, very important.** Lastly in fairness to the individual you need to note that they may be severely overloaded by an institution that is happy to exploit them (and as a result, your group). Do *not* seek to assign blame in any contacts with the institute - this sets up a confrontational situation they will feel (right or wrong) they should defend to the hilt. This is why a representational body, which likely has people more experienced in resolving these issues, is so important. You are *not* going to war, you are seeking a solution or solutions. That will almost certainly involve compromise. Upvotes: 2
2019/10/03
<issue_start>username_0: You worked on your manuscript and its language for a year and a half, carefully choosing every word; what is there left to do for the journal's copyeditors after acceptance? What is the point of copyediting your manuscript?<issue_comment>username_1: I worked for a year and a half on a paper and I dare say it was in good shape. Nevertheless the copy-editor found two embarrassing mistakes. In this case the copy-editor constitutes just another barrier for mistakes which tend to be invisible to the author after some time. ~~From a technical perspective one also has to consider the workflow of the respective journal. Some publishers use LaTeX to typeset their articles, some use other tools. In most cases (in my field), Microsoft Word documents and LaTeX are accepted. As a consequence, converting text and graphics may be necessary. In particular, assuring the graphics quality is quite an issue (I would guess).~~ Finally, as <NAME> already stated: There are submitted and accepted manuscripts with quite some quality issues ranging from typos to inconsistencies in typesetting. If your submitted manuscripts are in better shape, good job :-) Upvotes: 3 <issue_comment>username_2: The writer of a document is often the worst person to verify its wording. There are two reasons for this. Often, when we write something we have a certain mind set. But we aren't perfect, so we occasionally make mistakes. Then, when we revisit it, we make the same mistake again, "seeing" on the page what we thought we wrote, rather than what we actually wrote. Second, and more important, the author of a manuscript has a lot of background knowledge that they bring to bear on the subject that isn't actually written in the document. The reader, on the other hand, may not share this background, and usually won't to the same degree. An informed copyeditor can correct for both of these situations. I've learned that I can't really reliably proof my own writing and make both of the above "errors". But copy editors don't just make changes without the advice of the original author. They provide a new version and the author gets to approve, reject, or improve the "corrections". It is a very valuable service. But even just in the use of language. A good editor can improve the presentation of ideas by improving the structure of complex sentences and suggesting where the statements made could be confusing to a reader. It does require some subject level knowledge to do it well, of course. Upvotes: 3 <issue_comment>username_3: Here're some things a copyeditor can do to the text of this page: > > You worked on your manuscript and its language for a year and a half, carefully choosing every word; what is there left to do for the journal's copyeditors after acceptance? *Why is their work productive?* > > > This phrase is awkward since almost any work is "productive". *Why is their work productive -> What is the point of having copyeditors?* > > Often, when we write something we have a certain *mind set*. But we aren't perfect, so we occasionally make mistakes. Then, when we revisit it, we make the same mistake again, "seeing" on the page what we thought we wrote, rather than what we actually wrote. > > > *mind set -> mindset* > > I've learned that I can't really reliably proof my own writing and make both of the above "errors". > > > This sentence is ambiguous; it makes it sound like the author can't reliably make both of the above errors. *... 
I can't really reliably proof my own writing; I make both of the above "errors".* > > In particular, assuring the graphics quality is quite an issue (I would guess). > > > *...ensuring the quality of the graphics ...* I think I can honestly say I've never seen a journal article which didn't need at least some changes during copyediting. Upvotes: 0
2019/10/03
<issue_start>username_0: I am writing a thesis about a data analysis I'm implementing and am currently describing my data preparation where I talk about what needed to be done prior to the analysis (formatting the data, removing noisy data). Now, I was wondering if I should also include steps that only helped me to work more comfortably like creating multiple different sets of data with different variables? This has no direct impact on the analysis but I'm not sure if it still needs to be included.
2019/10/03
<issue_start>username_0: I was on the evaluation committee for the adjunct lecturers this year, and it was a complete nightmare. There were over thirty candidates and many of them didn't have an account on Google Scholar, which made tracking their impact really hard, due to name collisions etc. My question is: would it be a reasonable requirement to force the candidates to create a Google Scholar account, so that we can easily track their publication/citation record and impact? My hesitation is that it would require them to give away private information to a third-party company, and some people wouldn't like to be forced to do it, or might even raise legal issues.<issue_comment>username_1: To some extent this depends on the field and where in the process you are (although I have to say I find it funny that 30 applications is considered severe; in math there are positions which get literally 300 or 400 applications). Here are some relevant considerations: Are you in a field which usually uses Google Scholar? Math, for example, hardly uses it at all (Edit: See comments by Dmitry here - I may be seriously wrong about how much it is used in math), but it seems to be common in some other fields. If one is in one of the fields where it is common, that may make more sense. Is this a position where research is going to matter? If you are hiring someone as an adjunct as you suggest, this doesn't seem like a research-focused position, so why should it matter? What stage in one's selection process is one in? If, for example, one first selects out some of the candidates and asks only the remaining pool to create profiles, that looks a lot more reasonable. You can probably eliminate a fair number of candidates simply because they do not have strong CVs (and frankly it is likely, if you are looking for a research position in a field that often uses Google Scholar, that those people will often be the ones without Google Scholar profiles). Legal issues are complicated, and we can't really give legal advice here, but there are some potential issues that can be highlighted. The most obvious one is accessibility: is Google Scholar easily accessible for people with disabilities? If it isn't, this would be a potential problem. Are you at a state school or a private school? If a state school, there are a lot more rules about hiring generally that need to be followed, and asking for something like this after the job has already been advertised with instructions on what to do will be a problem in some states. Note that in some respects for some of these issues this may also be the sort of thing where it is better to ask for forgiveness than permission: if you ask a university legal counsel if you can do anything that seems remotely questionable, they'll frequently just say "no." If you are in Europe some of the legal issues may also be more severe, as they may interact with European data privacy rules, and that's a serious enough issue right now that if one is concerned about it, getting competent legal counsel may make sense, but you may have someone even in IT who can walk you through any relevant issues at an informal level. Now for my personal opinion: For what it is worth, if I were applying for a position and they asked me to make a Google Scholar profile, since we don't generally use them in math, I'd consider that to be a serious red flag about what the committee knew or how much the school was micromanaging hiring decisions. Unless it was a high-profile school, at a highly desirable position, I'd almost certainly say no. 
And if I were to see it while applying for a primarily teaching position, my reaction would be extremely negative. I have seen positions that ask one to highlight which of one's research papers one is most proud of, and it might be substantially more useful than trying to use some potentially gameable metric like this. Upvotes: 2 <issue_comment>username_2: I do not think it is reasonable to ask candidates to create a profile on any third-party platform. Particularly on a Google service, taking into account that some proportion of web users have concerns about this company (as well as other large corporate data processing companies), and do not want to get on their radar if possible. Typically, it is sufficient to make it clear to the candidates what the selection criteria for the post are and let them find their preferred way of demonstrating that they meet those criteria. For example, if your criterion is the number of citations, you can suggest Google Scholar as acceptable evidence, along with WoS and others, ultimately allowing your candidates to choose the service they prefer. If you want to check impact, you need to explain what you mean by this (the definitions vary widely across different fields and countries). Note that impact typically is not measured by academic citations, but rather by the adoption of research in non-academic environments, such as industry, government policies, patents, etc. In the UK, impact is a key performance indicator in the Research Excellence Framework. It takes universities a few months to prepare and evidence strong impact cases. I am sometimes puzzled when I see an entry-level faculty post requiring candidates to provide a fully justified impact statement. Maybe it is possible in some disciplines, but in my area (numerical mathematics) I find it difficult to trace, demonstrate and fully evidence the non-academic impact. Upvotes: 4 <issue_comment>username_3: As much as I like Google Scholar, requiring candidates to create a Google Scholar profile specifically seems inappropriate. You are effectively saying you won't hire people that don't use Google. What you could do is make it an optional part of the application, or you could ask candidates to submit something more vague like a "citation report" and suggest that a printout of their Google Scholar profile is sufficient for this. Upvotes: 7 [selected_answer]<issue_comment>username_4: Have you considered [ORCID (Open Researcher and Contributor IDentifier)?](https://support.orcid.org/hc/en-us/articles/360006973993-What-is-ORCID-) I have the same concerns about intellectual property protection issues around Google, unfortunately. Google Scholar is also quite discipline-specific (as others have said here) and is banned in some countries (China, etc.). So to endorse a product that exposes a scholar to legal ramifications in their country, plus the risk of commercialization of their data, is highly problematic. ORCID on the other hand is "**an international, interdisciplinary, open, non-proprietary, and not-for-profit organization**". ORCID aims to include every discipline, and many publishers and their journals are now mandating ORCID sign-in for their journal logins. Most people do not know about ORCID, so you can offer them information. Also, make sure they know to make their ORCID profile public. Unfortunately, if your applicants refuse to use ORCID, your choice would be limited. 
I am not sure whether you have a friendly and supportive librarian who can confirm their publication record and piece together their impact before progressing them through the selection process? Upvotes: 5 <issue_comment>username_5: It is common, and reasonable, for employers to require job candidates and employees coming up for review to provide the employer with any information it needs to evaluate the candidates/employees. So certainly you can ask them to prepare readable, well-formatted publication lists, citation information, and anything else that lets you evaluate their impact and productivity. I don't see how the lecturers could reasonably complain if asked to provide such information in a format of your choice. However, your concern about Google Scholar is justified. Requiring people to open Google accounts as a condition of employment is, at the very least, coercive and unprofessional, and will reflect badly on you. It seems not unlike asking employees to use Gmail email addresses for work because you are too cheap or lazy to figure out a better solution. Similarly, if the evaluation process was a "nightmare", to me it suggests that the evaluation committee did not give sufficient forethought to asking the lecturers to provide the relevant information. The problem is not with the lecturers not using the tool you wish they used, but with your department not designing the evaluation process thoughtfully enough. Upvotes: 4 <issue_comment>username_6: Assuming your university has a subscription, Scopus is pretty good at giving you relatively comprehensive and up-to-date author publication and citation profiles. It's generally good at dealing with name conflicts. A few scenarios where it might fail: Academics who have changed names (e.g., by marriage). Academics with particularly common names who have changed institutional affiliation. Of course, academics can notify Scopus of these changes and merge profile data, but it can't be counted on. And as @Flyto notes, Scopus has fairly good journal coverage, but it may miss other important output (e.g., conference publications, some books and book chapters), which can be particularly important in some fields. I guess it all depends on how much you want to rely on it versus using it as an additional source of information. Upvotes: 3 <issue_comment>username_7: > > My question is: would it be a reasonable requirement to force the candidates to create a Google Scholar account > > > You should recommend that candidates provide a Google Scholar Profile (not account). In practice, hiring committees are going to go look for a profile. You might as well let candidates know that is going to happen. You cannot force job applicants to do anything. They can always just decide not to apply. > > so that we can easily track their publication/citation record and impact? > > > Google Scholar is good for tracking publications. Beware that some people allow Google to add publications to their profile, and these are often incorrectly added. Do not use it to judge impact, and beware that no citation-counting system will be totally reliable. Upvotes: 2 <issue_comment>username_8: Yes, it is completely within reason to ask candidates to have a profile somewhere. What is not appropriate is to demand they use Google specifically. Besides Scholar, there are other alternatives like Scopus, ORCID (mentioned in other answers) or even creating profiles on sites like scholarly, researchgate, or academia.edu. 
You should leave it to them to choose whichever they like, but perhaps suggest Google Scholar, as that is what you'd be using to assess them. Don't force them to use a specific third party, as that could even be illegal in some countries, where it could be considered 'coercion'. A little story. During my MBA, one teacher made us use Facebook to create a group to drop the homework in and such, so I had to create a profile there for that, since I had avoided getting one, and after that class I have only used it a couple of times for subscriptions. Still, I know some of my data is there. Had the professor given us the option, we would have used the Google environment for the class and it would have worked. So yeah, a professor did ask us directly to use a third party. On another note, since you mention that it's for hiring, then yes, it is still appropriate to ask for profiles, because many companies want to see social media profiles before hiring. Upvotes: 2 <issue_comment>username_9: > > Is it reasonable to ask candidates to create a profile on Google Scholar? > > > Absolutely not. Google is an atrocious entity involved in mass commercial and governmental surveillance, political censorship, etc. You really must not require people to use Google's services, legitimizing these practices. Now, to be practical - I'm not saying that you should demand the opposite. I mean, I use Google Scholar from time to time (though I wish I could avoid it completely). But you should definitely make an effort to stay away from the Google "octopus" of services and definitely not feed it more victims. I suggest you not even *ask* people to have a Google profile (Google Scholar or whatever other account). Upvotes: 1
2019/10/04
1,725
6,862
<issue_start>username_0: I am preparing a joint paper with my mentor. I am a math postdoc. How is the corresponding author generally chosen? What are the benefits of being the corresponding author? Is it better for a post-doc's career if they are the corresponding author?<issue_comment>username_1: My experience as a pure mathematician in the US is that there are essentially no benefits to being the corresponding author. When I was a postdoc and in grad school I was often the corresponding author, but I get the feeling it was mostly because I was often the most junior of the authors and the others didn't want to waste time navigating editorial websites, receiving emails from the journal, mailing people tex files, etc. I would say that if someone is a pure mathematician in the US, the number of times they've been a corresponding author for a paper will have absolutely no impact on their career. (I should add that I believe that in other countries the situation may be markedly different.) Upvotes: 6 [selected_answer]<issue_comment>username_2: The only "benefit" that I can think of of being the corresponding author - if you can call it a benefit - is that it prevents underhanded actions by coauthors. Sometimes, people from different groups and motivations, who are possibly at odds with each other, end up co-authoring. Or - an estranged pair of advisor-advisee. Some of the authors might suspect the others of being willing to compromise the paper somehow (obviously I'm being vague since this is inspecific.) The corresponding author, however, has control over communications with the venue, so s/he can exercise effective veto power over steps s/he disapproves of. Upvotes: 2 <issue_comment>username_3: As <NAME> points out, the situation can be very different in other countries. In many places in Asia, for example, the significance of a publication on one's CV goes like this:      publication as first author ≥ publication as corresponding author > publication as any other author. The assumption here is that the first author contributed the most, while the corresponding author is the PI. Obviously, this should not apply to most mathematical publications where the authors are listed alphabetically, but university guidelines often do not take this into consideration... Some countries will even put a quantitative value to this statement, explicitly weighting your publications based on the authorship order (in addition to other factors such as the impact factor). Here's an excerpt from Taipei Medical University's promotion standards [(PDF)](http://tmu-hr.tmu.edu.tw/uploads/archive_file_multiple/file/5d0770b64f4d123bfe0022b1/_%E8%8B%B1_%E8%87%BA%E5%8C%97%E9%86%AB%E5%AD%B8%E5%A4%A7%E5%AD%B8%E6%95%99%E5%B8%AB%E5%8D%87%E7%AD%89%E8%A8%88%E5%88%86%E6%A8%99%E6%BA%96%E6%96%BD%E8%A1%8C%E8%A6%81%E9%BB%9E.pdf): ``` Rank of author weighted points (A) ---------------------------------------------------------------- first author or corresponding author 5.0 2nd author 3.0 3rd author 1.0 4th or lower rank author 0.5 ``` In some cases things get even weirder — here's an excerpt from the regulations of Seoul National University [(PDF)](http://rule.snu.ac.kr/internationalRules/15.pdf): > > Score of each research publication stated in Paragraph 1 of this article is as follows: > > 1. Single author: 100 points > > 2. Two authors: 70 points > > 3. Three authors: 50 points > > 4. 
Four authors or more: 30 points > > If applicant, however, is the first author or the corresponding author in a publication with three or more authors, he/she is entitled to 70 points. > > > Curiously, sometimes a corresponding author is worth *more* than first author. The Academic Ranking of World Universities, compiled by an agency in China and arguably one of the most influential international university rankings, uses [the following scaling](http://www.shanghairanking.com/ARWU-Methodology-2018.html): > > To distinguish the order of author affiliation, a weight of 100% is assigned for corresponding author affiliation, 50% for first author affiliation (second author affiliation if the first author affiliation is the same as corresponding author affiliation), 25% for the next author affiliation, and 10% for other author affiliations. > > > As you can see, there are often advantages to being corresponding author in such cases. Universities can offer additional incentives (e.g. monetary) for being first or corresponding author on high-impact publications, which often leads to more arguments and occurrences such as six "joint first" authors or, similarly, multiple "co-corresponding" authors on a paper. Anecdotally, I would say that in non-Asian countries, such weighting is also frequently done but on an implicit basis, and is significantly more field-dependent. (For instance, the Korean university regulations do not distinguish between different fields, and I am unsure if different departments can have different rules — if not, I encourage every mathematician named Aaronson to apply for faculty positions in South Korea immediately.) Upvotes: 2 <issue_comment>username_4: To answer your questions in order, and based purely on my own experiences (in pure math in the UK). 1. Some reasons for choosing a particular person as corresponding author might be: * they are more experienced in dealing with journals * they are less experienced and so need the experience * they happen to be less busy at the time of submission * they are more junior and less busy in general * they are more senior and so their affiliation is unlikely to change (and they're not likely to leave academia) * they volunteered * it's their turn * they are the most organised * they know how to format the paper to the journal's requirements 2. The main "benefit" is being in control of the process. The non-benefit is that it is extra work - even if it doesn't make much difference to who actually implements any changes the journal asks for, you probably still end up emailing the other authors to say what needs done, writing the response to reviewers, checking proofs, and various other small things. 3. No. The reasons for being corresponding author or not are very varied, and either largely irrelevant or things that are more obvious from other factors anyway, so it's not useful information. IMO it would look very weird to specify on a CV which papers you were a corresponding author on, and I doubt any committee would bother to find this out. It would perhaps be preferable to have been corresponding author at least once, but in pure math people will generally have some single-author publications and so this isn't an issue. Upvotes: 2
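As a purely illustrative aside on the arithmetic behind the weighting schemes quoted earlier in this thread: below is a minimal Python sketch of the Taipei Medical University author-rank table, applied to a hypothetical publication list. Everything here (the function name, the example data) is invented for illustration and assumes the quoted table is the entire rule; real policies typically layer on further factors, such as the impact-factor adjustments mentioned above.

```python
# Illustrative only: weighted points (A) per the author-rank table quoted above.
def author_rank_points(rank, is_corresponding=False):
    """Return the weighted points for one paper, given the author's rank."""
    if is_corresponding or rank == 1:
        return 5.0   # first author or corresponding author
    if rank == 2:
        return 3.0
    if rank == 3:
        return 1.0
    return 0.5       # 4th or lower rank author

# Hypothetical record: (author rank, corresponding author?)
papers = [(1, False), (3, False), (5, True)]
print(sum(author_rank_points(r, c) for r, c in papers))  # 5.0 + 1.0 + 5.0 = 11.0
```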
2019/10/04
1,638
6,316
<issue_start>username_0: Last year, I asked my physics professor a question that he did not know the answer to. This year, while doing a research paper, I also figured out the answer to the question I asked him. I want to send an email to my professor explaining the answer I found simply because he's a curious guy. Typically, if I asked something that the professor didn't know, I'd email them a few days later saying: > > Hi Dr. Professor, after some reading, I figured out the answer to why so-and-so happens. Here's what I learned. > > > But since it's been so long, he definitely doesn't remember what I asked him. So I'd like to remind him what my question was. However, writing: > > "Hi Dr. Professor, last year I asked you a question that you did not know the answer to, and this year, I figured it out. Here's what the question was, and here's what I learned" > > > sounds a bit insulting to me, because I'm writing that he didn't know the answer. What is a good way to share an answer with a professor without being "insulting"? Am I just overthinking this?<issue_comment>username_1: I think there is an easy way to phrase this to make it tactful, after all professors are often curious about learning new things too! > > "Hello Professor X, > I hope you've been well! Last year I asked you a question and we couldn't figure it out at the time but I've since come across an interesting answer and just wanted to pass along the info just in case you're curious. We were discussing Y ... " > > > Upvotes: 8 [selected_answer]<issue_comment>username_2: The polite way to do this in academia is to pose it as a question. This way, you show your humility. You acknowledge that the solution or answer you found might be flawed or incomplete. Also, you open yourself up to collaboration. Upvotes: 4 <issue_comment>username_3: As an alternative to Juan's answer (roughly in the same spirit), I suggest that you can phrase the email regarding the issue itself--since presumably the professor would also be interested in knowing the answer to the problem. Something like the following should suffice. > > Hi Professor XY, recently I learnt about [...], which seemed really interesting to me because [...]. If you recall, this is similar to what we discussed a while ago regarding ABC, which is what prompted me to look into this further. I thought I would send this to you in case you happened to be interested in it. Have a nice day! > > > The point is to focus on the part which the professor would also be curious about/interested in, and to not dwell on the fact that they weren't aware of the answer beforehand. Upvotes: 5 <issue_comment>username_4: "A few years (semesters?) ago we were discussing the issue of \_\_\_\_\_ and the specific problem that \_\_\_\_\_\_. I recently came across something that brought the topic back to me and discovered that \_\_\_\_\_\_. I wanted to share that with you and to see if you have heard of this as well." This way you are making it something of mutual interest and still being respectful. Upvotes: 3 <issue_comment>username_5: There are two risks here: * Sounding insulting (as you have noticed) * Being boring (by bringing up some trivial thing from a year ago) You should avoid both, because there is nothing to gain from emphasizing them. > > Dear [name], lately I've been working on my current [paper/research/assignment] and here's some updates on how that is going. We can talk in more detail in our next meeting. > > > Incidentally, I ended up learning [thing], which we had discussed in the past. 
It was interesting to find out that [implication]. > > > I think this is a better answer than the accepted one because: * Doesn't remind the professor that he "didn't know" (if it's been a year since then, you may even be remembering wrong and perhaps he did know) * Doesn't sound petty by referencing something from a year ago * Isn't wasting his time with some random thing that hasn't been relevant in a year * Sticks to relevant, pertinent things that matter to the work that's here and now, not ancient history There is of course nothing wrong with discussing history. Sometimes there are [unanswered questions](https://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem) that [linger for decades or centuries](https://en.wikipedia.org/wiki/Poincar%C3%A9_conjecture) before spurring great discoveries when their time comes. But a productive researcher should maintain focus on priorities. If this thing from a year ago was that important, a year ago wouldn't have been the last time you discussed it. So one has to wonder, if nobody's cared in a year, why should anyone start now? I think answering that question is the most constructive direction to go here. --- I wrote the above assuming you have a relationship with the professor already. If you are talking about an undergraduate instructor, the same principles still hold, but a better example template could be: > > I have decided to work on [problem], and I wanted to share my findings with you. I actually became interested in this problem due to a discussion we had during our [class]. I have found that [answer], which is [implications]. > > > Upvotes: 3 <issue_comment>username_6: I am XXX. Last year, we couldn't figure out (***state the problem that the professor did not know the answer to***). While reading (***state the source of your answer so he can refer to it***) this year, I realised (***state what you found***). I just thought it would be nice to share it with you. Kind regards, XXX Upvotes: 2 <issue_comment>username_7: I would add a dimension: the amount of time spent on the problem: * Your professor probably thought about it for a few seconds/minutes * You probably spent minutes/hours **and** from your text > > This year, while doing a research paper, I also figured out the answer > to the question I asked him. > > > it even sounds as if it wasn't your main purpose to answer the question, but that you found it while working on a research paper. So you could include that in your text: > > Last year we discussed an interesting question to which we didn't find the answer "ad hoc". I just wanted to inform you that I stumbled upon a clue towards the answer while working on a research paper. In case you are curious,.... > > > Upvotes: 2
2019/10/05
230
1,013
<issue_start>username_0: Is it just to evaluate what is right and wrong in a script (i.e., an exam or problem sheet completed by a student) and to make notes and comments on it?<issue_comment>username_1: Marking is grading, but comments are not compulsory; it depends on the policy and on whether the papers will be seen by anyone else afterwards. I give an Excel-based exam where there are points for parts (calculations) but no comments are necessary. "Correcting" is also a word that is used... Upvotes: -1 <issue_comment>username_2: To mark is to evaluate student work based on the rubric established for that work which, itself, will be a combination of University policy, subject policy, and what has been presented in the course administrative documents. If you are unsure of what is expected of you with marking, it would be wise to discuss this with your head of subjects or course coordinator, as you will need to conform to your university's policy on marking and could face issues if your marking deviates from that. Upvotes: 3
2019/10/05
663
2,807
<issue_start>username_0: Can I refer to DOIs without explaining what they are? I am writing for an audience of experienced academics.<issue_comment>username_1: My suspicion is that the vast majority of academics (in STEM fields, say) will * have seen DOIs being used many times in papers and online, * be able to discern your intent if you use the term DOI in a context other than in print with a URL next to it, * have absolutely no idea what the letters stand for, and * never have done any research (e.g., read the DOI Wikipedia article) about what DOIs are, why they're used, etc. **Added**: In response to Strongbad's comment, let me summarize the above briefly. I believe that a large majority of academics will have seen DOIs being used and know that if they go to the URL they'll wind up at the paper they're looking for. I also believe that, if pressed, many academics would say that the point of having a DOI is to be able to provide people with a stable URL where the paper may be found. But I think that in most cases the latter response would be a guess because I think that very few academics have spent any time looking into DOIs in any serious way. Upvotes: 4 [selected_answer]<issue_comment>username_2: I wouldn’t assume faculty in general know *anything*. Faculty in a particular field or subfield have certain common knowledge, but faculty in general have little to no common knowledge since they’re trained in different fields, in different countries, and in different decades. Probably you’re on safe ground assuming they know the earth is a sphere, but probably not HTML, and certainly not DOIs. Upvotes: 2 <issue_comment>username_3: From personal experience, I know that you cannot assume that faculty (particularly older faculty) will know what a DOI is. While most younger established faculty will be generally digitally savvy and have encountered DOIs, ORCID, preprint archives, etc., that will not apply to all. * Some people are in narrow sub-fields where shifts to more integrated electronic publication just haven't fully taken hold yet. * Many older faculty are not particularly active in "normal" publications any more, either mostly just doing books and invited pieces or else being a senior collaborator who doesn't actually deal with the mechanisms of publication directly any more. My own personal anecdote concerns a senior colleague whom I often interact with, but who had never even heard of DOIs until I recently explained the system to them. This person also typically interacts with all of their publications on paper. They do use a computer to compose and not a typewriter, but that's about as far as it goes. You probably don't need to do an in-depth discussion of mechanisms, though; just add a phrase or sentence to get the gist across. Upvotes: 2
2019/10/05
672
2,779
<issue_start>username_0: I am working on a research paper for my master's thesis. Recently, I chose a research topic and found a related survey dataset to work with. However, just now, I have found out that other authors did the same work for the same country, but used a different survey covering earlier years. Now, I am confused about what to do. If I carry on with the research, would it be considered plagiarism, even though I will use another survey covering later years and work in a different setting?<issue_comment>username_1: Probably not. However, it is not possible to be certain without reading both studies. Upvotes: -1 <issue_comment>username_2: Plagiarism is passing off somebody else’s idea as your own. Therefore, as long as you properly cite the other paper, you are in no danger of committing plagiarism. The worst that can happen is that you are reproducing an existing study¹ instead of doing something novel. Whether this is a problem is for you and your supervisor to decide, as you know the differences in detail. On top of that, suppose you hadn’t found out about the other study: in this case you would not have committed plagiarism, since you cannot possibly use the ideas of others without knowing them. Had you published your paper in this case, you would perhaps² be guilty of sloppy literature research, but not of plagiarism. However, the evidence may (incorrectly) point to you being a plagiarist and thus you still want to avoid this situation. --- ¹ something which scientists almost universally agree is done too rarely. ² if the other paper is sufficiently easy to find Upvotes: 1 <issue_comment>username_3: Reproducing a published study is good science. It isn't plagiarism as long as you properly cite the earlier work and don't copy from it. It is especially good science if you have any doubts about the conclusions of the earlier study or about its applicability in new circumstances. But, to be published yourself, you will need to be clear about why you are doing this and what, if anything, is new and different in your study as compared to the old. That isn't an ethical issue, but just the fact that publishers want to publish the new and novel. If your methodology is the same, you need to say that and cite the original. If you have made changes you need to carefully explain them along with the reasons for the changes. If your conclusions are different you need to analyze why that is and explain it. But it can be good science. There is quite a lot of published science that isn't good, often and especially when it was done by someone with an agenda beyond seeking the truth. --- The answer of [username_2](https://academia.stackexchange.com/a/138088/75368) gives a good explanation of plagiarism and how to avoid it. Upvotes: 1
2019/10/05
548
2,181
<issue_start>username_0: Internet sources seem to agree that, in the list of references, the title of an article written in a non-Latin script should be transliterated and also translated. Similarly, the title of the journal should be transliterated if necessary. My question is: **should non-English journal names also be translated in the references?**<issue_comment>username_1: I'm just guessing here, but have the following suggestion. If the journal itself provides a translation, then use that. But otherwise, use the formal name that the journal uses, no matter the script or language. The name is likely trademarked and also likely known worldwide by its official name. So use that name. A journal regularly publishing in several languages, of course, might, theoretically at least, have official translations of its name. But otherwise, use the same form that they use. Upvotes: 2 <issue_comment>username_2: **There is no single way to do references.** There are many different formats (MLA, APA, Chicago, etc.) -- some publishers will specify a required format, others will not. Further, in many contexts, anything reasonably close to a standard format will not raise eyebrows. **Both [APA](http://blog.apastyle.org/apastyle/2012/12/citing-translated-works-in-apa-style.html) and [MLA](https://guides.library.yale.edu/c.php?g=296262&p=1974230) conventions seem to say no, however.** Both of these seem to give the article title in the (1) original, (2) transliteration, and (3) translation, but give the journal title in transliteration only. If you are citing a translation of an original article, there are separate rules for that. Upvotes: 1 <issue_comment>username_3: I believe the best ways to find out which journal name you should use in your citations are: 1. open the article PDF/fulltext and check if it has a "How to cite" or similar section. If it does, then use that! 2. if you have time: write to the journal's contact email address and ask. They are the ones most interested in being cited correctly. 3. if you're in a hurry, use the journal name that appears in the bibliographic legend, usually found in the article's page headers or footers. Upvotes: 0
2019/10/05
273
1,139
<issue_start>username_0: As we know, an academic's citations get a lot of attention today as a signal of whether their research is impactful. I am just wondering about the history of citation counts becoming this important. When were they first calculated, and whose idea was this?<issue_comment>username_1: I would like to draw your attention to similar issues in admissions. GRE or SAT content has little to do with what one does at university. However, these tests are used to cut a large number of applicants down to a smaller pool. In essence, they are a measurable and comparable criterion by which to judge a person. Citations and impact scores are, roughly, used to judge people swiftly. I believe this can be wildly inaccurate and unjust. Upvotes: 1 <issue_comment>username_2: The Science Citation Index (SCI) was the first reference source that provided a way to find the publications in which a scientific paper was cited. SCI began in 1964. SCI existed only in paper form until it went online as the Web of Science in 1997. Statistics like citation counts and the H-index really weren't practical until the turn of the 21st century. Upvotes: 3 [selected_answer]
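As an aside for readers who have not computed these metrics themselves: the h-index is simple arithmetic over per-paper citation counts, namely the largest h such that h of your papers have at least h citations each. Here is a minimal, purely illustrative Python sketch; the function name and the example numbers are invented and not taken from any of the sources above.

```python
# Illustrative only: compute an h-index from a list of per-paper citation counts.
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3, 0]))  # prints 4
```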
2019/10/06
522
2,197
<issue_start>username_0: Recently, one of my papers has received the status of **Accept with Shepherd**. We have received many suggestions from the reviewers. I'd like to understand in more detail what the real status of my paper is: has it been accepted, or is it in a sort of minor/major revision? The paper is for a workshop of a CS conference.<issue_comment>username_1: **Your paper will be shepherded.** The conference organizers will assign a contact person called a *shepherd* to your paper, who guides you through a sequence of revisions (for further information, see the [question](https://academia.stackexchange.com/questions/109827/duties-of-a-shepherd-in-a-cs-conference) mentioned by @darijgrinberg, and [this question](https://academia.stackexchange.com/questions/48147/thanking-a-shepherd-in-acknowledgments)). The "accept with..." decision signals that they really want your paper at the workshop, more strongly than in the case of a major revision. But, similar to a minor revision, there is still a possibility that your paper will be rejected if you don't work cooperatively with the shepherd. Upvotes: 6 [selected_answer]<issue_comment>username_2: In my experience as a conference program chair, shepherding is usually applied to borderline papers, where the chairs think the paper has valuable material but flaws too serious for publication in its current form. Assigning a shepherd for the paper means that your acceptance is conditional on revising to address those flaws. It's still good news, however, in that the conference wouldn't be assigning you a shepherd unless they think that you *can* overcome the current flaws and produce an acceptable paper. Moreover, it is normally the case that you are invited to communicate back and forth with a shepherd to make sure that your revisions are on target---the conference wants you to succeed in your revision. What you need to do now is to communicate with the shepherd about your revision plan and make sure that what you want to do matches what they will consider acceptable. If you can find agreement on a set of acceptable revisions and successfully execute them, then your paper should become finally accepted. Upvotes: 4
2019/10/06
3,440
14,818
<issue_start>username_0: When I read a graduate-level mathematics textbook to learn and master a particular subject that I am unfamiliar with, I always struggle with what strategy to take on the exercises. Sometimes I spend an hour on one exercise problem without making any progress. Sometimes I skip exercise problems with guilt, worried that I may not have learnt the material well. I can think of the following possible strategies for exercises: 1. Ignore all of them. Just focus on reading the main body of the textbook. 2. Set a time limit. Say, for every hour I spend reading the main body, I spend twice as much time solving exercise problems. I try to solve whatever I can within two hours and skip the rest. 3. Never move on to a new section unless I have solved all exercise problems in the previous one. If I get stuck, I spend hours working on a problem, and if there is still no success, I look it up or ask somebody until I understand the solution. 4. Spend a reasonable amount of time on each problem and, if I cannot solve it, read the solution from a solution manual. This strategy seldom works, as solution manuals are unavailable for most graduate-level books. What strategy do you recommend? I personally feel that Strategy 3 is the most time-consuming, but it is also the only one that does not bother my conscience. But because it consumes a huge amount of time, I am not sure if it is wise to adopt it.<issue_comment>username_1: Disclaimer: My thoughts about this are somewhat unconventional, so keep that in mind. I don't have a high opinion of the sausage-link structure of (advanced) mathematics courses, where you get a chunk of "material," and then exercises on that material, and then another chunk of material, and so on until the term ends. Since nearly all courses (and books) use this structure, I think the students who do well with it tend to be overrepresented among grad students due to survivorship bias, but I don't think it's well-correlated with any meaningful measure of mathematical talent. So my first observation is: Don't sweat it. My second observation is: Some people just have innately less sequential, more integrated, and higher-context learning processes, which creates a mismatch with standard instruction. The bad news is that very few mathematics professors seem to be clued into this and/or have no intention of modifying their course structure to support a variety of learning styles. However, if you're working from a book, you can reverse-engineer and restructure as you see fit. So skip around, use multiple references, work on what makes sense to *you* at any given time. Above all, don't feel constrained by the structure of the book, and use the exercises as a check on your understanding rather than as a goal in themselves. You don't have to do the exercises from section 4 right after reading section 4. You might get more out of them if you come back to them after section 7. This is going to take some experimentation and self-observation and not all books will be equally amenable to deconstruction. But we now live in an age where even fairly obscure topics will have multiple presentations that can be located with a few keystrokes. You just need to find what works best for you. Upvotes: 3 <issue_comment>username_2: This depends on the course and what you want to get out of it. Some courses you take because you are deeply committed to that topic and want to know everything and attain deep insight. Others seem less "fundamental" to you.
For me, Analysis and General Topology were in the first category and Algebra and Statistics were in the second. Over time, I was able to attain deep insight into my key subjects but only a general familiarity with the others. There is a lot of mathematics, and we long ago passed the point where any single person could "know" all of it. So, in the *key* subjects, your strategy 3 may be the best. In particular, in those courses and for those books you should be able, in theory, to solve any of the problems given. There are, however, a few books that are exceptional in that they deliberately hide research problems in the "exercises". Those books generally advertise the fact, or sometimes mark the questions that are, at time of printing, unsolved. At the other extreme, your strategy 1 is unlikely to ever be successful if you actually want to learn anything. Reading is a passive activity. The exercises are there to get you engaged with the material, and that is where learning happens. But the other two strategies are probably enough for those courses that aren't key building blocks in your math psyche. Learn enough so that if it becomes necessary in future to go deeper you have the basic foundation in place on which to build. But there is one key caveat. Mathematics can be subtle. You may think you have a solution when you don't. So, as with other learning, you need a way to get feedback on what you produce. Perhaps the professor is willing to give you that (and hopefully will, at least on the assigned problems). Another way is to form a study group among students to examine and reflect on one another's work. And don't forget that reading a solution to an exercise is worth very little in actual learning. Again, it is a passive activity that is unlikely to form the deep associations that lead to insight. So, back to strategy 3 for the important stuff. Upvotes: 0 <issue_comment>username_3: Unfortunately, there is a tendency for the pedagogy to get worse as the topic gets harder. Probably because fewer people are learning the material (less investment in methods). Also more of a sink/swim Darwinian attitude. However, the fundamentals of good pedagogy remain. If you are finding the material too difficult, you need to come up with methods to create progression (easier problems first) and assist yourself. My advice: A. Do a combination of 3 & 4. Spend a reasonable amount of time and then go get assistance (since the solution manuals don't exist). If they do exist, use the guides. B. Try to use the minimum help first (again, within reason in terms of time management). I.e., see if just the answer is enough (not a full solution). Just look at the first few lines of a solution. Ask for a hint. Etc. The reason for doing this is that you are still learning the material, via some struggle. This helps it stay in your mind. If you just look at the solution and say "Oh...I get it", then you won't remember or benefit. Even if you get the crucial insight, you can still internalize it by working the entire problem. C. Similarly to (B), after getting any assistance (limited or full solution), DO THE WHOLE PROBLEM over again. Act as if you haven't gotten the help and just pretend you are solving the problem from scratch (write it all out, too). This sounds hokey but it actually works. It helps to engrain patterns and memories into your brain. Remember, learning and pedagogy are issues of practical human psychology. We are NOT silicon. We are flesh, with imperfect memories and processors.
As Aristotle said, man learns by imitation and practice. Of course if you can nuke it out, best. But you can still create brain grooves, by "imitation and practice". D. Look for easier texts and easier homeworks. And then actually work those problems. It will give you confidence and help groove some things into your brain, that can then bubble up and help you on the harder problems. Also, if you request assistance, it's nice to have some basis to demonstrate. E. Don't be proud. Use the TA. Use the instructor. Yes, they probably want to concentrate on research. Yes, they may prefer the students that are brilliant and don't need help. But if you are respectful and give evidence of working hard, you can win them over. F. If they are "dicks" and you can't win them over, still don't sweat it. Keep getting help--be persistent. Don't react to brushoffs by anger OR by giving up. (Or by complaining commentish "questions" on this site.) Just smile and keep doing what matters to you--getting the help and learning the material. Upvotes: 2 <issue_comment>username_4: Treat exercises as optional adventures, not as boss monsters you need to beat before you can get to the next level. Use your own interests and common sense in deciding how hard you want to attack any specific exercise. You can still return to previously unsolved exercises from the next chapter. Every once in a while, you'll encounter an exercise that is used later in a proof. That's the only case where you strictly need to solve it, and thus spending extra time, getting help etc. are good ideas. Note that such exercises are usually among the easier ones, unless the author is doing their writing wrong. My strategy is generally to keep track of what I have solved by putting checkmarks near solved exercises and sometimes scribbling the main ideas of the solutions in. I don't do this with proper books, but I don't hesitate to do it with printed-out chapters, so I have several books or sets of lecture notes printed out and stapled together chapter by chapter, with lots of marginalia documenting my way through the text. (A sufficiently heavy-duty stapler -- say, able to staple 50 sheets -- plus some good tape for the "spine" of your stapled brochures, and some extra paper to use as cover are sufficient to make such printouts no less usable than books, but with the additional affordance that I can scribble around in them without feeling like a book-mutilating barbarian.) Upvotes: 2 <issue_comment>username_5: The nature of grad-level "textbooks/monographs" varies wildly, so there's no single answer that can apply to all. In the best case, which is not at all universally attained, the "exercises" are iconic examples illustrating the theorems of the chapter, *with* examples already occurring in the chapter. Great! But/and then there's not much work to be done, after reflecting on things enough to see that, yes, truly, these examples are "more of the same". In one sort of "worst case", the chapter gives definitions, and all the theorems of interest are given as exercises. This is horrible and ridiculous. A more typical middle-ground scenario is where the exercises do mention iconic examples, but with quite inadequate preparation in the preceding chapter, etc. And in almost all cases, to take the book literally, everything is lined-up. The seldom-noted point is that many things become vastly clearer with hindsight. So sitting on an exercise until solved is possibly the most foolish strategy... 
if, at least, it's a *good* exercise, oop, and if it's not (tho' how to judge, with insufficient info?) why do it at all? Conceiving of mathematics as "a school subject", with orderly development, exams, homework, clearly specified prerequisites, etc., makes it much harder to understand, in fact. EDIT: I forgot to say... that, really, why need there be "exercises"? Are there exercises at the end of chapters in novels? At the end of movements of symphonies? In coffee shops or bars? Math-as-filter has enormously corrupted the sense of what math is, and how to understand it. Upvotes: 2 <issue_comment>username_6: TL;DR: Relax! Mathematics takes time! I thought I would provide my perspective as a current pure mathematics student who is fairly successful at his studies and work. I read and collect a lot of mathematics textbooks in my spare time and I too have similar issues when learning new mathematics. In my earlier years as a mathematics student, I would always rush through reading textbooks and get flustered when I couldn't solve a problem quickly or understand the material. The moment I realised that it takes time to solve problems (often weeks or months) and that it takes much longer to read, 'digest' and understand mathematics than other subjects was the moment when my mathematics ability took a big leap forward. When reading a mathematics textbook I will usually 'read' it 3 times over. Here is what I usually aim to do on each read: * On the first attempt, I read through the material very casually, not concerning myself too much with whether I understand every little detail or not. Here, I just want to try and pick up the definitions and the main concepts involved with the topic. I will usually try and remember how the concepts are developed and the results that are established along the way if possible (e.g. lemmas, theorems etc.). I don't usually concern myself with the details of proofs too much. I might attempt some exercises if I think I can do them; however, sometimes I can't even do any at this point. * On the second read-through, I usually try and fill in the gaps in my knowledge from the first attempt. Here I will try and understand the details of proofs more, which is much easier now that I have a bit more of a broad overview of the topic and have a bit more of a 'feel' for how things work. I usually try and get through as many exercises as I can on the second read, but if there are exercises that I can't figure out within a reasonable time frame, I will just mark them and come back at a later date when I have better skills in the subject. * On the third read I don't really do much reading. Here I will just skim over all the content and mainly concentrate on understanding the proofs I still couldn't understand on the second read-through and fill in any more details I couldn't quite work out before. Usually by this point I am fairly proficient in the subject and can figure out most of the exercises, and I try to complete some of the ones I marked earlier that I couldn't do. Of course there are often exercises I can't do, and this is just the nature of mathematics: sometimes you will just need to give it more time (possibly months or years) to develop the problem-solving skills required to answer these questions. Reading and learning mathematics takes *a lot*, I repeat *a lot*, of time to master. It might take me 6-12 months before I am on my final read-through of a textbook, and again, that's just the nature of mathematics.
It takes time for concepts to sink in, for your skills to develop, and it takes even longer and longer as you progress to higher levels of mathematics. Of course you should seek out help, either online (e.g. math stackexchange) or in person if you think it will benefit your learning, however, the take home message from my answer that I wanted to provide is that learning mathematics takes time. Don't overly concern yourself if you can't do problems or don't understand everything on your first attempt at reading through a textbook, and as has been suggested in other answers, try and stay away from seeking the solutions to problems you can't do. Keep reviewing and coming back as your skills develop over time. Patience and persistence are keys to succeeding at mathematics. Upvotes: 1
2019/10/06
432
1,866
<issue_start>username_0: This is more of a thought experiment than a problem. I just want to know how ethical it is. Suppose a student discovered a very cool theorem/result that would yield a very good academic paper in a very good journal. But the student knows that admission to the graduate program in his country is based only on a written test (which they are expected to ace), so publishing now will not be beneficial. So, is it ethical for him to save the paper for a rainy day in the future by publishing it under a pseudonym (so that the world will not miss out on the knowledge)? A generalized version of this problem would be: if you are overproductive in a certain year, is it ethical to anonymize the output above the bare minimum and use it in the future when you are going through a lean phase?<issue_comment>username_1: I see no ethical issues here; you can do what you like, when you like, with your ideas. The only issues are practical. No "very good journal" would publish an anonymous or pseudonymous paper such as this. They require a real corresponding author who literally puts their name to the work and who can answer comments, criticisms, etc. Also, I don't see how publishing it without your name will bring you any future benefits. Upvotes: 1 <issue_comment>username_2: Pseudonymous publication, while unusual, is not particularly problematic from an ethical perspective. A famous example is [Nicolas Bourbaki](https://en.wikipedia.org/wiki/Nicolas_Bourbaki), a highly influential mathematician who never existed. Anonymous publication is more problematic, because then there is nobody taking responsibility for a work. The practical value of your proposal, however, seems dubious at best. You cannot time-shift a publication by claiming it later: all that you would be doing is adding the publication to your past. Upvotes: 3
2019/10/07
421
1,775
<issue_start>username_0: I understand the standard margin spacing on personal statements is 1 inch, but I've read that 0.75 inches may be okay. Most of the statements specify 1-2 page limits and nothing else. I'm wondering if 0.75 inch margins are okay, given that I have 3 significant research experiences to describe in my statements. Any opinions on costs VS. benefits in my case: **to lose content** (it would have to be from my research experiences, as I already have the bare minimum writing that addresses other aspects of the statement, such as why I want to attend the school or long-term career objectives) or to **possibly annoy one of the readers that I have 0.75 margins as opposed to 1 inch margins**. Thanks!
2019/10/07
2,023
8,646
<issue_start>username_0: First, she never helped me with either my thesis or my slide presentation. After my presentation (which she did not attend), she asked me to explain "how to present" it to her so she could use it at an international conference. (I don't want to do that, but I have no choice.) Then, she asked me to send the whole presentation. I sent it to her as a PDF file. Later, she asked me for the "PowerPoint presentation" with the script. I really don't want to give it to her. What should I do? P.S. She never gives anybody credit. (She did this before with my senior.) **Update:** After reading many comments, I realize that I might be over-reacting. In academia, it is normal for an advisor to use a student's entire presentation at a conference. (If I am lucky, she will acknowledge me.) I feel so bad about it, but I need to accept the reality.... Thank you for all the comments.<issue_comment>username_1: As an advisor, I regularly use my students’ slides when I present my current projects. This is usually done within the context of high-level presentations: I’m working on important project X; Alice and I worked on X.a which resulted in such and such, and with Bob on X.b which resulted in so and so. Claire and I are working with Alice to extend to X.c. If your advisor is supportive and showcases your work, she’s increasing its visibility and helping your career. To conclude, presenting students’ work is not necessarily a bad thing and can help them a lot. What is more concerning is that you seem to have serious trust issues with your advisor. She may be passing off her students’ work as her own, but I honestly think that this is either a misunderstanding or something else. Advisors normally *want* to show that their students are doing well, not that they’re being totally shepherded by the advisor. This reflects *badly* on the advisor, which is why I think it’s unusual. If things have gotten to the point where you’re not harboring any goodwill toward her, I suggest you rethink your options. If there’s a chance of a conversation to rebuild trust, try and have one. Upvotes: 6 <issue_comment>username_2: It's pretty common in my experience for advisors to present their students' work, **with acknowledgement of the students' contributions**. They'll often combine slides from several students' presentations into one talk for a conference, but they can also present just one student's work. In that case they usually say something like "The work I am going to talk about today was all/mostly done by my student, <NAME>", at the start of the presentation. **Prepare a version of the slides specifically for your advisor to present,** with her as the presenter, and with whatever acknowledgement of your authorship you feel is appropriate. This could be as simple as the first slide having you as the first author and your advisor as the last author, with your advisor's name somehow highlighted to indicate that she is the speaker. Or it could be having your name and picture featured prominently on an acknowledgements slide at the end of the presentation, along with any other group members who contributed. Or it could be your name in the corner of all the important figure slides, to show you did that work in particular, if the slides are going into a longer presentation. Then you can send your advisor a nice pre-made presentation, and she won't have to do any extra work to cite you, because it will have already been done. It also communicates what form of acknowledgement you feel is appropriate.
On the other hand, if you think you and your advisor have very different ideas about how much or what form of credit is appropriate in the presentation for your contribution to the work, **you need to have a talk with your advisor about it**. Upvotes: 4 <issue_comment>username_3: One benefit of her giving your presentation is that she will be actively promoting your work. For example, my adviser presented my theoretical work 3 times at 3 different conferences, and found an experimental collaborator to show that my theories were correct. The more exposure your research gets, the higher the possibility of citations, which then leads to better career opportunities. Upvotes: 3 <issue_comment>username_4: Look, if she had done the same to your senior, there's something afoot. Personally, I am uncomfortable with the lengths she is forcing you to go to for her - as you've said, she wasn't present when you defended your thesis, and now, when international acclaim is in the game, she wants your work to present. She is your advisor, which means she is not your mentor. There's an important difference between the two. A mentor is usually an accredited professor. An advisor is usually a senior student, acting in the capacity of helping a professor manage their workload, but rarely a professor themselves. You haven't made clear whether she is a professor or not. If she is a professor, then no, you have no recourse available, at least not at this point. But if she is a fellow student, or just a person who specifically holds the title of 'advisor', and if there's a precedent - a bad one - then you can complain to the board of professors, but be aware that you will have to have really good reasons and evidence at hand if things go south. In the academic sphere that kind of thing can quickly blow up and influence your career later in life, regardless of whether you were right and she was wrong. So if you want to go that route, save the emails and all the conversations; if you were talking over the phone, write the dates down, et cetera. If this is an advanced level of research, what she is doing without asking for your input is academic suicide at its finest - for her. I find it strange that she wants to present a topic she is (presumably) not well-versed in, and that you additionally have to practically dumb it down for her because she cannot be bothered to take some of her precious time and do her own note-taking and research. Well, fine, she can present your thesis, but what about the questions that come after the presentation? If she doesn't understand the material and its nuances, she will trip all over herself, because she won't know the particulars well enough to explain this or that facet of the experiments, the wording, or something else. It would've been better if you were the one to go and present the entire thing, because you know it inside and out. Upvotes: -1 <issue_comment>username_5: From your comments, you seem to be worried not only about not getting credit, which others have already addressed, but also about the originality of the presentation itself. [Here](https://academia.stackexchange.com/questions/138151/advisor-asked-for-my-entire-slide-presentation-so-she-could-give-the-presentatio#comment367430_138182) you write: > > To be honest, I will feel much better If she create the presentation by herself.... What I feel uncomfortable is "she will present my entire presentation with my own script."
> > > Here's something important to keep in mind: this being academia, *the value is in the research itself*, not the presentation. That's not to say that the presentation isn't important; quite the opposite: you need a good presentation to present the research in its best light, ensure that both it and its relevance are properly understood, and so forth. But remember, you do not get academic credit for the good presentation, you get it for the research that was presented. In other words, a good presentation adds no extra *academic* value to the research: poor research with a good presentation remains poor research. However, a bad presentation *detracts* from the value of good research and may delay or prevent its worth from being fully recognised. Thus, if you have a good presentation that's a good thing, but its only real value to you as an academic is to help keep your research from being misunderstood or going unrecognised. Therefore you should greatly prefer that your adviser (or anybody else), when they present your research, uses your presentation if that's the best one available, so that your research is seen in the best possible light. Further, you should give them any help necessary to improve the presentation further or focus it for their particular audience and situation, including giving it to them in the best format for modification and helping them make changes. (In case it's not clear: this is all completely separate from the credit issue; if your advisor isn't giving you credit you should deal with that as suggested in the other answers.) Upvotes: 5 [selected_answer]
2019/10/07
2,096
9,013
<issue_start>username_0: This is going to be long as I believe that the context is important. I have always been fairly good at maths and physics (won state wide prizes in the math contests and even won a gold medal in the national physics olympiad.) I live in USA/UK/AUS (trying to hide my identity). However, after going through grade 12 I got a spot in medicine and happily went along with it because most of my friends were high achievers and I thought why not? I got a fair bit of recognition for getting into medicine (I admit I enjoyed the ego stroking) but luckily medical school is a graduate course (M.D), so I still had undergrad to go. I initially enrolled as a double major - math and physics but within the first week switched to a pre-med track as I thought that it would ease the transition into medicine. Here's where things take a turn for the worse. By the end of second year I absolutely hate medicine and I realise that I truly do love maths and physics. I had luckily taken a couple of the first year mathematics modules - multivariate calculus and intro to linear algebra and two 2nd year course as well - intro to DEs and intro to probability. So I talk to my course office and beg them to let me transfer into a mathematics track, but I am only able to do courses where I have the prerequisites satisfied and hence now (end of 3rd year) - have done, in addition to the courses mentioned above, courses on: nonlinear dynamics, intro to stochastic processes, a modelling course and a course on systems of coupled dynamical systems. However, my true passion had always been physics and not only have I not done any meaningful university physics, but also haven't learnt a lot of mathematics - analysis, algebra and geometry. I am thinking of going on to do a masters and try as hard as possible to steer myself back onto the path of physics and mathematics, but I haven't done a lot of the necessary coursework. Furthermore, my family is low income (another factor pushing me into medicine (no parental pressure) but the thought of a good income was tempting) and so I don't think I can afford to take a couple years to essentially redo a bachelor's degree. My question is: am I screwed? I am more than willing, and passionate, to learn these subjects by myself - through books. In fact that's how I've learnt most of the things I have to date. But I realise that universities want to see another university giving a student their stamp of approval saying - this student has successfully learnt (insert subject.) I truly believe most of the time these stamps are meaningless, but in the case of admissions it truly is everything. Please help, any advice is much appreciated.<issue_comment>username_1: 1. Maybe you could head for something a little in between like chemical physics or biophysics or chem engineering where some of the courses to date are useful? 2. My advice is to push for getting your 4 year degree (assuming US) on time. Just see bad things when people delay too much from the switching. If you can switch majors while still getting done on time, fine. Otherwise, do some masters or the like. Perhaps after working. 3. You have the wrong idea about the math needed for physics. Analysis (as opposed to "calculus"), algebra (of the abstract sort), and advanced geometry are all really pretty marginal for someone working on undergrad 'zoics. Read up on Feynman or just talk to physicists (not math types). You need to be very strong at calculus, high school algebra, and ODEs/PDES. 
That's probably your biggest gap right now--diffyQs. NOT the more theoretical math courses that people talk about on the Internetz. 4. Getting through a "math methods" book would be good. If you learn this book from start to finish, that will put you in great stead: [https://www.amazon.com/gp/product/B005G14K86/ref=dbs\_a\_def\_rwt\_hsch\_vapi\_tkin\_p1\_i0](https://rads.stackoverflow.com/amzn/click/com/B005G14K86) (get a used earlier edition, don't spend $100; I have the 5th.) Many physicists like <NAME>as (a little easier but similar). The traditional math methods book for physics is Arfken & Weber, but while it has a few harder topics, it's really a miserable grab bag and doesn't teach well. I like Kreyszig better for self-study. 5. Finally, I would caution you on the income aspect of physics (or math). Unless you are a real superstar (which is rare), you will face huge competition for academic jobs. These forums are littered with people struggling to find jobs or unhappy with their advisor situation (and the relative subservience required in a Ph.D. relationship). Getting a certified profession like physician is a great move financially and in terms of societal prestige. Upvotes: 1 <issue_comment>username_2: Starting Point ============== First, I would advise someone in your situation to work the puzzle backward for different cases. Switch Degrees -------------- Take the course requirements for the physics and math undergraduate degrees. Take your current transcript. Start backward from the senior year of the physics or math degree. Develop the sequence of courses that you need. Track backward until you hit the overlap between what you need and what you have. Fill in the blanks with the courses that are pre-requisites to a physics or math degree that you do not have. At that point, take a course schedule from the university where you are. Design your schedule going forward to switch to either physics or math. Determine the minimum additional time you would need to complete either degree. As far as possible, assume that you will take courses during a summer semester to make up what you lack. At the end, you will have a complete schedule for your current degree, supposedly to be completed in one more year, as well as complete schedules for switching to either physics or math, to be completed in Np or Nm additional years. Go on to an MS Degree --------------------- Research the course offerings for an MS in physics or math. Pay particular attention to the senior-level undergraduate pre-requisite requirements for the first year courses. List their equivalents from your current university. For example, a graduate course in solid state physics will likely require math through partial differential equations as an explicit or implicit pre-requisite. Some graduate programs do the favor of listing a set of required transition courses for students who enter the graduate program from an undergraduate program that is not the same. When in doubt, find the equivalent graduate course at your university, find out who teaches that course, and go have a talk with that instructor to get the information you need. At the end you will have a list of senior-level undergraduate courses that you must have for an MS degree in either physics or math. Presume that you continue with your current degree to complete it in one year. Now repeat the exercise to track how much time you will need just to take make-up courses in senior-level prerequisites.
Comparative Analysis -------------------- * Based on what you have learned, which approach do you prefer? Suppose that all paths above require you to take one additional year beyond your current undergraduate degree. Which track is more appealing in its course list? Suppose all paths require the same or nearly the same types of courses. Which track is more appealing because it takes (or appears to take) less time? * Based on what you have learned, are any of the paths eliminated? Perhaps you cannot afford to take Np or Nm more years/semesters of courses. Perhaps you never want to take partial differential equations at all. You can do this comparative analysis for any other degree program (chemistry, biomedical engineering, biophysical sciences, ...). It will give you a grounding in what you face as a minimum and what you are unwilling to do as a rule. Other Insights ============== I would not advise anyone in your situation who decides to pursue a graduate degree in a different program than their undergraduate degree to believe with any seriousness that they can simply make up courses that they lack just by reading on their own. You will be in competition with graduate students who know the undergraduate material backwards and forwards through rigorous training. You must have a comparable level of confidence going into a different graduate degree or you will more likely fail in your first year. The reason that graduate programs require applicants to validate certain pre-requisite knowledge through official coursework rather than through a "read on your own" approach is precisely to reduce the likelihood that you will fail your first year courses. The best advice you can get to help you make plans to continue to graduate school will come from faculty who are in the program where you want to move, who teach the first year graduate courses, and who advise students in research. Find them at your university. Go talk to them. Upvotes: 0
2019/10/07
672
2,845
<issue_start>username_0: I finished my PhD a while ago and have since worked at the university as a lecturer. Now the contract is coming to an end and I am weighing two options: (1) teach at the university (I am much more interested in teaching than in research) or (2) work in industry. I have done a lot of teaching during and after my PhD and consider myself to be quite good at it. However, finding such a *teaching* position where I live seems very difficult at the moment. Abroad, those positions seem to be more plentiful. If it matters: I live in Germany. The field is mathematics. As I am not exactly keen on leaving the country, I would like to try working in industry. However, I worry that I might miss teaching, and wonder if, after having left academia for a year or two, it would be difficult to return. Some related questions have been asked, but I fear that my lack of interest in research might give things a different spin... **Would leaving academia now be held against me if I tried to return a few years later to teach at a (possibly foreign) university?**<issue_comment>username_1: While it depends on where the possibly foreign university is, I can answer for the UK (...for now, pre Brexit. Post-Brexit will just be some sort of crazy nightmare, so who can say). Simply put, yes: if you were applying for a teaching-only position, having a stretch of time in industry and outside of teaching is likely to be seen as negative on balance. With numerous qualified people applying for every position, committees can pick and choose, and they usually choose people who have fresh teaching experience rather than someone who has been out of the game for a while. It isn't impossible to come back to teaching from industry, especially if you can show how your working in industry would be an asset to your teaching, but it's hard. Indeed, I only know of examples of a move back to teaching from very few fields, and it would be unlikely in mine (Social Sciencey and Artsy). Upvotes: 2 <issue_comment>username_2: In Finland and at least some other Nordic countries there exist universities of applied sciences (ammattikorkeakoulu/høgskole/yrkeshögskola/erhvervsakademi), and possibly such institutions that have changed into or combined with universities. Germany seems to have a similar institution. These offer less academic (more applied) bachelor's degrees and sometimes higher degrees. They need teachers, who might or might not have some possibility or requirement to do research or R&D. At least in Finland they often require the teachers to have relevant work experience, unless they are teaching general subjects (like mathematics); but I doubt relevant work experience would be a bad thing there, either. The teaching would often happen in the local language and might require a local teaching qualification. Upvotes: 1
2019/10/07
6,056
25,648
<issue_start>username_0: About 17 years ago I attended a top 10 UK university to study for a degree in Computer Science. Mathematics had never been a particularly strong point for me. However, I (just) had the required A-Level qualification to be accepted for the course without any issue. After two terms I dropped out and switched to another top 10 UK university to study for an Information Technology degree. The reason I dropped out of the former was simply that I found the mathematical lectures unbelievably difficult. The university did not (in my opinion) provide good support to people who weren't strong in this area. However, I also never understood why this level of mathematics was being taught in the first place. I recently looked at some lecture notes provided on the former university's website for the current year. And sure enough, the level of complexity seems the same. When I switched to doing an Information Technology degree, part of my logic was that the outcome would be a more practical/useful set of skills to actually develop software, along with the logical thinking required (which in my view requires little mathematical knowledge). For me this has worked well, as I've had a good career as a software developer since graduating. I've never found any of my work requires much maths, beyond a GCSE/A-level level of complexity. Interestingly, looking on LinkedIn, a huge number of people at the former university went on to become software engineers or to hold similar roles. The salaries at the organisations where these people work seem commensurate with the role I am currently in. Given this, I'm wondering what the end goal and purpose of teaching that complex maths on CS degrees is. I understand some people will go into roles working with hardware, or even producing software where there are complex mathematical elements. But this seems to be in the minority - by a very big margin - in terms of what people actually end up doing. I have also spoken to people about the careers they've gone into, as opposed to just looking on LinkedIn etc. It seems to me that CS courses are teaching skills which - whilst relevant - are not as relevant as they might once have been. If this is the case, then why has nobody addressed it? It seems absurd. My experience of this is based on two top 10 UK universities, but having looked at some others (in the UK and USA) this seems to be the general case. If people are going into roles which require that level of mathematical knowledge, what are those roles? Because I can't see a lot of evidence of this actually happening after people have graduated.<issue_comment>username_1: Oxford University's overview of their CS degree says it all: > > Computer Science is about understanding computer systems and networks at a deep level. Computers and the programs they run are among the most complex products ever created; designing and using them effectively presents immense challenges. Facing these challenges is the aim of Computer Science as a practical discipline, and this leads to some fundamental questions: > > > 1. How can we capture in a precise way what we want a computer system to do? > 2. Can we mathematically prove that a computer system does what we want it to? > 3. How can computers help us to model and investigate complex systems like the Earth's climate, financial systems or our own bodies? > 4. What are the limits to computing? Will quantum computers extend those limits? > > > In other words, the language of computer science is math, not C++.
If you were looking for vocational training in computers then CS is probably an inappropriate choice. Upvotes: 7 <issue_comment>username_2: Well, you were in a computer **science** department and not in a computer **engineering** department. It would be a reasonable expectation that you would understand the fundamental mechanics of "computer stuff" and possibly continue your studies in research. I am not a computer scientist so examples may be limited here. * cryptography: Beyond understanding RSA schemes there are many interesting research areas and applications. For example, [elliptic curve cryptography](https://en.wikipedia.org/wiki/Elliptic-curve_cryptography) uses a serious level of mathematics, and [homomorphic encryption](https://en.wikipedia.org/wiki/Homomorphic_encryption) uses encryption schemes that imitate the mathematical concept of a [morphism](https://en.wikipedia.org/wiki/Morphism) to process encrypted data without decrypting it (for example, ordering numbers); a toy sketch of this idea, using textbook RSA, appears a little further down. * communications theory: [Coding theory](https://en.wikipedia.org/wiki/Coding_theory) is heavily mathematical. It has some subfields that require more than a basic understanding of linear algebra. I have heard that research in [error correcting codes](https://en.wikipedia.org/wiki/Forward_error_correction) is quite non-trivial. Their main property is to detect and correct possible errors in communication. This is likely extra helpful for applications where communications are more likely to have errors. Consider space, polar or deep ocean exploration. On the theoretical side of coding theory, famously, someone used [algebraic geometry](https://en.wikipedia.org/wiki/Algebraic_geometry) to show that a theoretical upper bound for an invariant (I don't remember its name) was the best we could hope for. (This would mean there are codes that give this exact value for the invariant, as the upper bound predicts.) * Image recognition and machine learning also use serious levels of linear algebra. Serious in the sense that the intuition of 3-dimensional vector spaces and the ability to multiply some matrices would not be sufficient, as you would be comparing vector spaces of *very large* dimension. I am sure a more literate person would be more helpful on this point. * [Haskell](https://www.haskell.org/) is a programming language based on a branch of mathematics called [category theory](https://en.wikipedia.org/wiki/Category_theory). I do not know the benefits of using Haskell, but some people seem to love it. I would say, however, that category theory is very nontrivial; an average student completing an undergraduate degree in mathematics would have only a very basic understanding of it. It is highly conceptual, and its origins and most of its examples are usually graduate school material. Hence it would be really helpful to have a general mathematics background in order to relate to what is going on. Upvotes: 6 <issue_comment>username_3: There are two aspects of this and one of them is usually forgotten. The usual reason is that some parts of CS are dependent on knowing mathematics and how to use it. The [answer of username_2](https://academia.stackexchange.com/a/138169/75368) mentions some of them. But not all of CS is like that, and those working, say, in Human Factors or UI development probably use the math they learned much less than those studying algorithms or encryption. But the other aspect is also important.
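To make the idea of computing on encrypted data from the cryptography bullet above concrete: textbook RSA (without padding) is multiplicatively homomorphic, meaning that multiplying two ciphertexts yields a valid ciphertext of the product of the two plaintexts. A minimal sketch, assuming only that property; the tiny parameters and helper names below are purely illustrative and completely insecure:

```python
# Toy textbook RSA, multiplicative homomorphism demo (illustration only).
# Tiny, insecure parameters: p = 61, q = 53, so n = 3233, phi(n) = 3120.
n, e, d = 3233, 17, 2753   # e is the public exponent, d = e^-1 mod phi(n)

def encrypt(m):
    return pow(m, e, n)    # E(m) = m^e mod n

def decrypt(c):
    return pow(c, d, n)    # D(c) = c^d mod n

a, b = 7, 11
combined = (encrypt(a) * encrypt(b)) % n    # multiply the *ciphertexts* only
assert decrypt(combined) == (a * b) % n     # ...and recover the product of the *plaintexts*
print(decrypt(combined))                    # 77
```

Fully homomorphic schemes support much richer computation than this single multiplicative property, but the basic idea is the same: the encryption map preserves some algebraic structure, which is exactly the "morphism" intuition.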
The study of CS is enhanced from knowing the way in which mathematicians think and work - the mathematical way of thinking - not just from having facts at your fingertips. Mathematicians tend to be analytical and precise, depending on clear statements and logical demonstration. This way of looking at problems and stating solutions is of use to a computer scientist. But there are a lot of other things that are also important in CS, so a broad education is valued, not just a math background. After all, many of us try to solve problems for people, not just for others in our own field. So, while mathematics is often useful in helping to develop the *how* of some solution, it is less useful in knowing *why* some program should or should not be developed. --- *Good mathematicians* are also very creative, though that quality is widely shared with people of other fields. But becoming good in mathematics takes some work. Both depth and breadth are needed. Upvotes: 4 <issue_comment>username_4: I made this mistake when choosing my major in college. Computer science is not really about computers, in the same way that math classes aren't really about using calculators or pencils and paper. Modern computers are just a tool used to make computing (the true focus of computer science) easier and faster. This gets confusing because the things we compute would be incredibly time-consuming, or at least incredibly tedious, to do manually, so we almost always fall back on programming computers to do it for us. This means that you probably will do some programming (maybe a lot) for a CS degree, but this programming won't necessarily prepare you to create and deliver high-quality software. My degree focused far more on studying models of computation and algorithms than on how to produce software. This is still helpful in software development, as knowing efficient algorithms for various problems is good when you're constrained by time or memory capacity. However, it does mean that a CS degree will not necessarily include training for software development, as that is not the primary focus. Upvotes: 5 <issue_comment>username_5: Here in Germany, the field "Computer Science" is called "Informatik", which, according to the [etymology of the term "computer science"](https://en.wikipedia.org/wiki/Computer_science#Etymology), is either a contraction of the words "information" and "automatic", or of "information" and "mathematics"... --- As others have already pointed out, there are many *direct* connections between computer science and mathematics, on different levels: * Linear algebra is important for many forms of modern machine learning: Neural networks are essentially just large matrices - or conversely, a [machine learning system is just a large pile of linear algebra](https://xkcd.com/1838/) ;-). Another (maybe obvious) field is that of 3D computer graphics: All the special effects in movies are just a bunch of triangles and the answer to the question of what happens when light hits a surface * Calculus is essential for complexity theory (which analyzes the running time of algorithms), numerical analysis (which is required for estimating the error of approximations), and many other topics * General (or "abstract") algebra is about *structures* and *rules* (or operations) within these structures. There is a strong (and in my opinion, severely underrated) connection of this to *Object-oriented programming*. 
* Logic is an important basis for what you might call "low-level" programming, even though the connection between knowing that you can safely turn an `if (!(a || b))` into an `if (!a && !b)` and formal propositional logic may not be obvious (this is just De Morgan's law; a small sketch checking it appears a little further down). Of course, far beyond that, there are even logic-based programming languages like [Prolog](https://en.wikipedia.org/wiki/Prolog). * ... There are many more *direct* connections, meaning that you come in touch with a certain branch of mathematics when you *apply* computer science in practice. But there are also *indirect* connections: Mathematics is a language for good descriptions. Mathematics teaches a form of clarity, rigorousness, and preciseness that is *necessary* in order to manage the complex IT systems that we are dealing with nowadays. What may be perceived as "nitpicking" elsewhere is crucial in order to make sure that these systems operate in the way that we expect them to operate. If you have ever written something like a software specification and missed a corner case, then you know: People will find that corner case. And they will hate you for missing it... --- However, from a practical point of view, I totally agree: The things that most people with a degree in computer science nowadays have to do in their jobs are totally unrelated to mathematics (and also totally unrelated to programming, for that matter). And it is something of a pity that many students drop out of their university courses in frustration (courses which they *might* have entered with the wrong expectations about the subject) due to their bad math grades. These people could otherwise have been great at what they *actually* had to do in their jobs. --- My view here may be a bit narrow, because I only know the situation in Germany - even though I have observed the developments in this area for >20 years now. But you referred to the UK, so the following may still be relevant. I read this quite a while ago, and it somehow stuck in my mind. It's a quote from an [essay "On the fact that the Atlantic Ocean has two sides"](http://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD611.html), by <NAME> (yes, [**this** Dijkstra](https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm)): > > The first series of machines —that of the singletons— was mainly developed in the USA shortly after the World War II, while a ruined continental Europe had neither the technology, nor the money, to start building computers: the only thing we could do was thinking about them. Therefore it is not surprising that many US Departments of Computer Science are offsprings of Departments of Electrical Engineering, whereas those in Europe started (later) from Departments of Mathematics (of which they are often still a part). This different heritage still colours the departments, and could provide an acceptable explanation that in the USA Computing Science is viewed more operationally than in Europe. > > > That could explain it, to *some* extent, at least... Upvotes: 5 <issue_comment>username_6: Because it's Computer SCIENCE, and pretty much all science depends on math. Sure, there are things you can do with computers that don't involve much math (if any), like (AFAIK) implementing something like StackExchange. And you're quite correct that a degree in Information Technology or something similar would probably qualify you for a lot of jobs, without the need to learn anything beyond basic arithmetic. OTOH, there are a lot of jobs that do involve applying (what might from your POV be) fairly complicated math.
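As a tiny concrete check of the De Morgan rewrite from the logic bullet above (turning `!(a || b)` into `!a && !b`), here is a minimal sketch that exhaustively verifies the equivalence over all Boolean inputs; nothing here is specific to any particular language or library:

```python
from itertools import product

# De Morgan's law:  not (a or b)  ==  (not a) and (not b)
for a, b in product([False, True], repeat=2):
    assert (not (a or b)) == ((not a) and (not b))

print("not (a or b) is equivalent to (not a) and (not b) for all Boolean inputs")
```

The same exhaustive check works for any propositional rewrite over a handful of variables, which is one reason such rewrites can be applied mechanically and safely.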
For instance, what I'm doing these days involves applying numerical solutions to a particular class of partial differential equations (<https://en.wikipedia.org/wiki/Eikonal_equation>). Probably 90% of my career has involved similar levels of math. So it's all in what you want to do. Upvotes: 2 <issue_comment>username_7: Computer science involves efficient problem-solving processes, which can be developed through applied mathematics courses. At least, that's what employers and recruiters look for. Most of the time, algorithmic frameworks are based on mathematical logic. If you don't plan to use the CS degree in a design, engineering, or other applied role, you don't need to employ as much math. Upvotes: 0 <issue_comment>username_8: **Computer science is the *study* of computers and the underlying theory. Software development is the *use* of computers to... develop software.** This underlying theory is largely math... so by studying it, you are essentially studying (a small subsection of) math. Thus, there is a high proportion of math involved. **To your underlying question, however, academia is an often confusing and misunderstood place.** Only recently have people started realizing that a college degree is in many cases unnecessary, and in fact may be detrimental to your career path - in many fields besides just computer science. (i.e. 4+ years of tangentially related studying vs. 4+ years of job experience) For software development in particular, computer science is really only a tangentially related field. If CS 201 were indeed unnecessary for CS 305, it would not be a prerequisite. If it did not contain useful knowledge, it would likely be quickly cut out of the curriculum, or relegated to an elective. It is certainly absurd to think that universities would teach useless things for an extended period of time. **Math is very useful in computer science - regularly less so in software development.** Upvotes: 3 <issue_comment>username_9: Because *computing* science (see below), and even computer programming, *is* applied mathematics. <NAME>, a mathematics educator who was once a professional programmer himself, [has said](http://www.math.kent.edu/~edd/PROCESSFUNC.pdf): > > A person's mathematical knowledge is her or his tendency to respond > to certain kinds of perceived problem situations by constructing, > reconstructing and organizing mental processes and objects to use in > dealing with the situations. > > > At a slightly less general level, consider what mathematics is. You choose or create a language in which you can express certain ideas and then do symbolic manipulation according to a set of rules you've also chosen or created, in order to derive further valid statements in that language according to those rules. If you're not careful to do this correctly, you may come out with invalid statements. The results you come up with may have some sort of application in the "real world" (e.g., I can use the language and rules of "integers" to help keep track of what people owe me and what I owe them), or they may simply be work that helps you better understand how you can use the language and rules and how they can be helpful to you in further use of them.
In many parts of mathematics we use particular symbols called "numbers" and have a large library of oft-shared rules and languages related to this, but there are other areas of mathematics that don't use numbers at all (e.g., category theory), or, though they can be applied to numbers, are not really about numbers *per se* (group theory, algebraic structures, many more). Even before you get into the study or use of particular algorithms and the like, writing a computer program is basically what I described above. Many of the "simplest" concepts in computer programming that we use every day, such as the idea of a function, are purely mathematical concepts. Now as you've seen, it's perfectly possible to attack real-world problems with these mathematical tools in a non-rigorous way and get useful results. Typically the results will not be truly correct (i.e., your programs will have bugs), but they will be "correct enough" to do the job. (For a well written program in industry, you may never even encounter the situations that would demonstrate that it's incorrect.) That's what the discipline of engineering is: getting results that work well enough in the real world at acceptable cost. But even when you're doing engineering, much of what you do works well only because someone has gone and done enough mathematical heavy lifting to give you concepts and tools that you can use to do this. You may not have a really good understanding of what a function or a relation is, but your programming language or database system works because somebody did figure those out. And the people who did that work are the computing scientists. All this has been known and seriously contemplated for a long time. I think it's particularly well demonstrated by a comment in Peter Landin's classic 1966 paper ["The Next 700 Programming Languages"][landin66]: > > The most important contribution of LISP was not in list processing > or storage allocation or notation, but in the logical properties > lying behind the notation. Here ISWIM makes little improvement > because, except for a few minor details, LISP left none to make. > There are two equivalent ways of stating these properties. > > > (a) LISP simplified the equivalence relations that determine the > extent to which pieces of a program can be interchanged without > affecting the outcome. > > > (b) LISP brought the class of entities that are denoted by > expressions a programmer can write nearer to those that arise in > models of physical systems and in mathematical and logical systems. > > > If you understand this (which probably requires at least some intuitive understanding of the lambda calculus or similar), you probably realize that a lot of the problems we deal with today are still the same basically mathematical problems that were being investigated back in the '60s when we were first seriously investigating what a "programming language" really is and means. **On Working Programs** One can also look at this from the more narrow viewpoint of, "I just want to write a program and make sure it works." Even here this becomes math if you take as a constraint "I really do want to, as best I can, make sure it works." Dijkstra's [EWD303](https://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EWD303.html), "On the Reliability of Programs," makes this argument in detail. His summary: > > Reliability concerns force us to restrict ourselves to > intellectually manageable programs.
This faces us with the questions > "But how do we manage complex structure intellectually? What mental > aids do we have, what patterns of thought are efficient? What are > the intrinsic limitations of the human mind that we had better > respect?" Without knowledge and experience, such questions would be > very hard to answer, but luckily enough, our culture harbours with a > tradition of centuries an intellectual discipline whose main purpose > it is to apply efficient structuring to otherwise intellectually > unmanageable complexity. This discipline is called "Mathematics". If > we take the existence of the impressive body of Mathematics as the > experimental evidence for the opinion that for the human mind the > mathematical method is, indeed, the most effective way to come to > grips with complexity, we have no choice any longer: we should > reshape our field of programming in such a way that their methods of > understanding become equally applicable, for there are no other > means. > > > **On "Computing Science" versus "Computer Science"** Some amongst us, including the [University of Alberta](https://www.ualberta.ca/computing-science/about-the-department/department-history), find the more common name of the discipline slightly misleading and instead prefer to call it *Computing* Science. As <NAME> said in ["Computing Science at the University of Alberta, 1957 - 1993"](https://web.archive.org/web/20151019055830/webdocs.cs.ualberta.ca/~smillie/CompSci/DeptHist1993rev.pdf): > > The choice of the name "computing science" instead of the more > common "computer science" was deliberate in order to indicate that > computing rather than computers was to be the foundation of the > discipline. > > > Thinking about what we are wrangling with as "computing" rather than "computers" may help you remember that all the software running the world today is much more dependent on the mathematical tools we use to be able to effectively and accurately model our problems and the world than on the hardware on which it runs. Upvotes: 2 <issue_comment>username_10: Here in Poland there are a lot of math specialists, but very few IT specialists available at universities (due to the horrid salary difference, I guess). So you end up having way too many mathematicians who need something to do - and bam, you just over-burden the IT studies with math - raw, unprocessed math - without showing the connections and uses in computer science. And after 5 years of university you get students who have had 5 years of analysis, 2 semesters of geometry, another semester of statistics, and can barely program in C++, Ada and some Java or Python if they were lucky. Upvotes: 2 <issue_comment>username_11: All of these answers fail to describe something essential: Most jobs that involve writing code are doing the equivalent of **fabrication**, not *engineering*, and certainly not *science*. If this doesn't make immediate sense, it may help to understand the equivalent when working with classical materials. A scientist would study metallurgy and how to make new alloys. In engineering, one would evaluate how large a girder can be made from the material, or the limits of wear and tear in various scenarios. Fabricators would receive the material in the form of pipes, which they assemble to fit the needs of things like a kitchen, a bathroom, or maybe a whole house. A technician, like someone who works in HVAC or automotive, would take pre-constructed subsystems and fit them together with a bit of adjustment using fabrication.
Most careers that involve code are doing fabrication, or technician work. Increasingly, software jobs are technician roles. The jobs require continual awareness of new libraries and frameworks, and how to ensure their ease of assembly and configuration. But that's not what computer science schools are out there to teach. **You don't go to Oxford, or any Ivy League university, to learn how to be a fabricator**. If you went to a school like that, and learned you don't have the appetite or ability for science... that's the dice. > > The same thing goes for Fine Arts schools with concept studio core curricula. > > > It doesn't mean that programs teaching legitimate science should do less of that. Upvotes: 1 <issue_comment>username_12: I think the key thing that is missing here is the broader point. In the majority of cases, university degrees do not provide **training**, they provide **education**. Thus, a university degree teaches, in general, not how to do a particular job, but how to think, analyse and assess knowledge. The vast majority of all students who study for a degree will not go on to a job that directly uses the knowledge taught in that degree. Students with one of our degrees might be more employable as a side benefit, but it is not the primary goal of most degrees. (There are exceptions to this, like medicine or law.) Upvotes: 2
2019/10/07
5,706
24,012
<issue_start>username_0: I am applying for graduate programs this year. My undergraduate major is physics, while my intended specialty is astrophysics. I am applying to physics programs at most schools and astronomy programs at the others. One of the main factors that led me to apply to physics programs rather than astronomy is that there are far fewer student openings in astronomy graduate programs, due to the generally small size of astronomy departments. I have 4 recommenders in mind, and three of them are faculty in astronomy (two out of the three are theoretical astrophysicists). I wonder how much the affiliation of recommenders would matter in the consideration of LORs, i.e. do you think it is `inappropriate' to apply to a physics graduate school with three letters from astronomy faculty? (This concern recently came up since I read the requirement of one physics program that at least two of the LORs should be from professors *in physics*.)
2019/10/07
5,796
24,384
<issue_start>username_0: I have read several questions regarding the importance of a personal website when applying to Posdocs and other academic jobs. However, I have found little stating the importance of it when applying for a PhD. In general, how important is it to have a website as a PhD applicant? In my case, I have done a bachelors and master in Chemical Engineering. However, now I am doing a master in Pure Mathematics and I am planning to apply for a PhD next year. From what I have read about admissions, the focus is on the SOP and the references, a website is hardly ever mentioned. These are the things I would like the admissions committee to know but might be too much detail for the SOP: 1. I feel it is important to explain why the change of fields. But I don't want that to cover much of the motivation letter. So I think of having a short post on my website explaining in detail how and why I changed fields. 2. I would like to show how invested I am in the field (differential geometry). So I was planning on including a section with the conferences and schools I have attended during my master, and to showcase a student seminar I have organized at my university. I do not know how to put this information in my PhD application. So I think a website might be a good solution. Is this a good idea? what would be best for me to do?<issue_comment>username_1: Oxford University’s overview of their CS degree says it all: > > Computer Science is about understanding computer systems and networks at a deep level. Computers and the programs they run are among the most complex products ever created; designing and using them effectively presents immense challenges. Facing these challenges is the aim of Computer Science as a practical discipline, and this leads to some fundamental questions: > > > 1. How can we capture in a precise way what we want a computer system to do? > 2. Can we mathematically prove that a computer system does what we want it to? > 3. How can computers help us to model and investigate complex systems like the Earth’s climate, financial systems or our own bodies? > 4. What are the limits to computing? Will quantum computers extend those limits? > > > In other words, the language of computer science is math, not C++. If you were looking for vocational training in computers then CS Is probably an inappropriate choice. Upvotes: 7 <issue_comment>username_2: Well, you were in a computer **science** department and not in a computer **engineering** department. It would be a reasonable expectation that you would understand the fundamental mechanics of "computer stuff" and possibly continue your studies in research. I am not a computer scientist so examples may be limited here. * cryptography: Beyond understanding RSA schemes there are many interesting research areas and applications. For example [elliptic curve cryptography](https://en.wikipedia.org/wiki/Elliptic-curve_cryptography) uses serious level of mathematics or [homomorphic encryption](https://en.wikipedia.org/wiki/Homomorphic_encryption) uses encryption schemes imitating a mathematical concept of [morphism](https://en.wikipedia.org/wiki/Morphism) to process encrypted data without decryption (for example ordering numbers) * communications theory: [Coding theory](https://en.wikipedia.org/wiki/Coding_theory) is heavily mathematical. It has some subfields that require more than a basic understanding of linear algebra. 
I have heard that research in [error correcting codes](https://en.wikipedia.org/wiki/Forward_error_correction) is quite non-trivial. Their main property is to detect and correct possible errors in communication. This is likely extra helpful for applications where communications are more likely to have errors. Consider space, polar, or deep-ocean exploration. (A toy illustration of the detect-and-correct idea is sketched at the end of this thread.) On the theoretical side of coding theory, famously, someone used [algebraic geometry](https://en.wikipedia.org/wiki/Algebraic_geometry) to show that a theoretical upper bound for an invariant (I don't remember its name) was the best we could hope for. (This would mean there are codes that give this exact value for the invariant, as the upper bound predicts.) * Image recognition and machine learning also use serious levels of linear algebra. Serious in the sense that the intuition of 3-dimensional vector spaces and the ability to multiply some matrices would not be sufficient, as you would be comparing vector spaces of *very large* dimensions. I am sure a more literate person would be more helpful on this point. * [Haskell](https://www.haskell.org/) is a programming language based on a branch of mathematics called [category theory](https://en.wikipedia.org/wiki/Category_theory). I do not know the benefits of using Haskell, but some people seem to love it. I would say, however, that category theory is very nontrivial. An average student, after completing an undergraduate degree in mathematics, would have only a very basic understanding of it. It is highly conceptual, and its origins and most of its examples are usually graduate school material. Hence it would be really helpful to have a general mathematics background in order to relate to what is going on. Upvotes: 6 <issue_comment>username_3: There are two aspects of this and one of them is usually forgotten. The usual reason is that some parts of CS are dependent on knowing mathematics and how to use it. The [answer of username_2](https://academia.stackexchange.com/a/138169/75368) mentions some of them. But not all of CS is like that, and those working, say, in Human Factors or UI development probably use the math they learned much less than those studying algorithms or encryption. But the other aspect is also important. The study of CS is enhanced by knowing the way in which mathematicians think and work - the mathematical way of thinking - not just by having facts at your fingertips. Mathematicians tend to be analytical and precise, depending on clear statements and logical demonstration. This way of looking at problems and stating solutions is of use to a computer scientist. But there are a lot of other things that are also important in CS, so a broad education is valued, not just a math background. After all, many of us try to solve problems for people, not just for others in our own field. So, while mathematics is often useful in helping to develop the *how* of some solution, it is less useful in knowing *why* some program should or should not be developed. --- *Good mathematicians* are also very creative, though that quality is widely shared with people of other fields. But becoming good at mathematics takes some work. Both depth and breadth are needed. Upvotes: 4 <issue_comment>username_4: I made this mistake when choosing my major in college. Computer science is not really about computers, in the same way that math classes aren't really about using calculators or pencils and paper. 
Modern computers are just a tool used to make computing (the true focus of computer science) easier and faster. This gets confusing because the things we compute would be incredibly time-consuming, or at least incredibly tedious, to do manually, so we almost always fall back on programming computers to do it for us. This means that you probably will do some programming (maybe a lot) for a CS degree, but this programming won't necessarily prepare you to create and deliver high-quality software. My degree focused far more on studying models of computation and algorithms than on how to produce software. This is still helpful in software development, as knowing efficient algorithms for various problems is good when you're constrained by time or memory capacity. However, it does mean that a CS degree will not necessarily include training for software development, as that is not the primary focus. Upvotes: 5 <issue_comment>username_5: Here in Germany, the field "Computer Science" is called "Informatik", which, according to the [etymology of the term "computer science"](https://en.wikipedia.org/wiki/Computer_science#Etymology), is either a contraction of the words "information" and "automatic", or of "information" and "mathematics"... --- As others have already pointed out, there are many *direct* connections between computer science and mathematics, on different levels: * Linear algebra is important for many forms of modern machine learning: Neural networks are essentially just large matrices - or conversely, a [machine learning system is just a large pile of linear algebra](https://xkcd.com/1838/) ;-). Another (maybe obvious) field is that of 3D computer graphics: All the special effects in movies are just a bunch of triangles and the answer to the question of what happens when light hits a surface * Calculus is essential for complexity theory (which analyzes the running time of algorithms), numerical analysis (which is required for estimating the error of approximations), and many other topics * General (or "abstract") algebra is about *structures* and *rules* (or operations) within these structures. There is a strong (and in my opinion, severely underrated) connection of this to *Object-oriented programming*. * Logic is an important basis for what you might call "low-level" programming, even though the connection between knowing that you can safely turn an `if (!(a || b))` into an `if (!a && !b)` and formal propositional logic may not be obvious. Of course, far beyond that, there are even logic-based programming languages like [Prolog](https://en.wikipedia.org/wiki/Prolog). * ... There are many more *direct* connections, meaning that you come in touch with a certain branch of mathematics when you *apply* computer science in practice. But there are also *indirect* connections: Mathematics is a language for good descriptions. Mathematics teaches a form of clarity, rigorousness, and preciseness that is *necessary* in order to manage the complex IT systems that we are dealing with nowadays. What may be perceived as "nitpicking" elsewhere is crucial in order to make sure that these systems operate in the way that we expect them to operate. When you have ever written something like a software specification, and missed a corner case, then you know: People will find that corner case. And they will hate you for missing it... 
--- However, from a practical point of view, I totally agree: The things that most people with a degree in computer science nowadays have to do in their jobs are totally unrelated to mathematics (and also totally unrelated to programming, for that matter). And it's somehow a pity to imagine that many frustrated students drop out of their university courses (which they *might* have entered with wrong expectations about the subject) due to their bad math grades. These people could otherwise have been great at what they *actually* had to do in their jobs. --- My view here may be a bit narrow, because I only know the situation in Germany - even though I have observed the developments in this area for >20 years now. But you referred to the UK, so the following may still be relevant. I read this quite a while ago, and it somehow stuck in my mind. It's a quote from an [essay "On the fact that the Atlantic Ocean has two sides"](http://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD611.html), by <NAME> (yes, [**this** Dijkstra](https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm)): > > The first series of machines —that of the singletons— was mainly developed in the USA shortly after the World War II, while a ruined continental Europe had neither the technology, nor the money, to start building computers: the only thing we could do was thinking about them. Therefore it is not surprising that many US Departments of Computer Science are offsprings of Departments of Electrical Engineering, whereas those in Europe started (later) from Departments of Mathematics (of which they are often still a part). This different heritage still colours the departments, and could provide an acceptable explanation that in the USA Computing Science is viewed more operationally than in Europe. > > > That could explain it, to *some* extent, at least... Upvotes: 5 <issue_comment>username_6: Because it's Computer SCIENCE, and pretty much all science depends on math. Sure, there are things you can do with computers that don't involve much math (if any), like (AFAIK) implementing something like StackExchange. And you're quite correct that a degree in Information Technology or something similar would probably qualify you for a lot of jobs, without the need to learn anything beyond basic arithmetic. OTOH, there are a lot of jobs that do involve applying (what might from your POV be) fairly complicated math. For instance, what I'm doing these days involves applying numerical solutions to a particular class of partial differential equations (<https://en.wikipedia.org/wiki/Eikonal_equation>). Probably 90% of my career has involved similar levels of math. So it's all in what you want to do. Upvotes: 2 <issue_comment>username_7: Computer science involves an efficient problem-solving process, which can be developed through applied mathematics courses. At least, that's what employers and recruiters look for. Most of the time, algorithmic frameworks are based on mathematical logic. If you don't plan to use the CS degree in a design, engineering, or other applied role, you don't need to employ as much math. Upvotes: 0 <issue_comment>username_8: **Computer science is the *study* of computers and the underlying theory. Software development is the *use* of computers to... develop software.** This underlying theory is largely math... so by studying it, you are essentially studying (a small subsection of) math. Thus, there is a high proportion of math involved. 
**To your underlying question, however, academia is an often confusing and misunderstood place.** Only recently have people started realizing that a college degree is in many cases unnecessary, and in fact may be detrimental to your career path - in many fields besides just computer science (i.e. 4+ years of tangentially related studying vs. 4+ years of job experience). For software development in particular, computer science is really only a tangentially related field. If CS 201 were indeed unnecessary for CS 305, it would not be a prerequisite. If it did not contain useful knowledge, indeed it would likely be quickly cut out of the curriculum, or relegated to an elective. It is certainly absurd to think that universities would teach useless things for an extended period of time. **Math is very useful in computer science - regularly less so in software development.** Upvotes: 3 <issue_comment>username_9: Because *computing* science (see below), and even computer programming, *is* applied mathematics. <NAME>, a mathematics educator who was once a professional programmer himself, [has said](http://www.math.kent.edu/~edd/PROCESSFUNC.pdf): > > A person's mathematical knowledge is her or his tendency to respond to certain kinds of perceived problem situations by constructing, reconstructing and organizing mental processes and objects to use in dealing with the situations. > > > At a slightly less general level, consider what mathematics is. You choose or create a language in which you can express certain ideas, and then do symbolic manipulation according to a set of rules you've also chosen or created, in order to produce more valid statements in that language. If you're not careful to do this correctly, you may come out with invalid statements. The results you come up with may have some sort of application in the "real world" (e.g., I can use the language and rules of "integers" to help keep track of what people owe me and I owe them), or they may simply be work that helps you better understand the language and rules and how they can help you in further use of them. In many parts of mathematics we use particular symbols called "numbers" and have a large library of oft-shared rules and languages related to this, but there are other areas of mathematics that don't use numbers at all (e.g., category theory), or, though they can be applied to numbers, are not really about numbers *per se* (group theory, algebraic structures, many more). Even before you get into the study or use of particular algorithms and the like, writing a computer program is basically what I described above. Many of the "simplest" concepts in computer programming that we use every day, such as the idea of a function, are purely mathematical concepts. Now as you've seen, it's perfectly possible to attack real-world problems with these mathematical tools in a non-rigorous way and get useful results. Typically the results will not be truly correct (i.e., your programs will have bugs), but they will be "correct enough" to do the job. (For a well-written program in industry, you may never even encounter the situations that would demonstrate that it's incorrect.) That's what the discipline of engineering is: getting results that work well enough in the real world at acceptable cost. But even when you're doing engineering, much of what you do works well only because someone has gone and done enough mathematical heavy lifting to give you concepts and tools that you can use to do this. 
You may not have a really good understanding of what a function or a relation is, but your programming language or database system works because somebody did figure those out. And the people who did that work are the computing scientists. All this has been known and seriously contemplated for a long time. I think it's particularly well demonstrated by a comment in Peter Landin's classic 1966 paper "The Next 700 Programming Languages": > > The most important contribution of LISP was not in list processing or storage allocation or notation, but in the logical properties lying behind the notation. Here ISWIM makes little improvement because, except for a few minor details, LISP left none to make. There are two equivalent ways of stating these properties. > > > (a) LISP simplified the equivalence relations that determine the extent to which pieces of a program can be interchanged without affecting the outcome. > > > (b) LISP brought the class of entities that are denoted by expressions a programmer can write nearer to those that arise in models of physical systems and in mathematical and logical systems. > > > If you understand this (which probably requires at least some intuitive understanding of the lambda calculus or similar), you probably realize that a lot of the problems we deal with today are still the same basically mathematical problems that were being investigated back in the '60s when we were first seriously investigating what a "programming language" really is and means. **On Working Programs** One can also look at this from the more narrow viewpoint of, "I just want to write a program and make sure it works." Even here this becomes math if you take as a constraint "I really do want to, as best I can, make sure it works." Dijkstra's [EWD303](https://www.cs.utexas.edu/users/EWD/transcriptions/EWD03xx/EWD303.html), "On the Reliability of Programs," makes this argument in detail. His summary: > > Reliability concerns force us to restrict ourselves to intellectually manageable programs. This faces us with the questions "But how do we manage complex structure intellectually? What mental aids do we have, what patterns of thought are efficient? What are the intrinsic limitations of the human mind that we had better respect?" Without knowledge and experience, such questions would be very hard to answer, but luckily enough, our culture harbours with a tradition of centuries an intellectual discipline whose main purpose it is to apply efficient structuring to otherwise intellectually unmanageable complexity. This discipline is called "Mathematics". If we take the existence of the impressive body of Mathematics as the experimental evidence for the opinion that for the human mind the mathematical method is, indeed, the most effective way to come to grips with complexity, we have no choice any longer: we should reshape our field of programming in such a way that their methods of understanding become equally applicable, for there are no other means. > > > **On "Computing Science" versus "Computer Science"** Some amongst us, including the [University of Alberta](https://www.ualberta.ca/computing-science/about-the-department/department-history), find the more common name of the discipline slightly misleading and instead prefer to call it *Computing* Science. 
As <NAME> said in ["Computing Science at the University of Alberta, 1957 - 1993"](https://web.archive.org/web/20151019055830/webdocs.cs.ualberta.ca/~smillie/CompSci/DeptHist1993rev.pdf): > > The choice of the name "computing science" instead of the more common "computer science" was deliberate in order to indicate that computing rather than computers was to be the foundation of the discipline. > > > Thinking about what we are wrangling with as "computing" rather than "computers" may help you remember that all the software running the world today is much more dependent on the mathematical tools we use to be able to effectively and accurately model our problems and the world than on the hardware on which it runs. Upvotes: 2 <issue_comment>username_10: Here in Poland there are a lot of math specialists, but very few IT specialists available at universities (due to the horrid salary difference, I guess). So you end up having way too many mathematicians who need something to do - and bam, you just over-burden the IT studies with math, raw, unprocessed math, without showing you the connections and uses in computer science. And after 5 years of university you get students who have had 5 years of analysis, 2 semesters of geometry, another semester of statistics, and can barely program in C++, Ada, and some Java or Python if they were lucky. Upvotes: 2 <issue_comment>username_11: All of these answers fail to describe something essential: Most jobs that involve writing code are doing the equivalent of **fabrication**, not *engineering*, and certainly not *science*. If this doesn’t make immediate sense, it may help to understand the equivalent when working with classical materials. A scientist would study metallurgy and how to make new alloys. In engineering, one would evaluate how large of a girder can be made from the material, or the limits of wear and tear in various scenarios. Fabricators would receive the material in the form of pipes, which they assemble to fit the needs of things like a kitchen, a bathroom, or maybe a whole house. A technician, like someone who works in HVAC or automotive repair, would take pre-constructed subsystems and fit them together with a bit of adjustment using fabrication. Most careers that involve code are doing fabrication, or technician work. Increasingly, software jobs are technician roles. The jobs require continual awareness of new libraries and frameworks, and how to ensure their ease of assembly and configuration. But that’s not what computer science schools are out there to teach. **You don’t go to Oxford, or any other Ivy League university, to learn how to be a fabricator**. If you went to a school like that, and learned you don’t have the appetite or ability for science..... that’s the dice. > > The same thing goes for Fine Arts schools with concept studio core curricula. > > > It doesn’t mean that programs teaching legitimate science should do less of that. Upvotes: 1 <issue_comment>username_12: I think the key thing that is missing here is the broader point. In the majority of cases, University degrees do not provide **training**, they provide **education**. Thus a university degree teaches, in general, not how to do a particular job, but how to think, analyse and assess knowledge. The vast majority of all students who study for a degree will not go on to a job that directly uses the knowledge taught in that degree. Students with one of our degrees might be more employable as a side benefit, but it is not the primary goal of most degrees. 
(there are exceptions to this like medicine or law). Upvotes: 2
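To make the "detect and correct errors" idea from the coding-theory bullet above concrete, here is a minimal, illustrative sketch of the simplest possible error-correcting code: a threefold repetition code decoded by majority vote. It is only a toy, and the function names are purely illustrative; codes used in practice (Hamming, Reed-Solomon, LDPC) achieve the same protection far more efficiently, which is exactly where the heavier mathematics mentioned above comes in.

```python
# Toy error-correcting code: repeat every bit three times and decode by
# majority vote. One flipped copy per bit can be detected and corrected.

def encode(bits):
    """[1, 0] -> [1, 1, 1, 0, 0, 0]"""
    return [copy for bit in bits for copy in (bit, bit, bit)]

def decode(received):
    """Recover each original bit by majority vote over its three copies."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)

corrupted = list(sent)
corrupted[4] ^= 1  # simulate a noisy channel flipping one copy of the second bit

assert decode(corrupted) == message  # the single error is corrected
print("decoded:", decode(corrupted))
```

Even this toy shows the trade-off that coding theory studies rigorously: tripling the message length buys the ability to correct any single flipped copy per bit.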
2019/10/07
683
2,844
<issue_start>username_0: I'm soon going to apply to some M.Sc. programs in Europe and I asked my current supervisor at work - he was a professor until a couple of years ago - to write a reference letter for me. My function at the company is currently related to what he used to teach as a professor, so I thought it would be a good idea to ask him. He accepted, but added that he is extremely busy as of now and suggested I write the letter myself, after which he will read it. My issue is I don't really know how to write a recommendation letter that doesn't sound like the student wrote it :). Any pointers?<issue_comment>username_1: First, I'll note that a number of people here think that this practice is unacceptable. I'm not that strong on it, though I think your reservations are important. The supervisor didn't have you as a student, presumably, so can't speak to many of the qualities you will need for graduate study. All he has direct knowledge of is your performance at the job and your work ethic. That can be important, but isn't enough. Make sure that you also have letters from academics who knew you earlier. But, instead of writing a draft of a letter, you could just create a list of bullet points that include things that you want to remind the writer of. Things you did on the job. Contributions you made, some of which he may not be aware of. In this way you let the wording be his, but help assure that he doesn't forget the things you'd like included. It also eases his workload. But, it would be good to check with him on this before you finalize it. You could, in fact, take him a draft of such a list, complete or not, and ask if it is enough. Upvotes: 0 <issue_comment>username_2: As username_1 mentioned, the issue of a student writing their own LoR is controversial. You can read more on this webpage. Apart from ethics, I would like to point out the practicality. You will be writing a LoR for the first time and it is about you. You will probably overthink what to say, trying to be humble and not to undersell yourself at the same time. You probably won't have a good sense of which of your qualifications should be pointed out with regard to the field you are applying to. Overall, you will be doing something (writing a LoR) for the first time, in a way it was never intended, under a less than ideal situation. The result will likely be "meh". I can make a few suggestions. * Ask if you can help your boss with his work so he will have some time for you. * Possibly find another recommender. * Consider applying at a later date, giving your boss more time to work on a letter (perhaps in the spring term). * You can also try to get a third person (possibly a colleague) to write you a recommendation and let your boss edit it. It might bypass *some* of the issues. Not all. Ethics are still questionable. But it is an option. Upvotes: 1
2019/10/07
846
3,477
<issue_start>username_0: I am a PhD student in economics in the UK. Recently, I have been hearing how important teaching experience is to find a job in academia as a lecturer. One needs to submit teaching statements and may be asked about one's teaching philosophy. Can you please share your experience about how important teaching is in becoming a lecturer? I suppose being a teaching assistant and leading example classes is a big plus. **Of course**, only if the research part of the PhD is going sufficiently well such that there is time for teaching. What about schemes like the [TSP (The Scholars Programme) of the Brilliant Club](https://thebrilliantclub.org/the-brilliant-club-for-researchers/get-involved/) where PhD students visit schools from time to time and give mini courses to interested pupils? Does this count as being committed to teaching or is it neglected due to the low level of difficulty?<issue_comment>username_1: For a lecturer (defined generically) it would be very important anywhere. Most of your competition for any teaching position will include many people with experience. Some will have a lot. It is good to get it wherever you can. I have no experience with the program you point to, but it looks promising. But there are other ways. I'm surprised that you aren't getting some of that in your doctoral program, but that depends on funding. One way to easily get a bit of experience that may be open to you is to ask a professor who also has undergraduate teaching duties if you could give a lecture or two on some topic in that other course. When I was an undergraduate, my professor actually asked each of the students (very small classes) to deliver a lecture on some topic. It didn't give him the day off, however, as he was there and gave us some feedback on how we did. The first try can be pretty miserable, actually. Especially for an introvert. You can develop a teaching philosophy of sorts by watching the professors you admire and giving some thought to why you think they are effective. What is it that they do, not just in the lecture hall, but overall, that makes you appreciate them? You can even ask them about it. But many new PhDs have a lot of misconceptions about teaching and learning. The biggest misconception, I think, is to believe that students are all like yourself. They aren't. And you need to adapt to that if you want to be effective. Effective lecturing, for example, is only a very small part of teaching. --- However, since the term lecturer (descriptive, generic) may be different from Lecturer (an entry-level academic rank in the UK), your mileage may vary. How important it is for hiring into a specific rank depends on the university hiring. For some, teaching would be very important. For others only research really counts. For some even research isn't enough unless you attract grant funding for it. Upvotes: 1 <issue_comment>username_2: Even if the PhD research does not go as planned, I would strongly suggest you get some teaching experience, for a number of reasons: 1. "One learns best when one teaches", as a translation of the German "Am besten lernt man, wenn man lehrt". Teaching gives you a deeper understanding of many topics; it helped me a lot. 2. Character building: you as a person will benefit from teaching as well. Speaking publicly is not an issue for me since I taught during my PhD studies, and this had a really positive effect on my work outside academia as well. Upvotes: 2
2019/10/07
2,751
11,853
<issue_start>username_0: First author is a PhD student, as am I. All of the other co-authors are our supervisors. The first author doesn't want me to read the paper, because we are both PhD students and I might plagiarize (?!) her work. Can I report this person for unprofessional conduct to the University committee? Or how should I behave? More details/Update: My supervisors are having huge issues with this student also because now she doesn't want them as co-authors of other chapters of her PhD. As for me, the supervisors sent me a copy of the manuscript so that I could read it, but she discovered it and overreacted. She submitted that paper to a good journal but didn't keep me in the loop, so I had no idea about the reviews until our supervisors told me (again against her will). Now, she doesn't want to acknowledge my contribution to another paper. Because of the latter, my supervisors and I decided to collect evidence of her unprofessional conduct in the present and in the past.<issue_comment>username_1: Talk to your advisor. --------------------- Dealing with this sort of thing is what they’re there for. They’ll probably tell you that it’s fine, and maybe have a word with your colleague and/or said colleague’s advisor. In general, though, if you’re going to be putting your name on a paper, it’d be a good idea to have read it first - if it’s poorly written, it’ll tarnish your name as well as theirs, after all, and you might be able to see areas where you can improve it. Upvotes: 6 <issue_comment>username_2: My advice is to get a read of the paper or take your name off (and ask them not to use any of your work: tests, samples, etc.). NOTE: this does NOT mean setting yourself up as an editor or nitpicker or gatekeeper or the like. Just make sure the parts that are your work are accurate. As for the rest of the paper, just make sure that it is not so scientifically crazy you don't want your name on it (pretty low hurdle). Other than that, let the first author do her thing in terms of describing work, structure, wording, journal choice, etc. Obviously get your advisor involved. But if you don't see the paper, don't let your name be used (even if they are OK with that for them). Upvotes: 3 <issue_comment>username_3: There is a gold standard (codified in the [Vancouver Recommendations on authorship](http://www.icmje.org/recommendations/)) that every author individually vouches for the correctness of the entire paper. In other words, you can't ask a co-author to only read and write part of the paper, because they need the whole paper to vouch for its correctness. They wouldn't satisfy the criteria to be allowed to be authors. It also suggests a broken collaboration culture if there isn't even enough trust among co-authors to let each other read everything that's happening in a collaboration. But beyond that, the idea of not letting the co-authors read parts of the paper just doesn't make any sense: A paper is intended to be *published*, at which point it becomes available to everyone, not just the co-authors. Upvotes: 7 <issue_comment>username_4: > > Can I report this person for unprofessional conduct to the University committee? > > > Whoa, there! That would be a huge escalation. Talk to the person concerned, first. If that doesn't work, talk to your advisor. If that doesn't work, consider going higher. But don't start with the nuclear option, ever. Upvotes: 6 <issue_comment>username_5: Some journals now require or permit an *authorship statement*, which appears online alongside the paper. 
Writing one (honestly, of course) stating who did what would then make it harder for you to later claim work that you'd agreed in writing was someone else's. Whether this would satisfy a seemingly paranoid first author is another matter, but it may provide a route to a resolution, especially if driven by the supervisors. Encouraged by my examiners, I included such a statement in a list of publications in my thesis for all papers on which I was an author, though clearly this wasn't signed off by the other authors of the papers. Upvotes: 3 <issue_comment>username_6: While I agree with the major advice in [username_2's answer](https://academia.stackexchange.com/a/138192/4249) (you do not have to allow publishing a paper you have not seen), I feel it could be more concise (and I do not agree that you should agree with publication of anything 'passing' a very low quality bar). I agree with you that your situation is highly unusual and not good academic practice. I understand that you have discussed your situation with your advisers and they are generally supportive; however, I do not understand **why you or your advisers feel like you have no power over this student or the publication**. You have power over every publication containing your work, data which you have not made public, results of your experiments or your analysis: **you have the power to allow re-usage of your materials under your terms, and you have the power to demand proper attribution**. As others have correctly noted, you vouch for every publication you put your name on, and you **should not allow something you have not read to be published under your name**. I agree that reporting this behaviour as unethical to the University committee is very much over the top, and would suggest escalating it in small steps, but know that you do have a *nuclear option* available to you if all else fails. I would recommend going through the following general steps: * Talk to your advisers (you seem to have already done that). Explain that you are happy to share your data and contribute with your analysis of the experiments/results in a collaborative effort to publish. Explain that you are not happy to share your data, experimental setup or results outside of a collaboration before you publish them. Get their support for your opinions. * Talk to the PhD student in question. Explain the same as above: you are happy to *collaborate*. As the data is not yet published, you cannot share it with researchers you are not collaborating with. Explain that your potential contribution (data, experiments, analysis) is substantial and why you think it warrants an authorship. Then, explain that a researcher vouches for every paper they author. Explain that a sloppily written paper, or even worse, an erroneous one, will potentially damage your academic career as well. Explain that if there are any interpretations or conclusions in the paper discussion, you need to be sure you agree with them and support them before you attach your name to those claims. Finally, say that you are **happy to collaborate on those terms, and those terms only**. Explain that you cannot in good faith put your name on a paper which you have not seen, and do not give this person permission to re-use your materials (data, experimental setup - those do not have to be attributed, but still need to be legally obtained). Hope that the other PhD student agrees to those terms. * Talk to the PhD student in question more formally, involving your and their advisers. 
Repeat everything from above. Make it clear that you do not give permission to reuse your data, but are happy to collaborate (on regular terms, where your work is attributed and you are able to approve of the manuscript and suggest changes before submission). If, at this point, the PhD student still does not agree, make it clear that using your data without permission, or worse, your work, analysis or conclusions without attribution, would be grounds to request retraction if the work got published. You have just given the PhD student three choices: proceed to work in a proper collaboration with you; proceed with their work on their own, not relying on your data, results or input; or proceed to submit their manuscript relying on your data and input, getting involved in unethical academic practices, which will give you grounds to request retraction of that paper and potentially damage their reputation. * (if the paper gets published but not under the above terms) You may now consider first going through the University channels (with the support of your advisers), but if it has come to this, you now have the power to start biting back. This could be either the situation where the paper got published without your name but still using the data you did not give permission for, or with your name but without your prior knowledge. Say that your next steps are to contact the Editor in Chief of the journal where the paper was submitted, explain the situation and ask that the paper be retracted. At this point, the University might mediate somewhat (e.g. allow the person to attempt to retract the paper themselves to save face?), or there might not be much they can do. * Finally, get in touch with the Editor in Chief of the journal in question. Explain the situation: either that your data has been used without your permission, or that you never approved (or even saw) the manuscript on which you are a coauthor and do not approve of its submission and publication. **Provide some proof** (this will have required you to keep an e-mail trail of all the crucial points of this process, especially the part where you explicitly tell the student that you will not share your data unless you enter a proper collaboration, as well as where you ask for access to the manuscript before the submission). Hopefully, it does not come to this, and you find an agreement through one of the earlier steps I propose, even if that agreement is potentially not to collaborate. Even more hopefully, the PhD student in question realises how to work with other people and changes her opinion. Be friendly, be nice and be open. Behave. But do all of that with the knowledge that you do have power over your own work, and be firm on exercising that power if needed. Upvotes: 3 <issue_comment>username_7: Making my comment an answer: behaving quietly or reporting the student are not the only options. There are middle-ground cases where you can have a conversation with everyone involved and convince them that letting you read the paper is the correct way forward. You can convince them, e.g. by making sure that all the information is shared via writing (email) and with other important people (supervisors/academics) cc'd, thus putting you in a situation where you would be caught cheating if you were trying to plagiarize it. Hopefully that would convince the first author that you do not plan to act maliciously and gives them proof to react in case you do. 
Upvotes: 1 <issue_comment>username_8: The academic staff need to take responsibility for appropriate behaviours. Yes, basic expectations should simply be part of the PhD program training. However, it sounds like there may be a need for "coaching", or behavioural management, for the other staff involved. Academic staff should be reaching out to the University for support on this. At worst, there is a mental health issue here that similarly requires appropriate support. Bottom line: this is at root an HR issue, not one of processes and procedures. Upvotes: 1 <issue_comment>username_9: This is not normal. Your lead author is displaying multiple behaviors associated with mental illness and/or an attempt to cover up malfeasance. There needs to be an intervention, and she ought to be encouraged to seek counseling. Graduate school can be very stressful. People can crack. You and the other authors should act both to protect your interests and to get her some help, rather than pursue a punitive course, if at all possible. I did an intervention as an undergrad (a completely different situation), but the end result was that the person got help and no official punishment, although the officials were very helpful and assisted in said intervention. YMMV. Upvotes: 1
2019/10/08
1,115
4,861
<issue_start>username_0: I am looking into applying to a teaching position at a liberal-arts institution. The application page asks for an example of scholarly work. What are good examples of scholarly work? For the sciences, is this like published articles?<issue_comment>username_1: The best example is some sort of research. In many such institutions the best research is something that undergraduates can participate in. In some fields such as math this is easier, requiring less in the way of equipment. In others it may be impossible other than by, say taking a leave to go to CERN for particle physics. But the research should result in some sort of writing, even if not up to the standards of top journals. But at the other end of the scale is just "keeping up with the field" through reading and attendance (maybe with participation) at conferences. It can vary widely. In CS there are a number of regional conferences at which people participate. Most of the work done for these is about the teaching itself and how to do it effectively. They are very valuable as the field changes and grows. At a somewhat higher level is participation in an internet based study group that works on some set of problems in the field collaboratively and may produce occasional papers. But the collaboration itself is valuable for people at such colleges as it gives you access to a wider range of ideas that you can bring back to the classroom, which is, likely, your most important task. Running a study group on some topic with a couple of faculty and a few more advanced students is a good example. Read and discuss a few recent papers (or classic papers) and show the students how to approach learning about the arcana of the field. In some fields even non-scholarly writing may be "scholarly". If you teach writing, then writing novels will probably do. Becoming a popularizer of science and becoming known for it is usually recognized. Textbook writing - even workbook writing may be enough. In the best case, you get to set the terms yourself by proposing a course of development to the administration and then following it. Upvotes: -1 <issue_comment>username_2: For sciences, scholarly work means peer-reviewed publications. Conference papers count if they are peer reviewed. In other disciplines, it can mean different things. Upvotes: 3 <issue_comment>username_3: “Scholarly work” in academia generally refers to papers and books, with potentially other forms of formally released output (e.g., patents, or source code on a public repository) being included. In the sciences this expression is slightly quaint and not often used, but can be useful when one wants to speak not just about one’s published papers but about a broader body of work that includes other things. It’s also possible that some people would count other forms of written, but informal or less polished work (like a blog post, or your own highly prolific physics.se contributions), as “scholarly work”, but personally I wouldn’t, and generally I would be very careful about describing anything as scholarly work that I wasn’t sure the person I’m addressing would accept as such, particularly in a job application. (With that being said, your physics.se writing is really nice and says good things about you, so you should probably mention it somewhere, for example in your teaching statement). 
Upvotes: 4 [selected_answer]<issue_comment>username_4: I had a colleague (UK Lecturer in Computer Science) recently define scholarly work as academic work not involving any new ideas, but **new presentations and synthesis of established knowledge**. In particular, I believe they included: * (text)books * survey / overview / white papers on particular application domains, tools or research directions * (to a lesser extent) reviewing for journals I actually can't remember if they included things such as editorial duties for journals in this or not, but this short list should give a good idea of what kinds of things they considered under "scholarly work". This is very similar to [Dan Romnik's answer](https://academia.stackexchange.com/a/138199/4249) but with "standard" papers explicitly excluded. To elaborate a bit more, my colleague placed *scholarly work* as an activity (with the outputs as listed above) falling between *teaching* (where the outputs are graduate students\*) and *research* (where the outputs are peer-reviewed publications, newly developed technologies, etc.). \*I am not very fond of "graduate students" being called the output of teaching activities; however, the general feeling one gets from UK University policies is that they are product- and profit-oriented businesses (with tuitions as inputs and students with diplomas as an expected output), and less and less the charitable educational institutions which they are on paper. Upvotes: 2
2019/10/08
3,505
13,704
<issue_start>username_0: Hope this doesn't fall into the category of personal advice, but I just have a general question about academia today (particularly in Philosophy). I have read many articles on the internet indicating that it's extremely difficult, if not impossible, to find a tenure track position in a relatively highly ranked university (top 100 nationally, say) after completing a PhD in today's job market, even from a very highly ranked program. Is this true, and is it a bad idea to get a PhD if the only thing I'd be able to do with it is teach? Does the internet exaggerate how difficult the market is? Would I have to move across the country, potentially, to find a T.T. position somewhere? Thanks.<issue_comment>username_1: A PhD is a research degree; it’s a piece of paper that says “this person is a competent researcher”. If you want to go into a field where being a researcher is important (e.g. corporate R&D), then it’s worth your time to get one. If you’re not, it probably isn’t. Upvotes: 5 <issue_comment>username_2: You'll need to do your own research to answer most of these questions. > > Is it true that it's very difficult to find a tenure track position in a relatively highly ranked university (top 100 nationally, say) after completing a PhD in today's job market, even from a very highly ranked program? > > > I'm not familiar with philosophy, but in other fields: yes. ([Example](https://academia.stackexchange.com/questions/61845/finding-post-phd-employment-in-mathematics-how-difficult)) > > Would I have to move across the country, potentially, to find a T.T. position somewhere? > > > Again, not familiar with philosophy, but in other fields: yes. There's actually a good chance you'll not just have to move "across the country", but also "outside the country". [Example.](https://arxiv.org/ftp/arxiv/papers/0805/0805.2624.pdf) If you're unfamiliar with the job market in philosophy and can't find any good leads online, try asking philosophy professors at your local university. They can tell you more about what it takes to become one of them. If possible, talk to the graduate students in philosophy as well - they should have a more current perspective. > > Is it a bad idea to get a PhD if the only thing I'd be able to do with it is teach? > > > This is only something you can answer. Can you see yourself making a career as a teacher? If yes then this wouldn't be bad (as long as you're willing to take on the challenge of finding a job as a teacher). If no then you should avoid getting a PhD - it's not worth it. One more thing thing: whatever you choose, you're making a life-changing decision. Chances are, for most of your life, someone else (your parents/guardians) has been making these life-changing decisions for you. They can't do that forever, and neither can anyone else. At some point you become an adult and become responsible for your own well-being. Making these choices won't be easy, and it's often unclear what the consequences will be, but you're still the best-positioned person to make these choices. Do your research, sleep on it, make the decision, and deal with the consequences when they come. Upvotes: 4 <issue_comment>username_3: The 100 best universities or programs aren't the only places where research happens. In fact, doing research will be part of most jobs at universities, regardless of whether that university is in the top 100 or not. 
If you choose a career in academia you can expect to have an initial phase where you have to move from university to university to pursue postdoc positions and the like followed by a more stable phase. You will be unlikely to end up at a top university, but research will be part of your job, just as teaching and management duties. However, there is no guarantee that if you choose to pursue a career in academia that academia also chooses you. In particular, the number of open positions, and thus the degree of competition, tends to be quite cyclical; there are periods where it is easy to start a career and periods where it is very hard. Luckily the cycles don't always match up across countries. So many work around bad periods by moving abroad. Upvotes: 4 <issue_comment>username_4: You need to think about this extremely important question: **Why do you want a Ph.D.?** There are good reasons and bad reasons to pursue a Ph.D. You need to understand that a Ph.D. isn't a badge saying you're smart, but rather an apprenticeship towards becoming a professional researcher. Good reasons to get a Ph.D. include: * You genuinely love philosophy and want a career working with it * You want to become a professional researcher * You've looked into what a typical academic career looks like and decided it's the kind of life you want to live Bad reasons to get a Ph.D. include: * You want to prove you're very intelligent * You want the prestige that comes with an advanced degree * Everyone around you seems to be going into grad school * It just seems like the most obvious next step after undergrad These "bad reasons" aren't bad in the sense that you're a bad person if they're what you're thinking. They're bad in that getting a Ph.D. locks you into a very specific and frankly demanding career path. You don't want to be halfway into your Ph.D. before you realize that you don't really enjoy the work you're doing. You're very right that the job market for academic research positions is incredibly competitive. Your fears that you'll likely have to move to find a good job are right. Your expectations that you'll be able to get a job in a top 100 university, while possible, are going to be extremely difficult to achieve. Do you want to be a researcher so dearly that you're willing to accept those downsides? If so, then getting a Ph.D. is the right decision. If not, there are many other things you can do with your life that are just as worthwhile, just as honorable, and likely more along the lines of what you personally value. --- I decided myself that I didn't want a research career. After I finished my master's degree, I left academia for industry. I am very happy now and can't imagine how stressed out and miserable I would be if I were forcing myself to go through years of a Ph.D. just to be able to say I had done it. This was my personal decision. You need to make this decision for yourself. Upvotes: 4 <issue_comment>username_5: Yes, it's a bad decision to get a PhD in any humanities discipline if you value things like any degree of stability. 
Something like 50+% of humanities PhD's will never get a tenure-track professorship ("there's jobs in research" and "the skills and discipline you learn during your PhD program are 'transferable to other areas of industry'" propaganda you'll hear from your advisors and graying professors notwithstanding) -- which is the only job you're reasonably qualified for after most likely being six figures in debt and burning most of your younger years slaving away for minimum wage and trying to get published in obscure academic journals [which almost NO ONE ever reads -- ditto your thesis, which will most likely promptly do nothing but collect dust once completed]. Please take a few minutes to read the likes of William Pannapacker's two-part series: "Graduate School in the Humanities: Just Don't Go" before you decide to commit many years and thousands of dollars to something that could very well be the antithesis of an activity designed to produce long term satisfaction and fulfillment. It still amazes me after all of these years and the availability of information on the Internet that "career students" STILL commit themselves to years in a PhD program without doing even a minute of common-sense evaluation as to its long term career feasibility. Upvotes: 2 <issue_comment>username_6: There was an article called "should you go to West Point" by an old colonel. His short answer: "probably not". (The long answer was the measured, negative article.) So...short answer "yes, probably a bad idea". Better off getting an MBA. This is a little bit different if you are in a field (e.g. pharma) where large amount of Ph.D. hiring occurs. But corporate central research is under pressure as well. In some cases, you may find that things are not quite as bad. For example if you are (really truly) in that upper 10% (maybe even upper 5%) of grad students. Then you have the potential to win the tournament position. But otherwise...look at music or acting. Supply and demand reduces the earnings of all but the superstars to a low level. Of course if you're happy raising a family on a postdoc salary, fine. Some people are, (mostly immigrants, in the US). Also if you really have the goods...than fine...go for it. You don't quite need to be Albert Einstein, either. But you should have a pretty damned good feeling that you are better than almost every single classmate. Don't expect reasonable rewards if you are in the middle of the bell curve (let alone the left side). P.s. Moderators, don't delete this answer. OP needs to hear different views, not just the "it's gonna be OK" view. Edit: Just saw that you are going for a philosophy Ph.D. Yikes. Very little use of that in industry. I would be very concerned about the supply/demand picture. The one good thing is you don't have grants incentivizing masses of lab students (as cheap technicians). But you also will have a worse funding situation during your degree and almost no industrial positions to apply to. Just go research the numbers on graduates and job postings. Upvotes: 2 <issue_comment>username_7: You can approach this as a math problem. I'll demonstrate with some round numbers, modify the numbers if you have more realistic estimates. Say the average tenured career lasts 30 years (start at 35, retire at 65). Say the average department has 20 tenure track faculty. So in the top 100 you'll have 67 tenure track job openings per year. Random Google source says that approximately 500 doctorates are awarded in Philosophy every year. 
So the first year, you have 500 people applying for 67 positions. The second year, you have 500 + 433 = 933 people applying for 67 positions. And so on, until people start just giving up. Say the average person is willing to spend 5 years trying to get a tenure track position before they give up and do something else instead. So there will be 2232 people applying for 67 positions every year (five cohorts of 500 graduates, minus the roughly 67 hired in each of the previous four years), or roughly a 3% chance that one person will get any position in one year. Or over five years, one person has a 15% chance of getting any position. Even if you are twice as likely to be hired in any year as average, your chances over 5 years are still only 25%. (A short script reproducing this arithmetic is sketched at the end of this thread.) Upvotes: 1 <issue_comment>username_8: If you want to be an academic philosopher, then getting a Ph.D. is a *good* idea because it is required. But it is no guarantee; you have to be prepared for a high probability of not ending up in academia at all. It is a gamble. Academic job searches are generally national or even international. Every metropolitan area has lots of openings for doctors, lawyers, accountants, and chefs, but only a handful (or fewer) for philosophers or theoretical physicists. If the thought of having to move across the country for a job is a problem for you, then maybe you don't want to be a philosopher badly enough to justify the time and effort of the Ph.D. On the other hand, the experience of getting the degree is not all bad. If you really like doing philosophy, then it's a chance to spend five or six years doing something interesting, in hopes of possibly making it a career. If that career doesn't work out, you are close to where you would have been if you were just getting out of college. This is all assuming you can get TA-ships and possibly (if you're lucky) fellowships to support you. If you have to pay tuition the whole time then it's really not worth it. Upvotes: 2 <issue_comment>username_9: This may border on 'opinion'... I have built businesses since 1989 for entrepreneurs and I found the following to be true: 1. There are no self-made men/women. 2. None of the 'super-rich' have college degrees (from Asia to the U.S.A.). 3. The 'super-rich/entrepreneurs', however, do prefer to employ 'paper' holders. Getting a PhD early in life may limit your job opportunities. It may actually hinder your path to 'riches' unless you want to do politics (or a hot political topic...). Upvotes: -1 <issue_comment>username_10: My personal opinion is that you don't take a degree of any kind and then see what you can do with it, but rather the opposite: first you need to know what you want to do in your life, what makes you enjoy it, etc., and then you do what it takes to achieve it, including taking a PhD if needed. In my case I have an MSc in Computer Science, and while I've always loved computers since I was a kid, and I also have my small personal IT projects once in a while, I find myself happy with my daily work challenges + playing and creating music during my free time. I don't have a PhD in CS nor any degree in music; the knowledge that I have is enough for me to achieve what I want + when I miss any knowledge I know where to find the resources to learn what I need (learning of course never ends when you finish a BSc, MSc or PhD, it's a constant thing). Therefore if a PhD will help you to achieve what you want, it's - of course - a great idea. If you are not sure if it will help you or you are actually sure that it won't, then why would you spend a significant amount of your life on it? 
You'll be better off (in my opinion) doing other things that you like the most. Each one of us has his/her own needs. Upvotes: 2
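The back-of-the-envelope model in username_7's answer above can be reproduced in a few lines. The sketch below is not part of the original thread; all figures are the answer's illustrative round numbers, not real data.

```python
# Rough check of the tenure-track model sketched above (illustrative numbers only).
new_phds = 500                   # new philosophy PhDs per year
openings = round(100 * 20 / 30)  # ~67 openings per year across the "top 100"
years_trying = 5                 # years a candidate stays on the market

pool = 0
for year in range(1, years_trying + 1):
    pool += new_phds             # new graduates join the applicant pool
    print(f"year {year}: {pool} applicants for {openings} openings")
    if year < years_trying:
        pool -= openings         # the hired candidates leave the pool

p_year = openings / pool         # ~3% chance of being hired in a given year
p_five = 1 - (1 - p_year) ** 5   # ~14% over five years of trying
print(f"per-year chance ~{p_year:.0%}, five-year chance ~{p_five:.0%}")
```

The compounded five-year figure comes out slightly below the answer's 15%, which uses the simpler 5 x 3% approximation.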
2019/10/08
1,540
6,711
<issue_start>username_0: In general, the job of a lecturer (under Australian terminology- I think the terminology is different in America) can be divided up into two main areas of focus: research and teaching, with the latter involving writing up lesson and assessment plans, managing your tutors, delivering lectures to hundreds of students and maybe some tutorials to a few dozen students, marking and moderating assessment, etc. My understanding is that the people hired for these roles are often PhD students or postdocs who have also worked as tutors during their time as PhD students/postdocs - their tutoring gives them teaching experience, while their work as a PhD student or postdoc gives them research experience. However, because tutorials are generally much smaller than a full lecture, and they would often be following someone else’s lecture notes rather than writing their own, I was wondering if it might be advantageous for someone in those positions to get some experience preaching in a church? Much like a lecturer, a preacher would need to write their own lesson plans, and then deliver the content of those lessons to an audience of hundreds of individuals, in a format fairly similar to that of a university lecture. As a result, I could see an argument about why delivering Sunday sermons would be advantageous for developing some of the skills a university lecturer would need. However, I’m not certain if it would actually work in your favour if you were to list that experience on your CV. And so, my Question: would a typical young researcher who is seeking a lecturer role find it advantageous to list experience working as a church preacher? While religious discrimination would of course be illegal, might they find it working against them?<issue_comment>username_1: Would preaching at a church help you to become a better lecturer? Almost certainly, as would any similar experience. During the later stages of my PhD I worked in a very popular small museum giving tours and answering questions from the public. I always tell my students that it was the best way for me to hone my speaking abilities: I really learned how to catch attentions and to convey information in a stimulating way. Further, answering hundreds of questions every day (some of which were strange to say the least) prepared me to diplomatically deal with whatever questions are tossed at me after lectures, talks, and conference papers. If preaching does similar for you, you'll see benefits. Would putting preaching at a church on your CV help you land an academic job? No, except in the rare case of that being directly relevant to your teaching post: e.g. I had a housemate who did a PhD in religion, taught at a missionary training school, and landed a job at a religious affiliated Uni in the religion department...they would benefit from preaching on their CV. Just as my museum experience was considered "not lecturing", preaching would be the same. You're simply not drawing on the same pedagogical focus, nor are you teaching towards markable learning outcomes. Edit to answer your final question. No, I truly doubt you would face any discrimination for mentioning preaching on your CV. You would, however, face panel members who would fail to see why such information is relevant on an academic CV. I didn't put my museum job on my academic CV when I was an early career researcher because it wasn't related to my academic work and was, thus, tangential. Upvotes: 5 <issue_comment>username_2: Yes and no. 
It could even be a terrible hindrance. Personally, it will help you develop communication skills and give you experience in reading an audience and modulating your voice. BUT it could hinder your career and damage your reputation. As @username_1 mentioned, that sort of experience would, at best, not be relevant. At worst it would be seen as a direct affront to science, progress and academia. After all, humans can split atoms and re-engineer genes, yet here is a primitive 'scammer' who has been selling 'the opium of the masses' to people to keep them like controllable lambs ready for the slaughter. Yes, I know it sounds extreme and exaggerated, but it's not unrealistic, and opinions on religion vary across countries, institutions and individuals. Nonetheless, churches are independent, lucrative organizations normally not associated with universities. Moreover, in many countries it is illegal by law to preach religion in (public) schools, and the school authorities might see you as a possible violator of those laws. Another reason for not counting preaching as experience, or at least not mentioning it, is that the audience is quite different. Religious people don't go to church to learn and be educated, nor do they ask questions or start debates on the topics, as happens in classrooms, academic conferences/symposia or seminars. So the experience would not be relevant. Rather than churches, perhaps you could consider teaching classes in your field of study on weekends, even as private lessons or as a consultant. That DOES count as experience. Upvotes: 2 <issue_comment>username_3: Well, any public speaking helps you develop your lecturing skills. Thankfully, as a PhD student, you will usually have more than enough chances to train this specific skill - through teaching, but also through giving research talks, speaking at seminars, and possibly at public events. Additional speaking experience outside of a university setting may of course help, but how much is a different question (and also depends on how much you still have to learn in this particular dimension). > > would a typical young researcher who is seeking a lecturer role find it advantageous to list experience working as a church preacher? > > > Not really, no - but not because of religious discrimination. Firstly, despite the name, the practical skill of "lecturing" isn't key in the assessment of future lecturers in the first place - your research agenda and overall teaching evaluations carry significantly more weight than whether you are extremely good at this individual element of teaching. Secondly, you seem to be overestimating how different it really is to speak in front of a few hundred people in contrast to 20 or 40 (something that any PhD student has typically done routinely during their studies). A larger crowd may initially be intimidating, but in my experience this feeling passes relatively quickly, and then it really does not matter if you are talking in front of a handful of people or a full classroom. In that sense I can't imagine that whether you have been preaching or not makes any real difference to a selection committee. Upvotes: 2
2019/10/08
619
2,656
<issue_start>username_0: I'm applying for faculty jobs and was considering including sample syllabi for courses I could develop (which are currently not offered at the target institution). My idea was to include them as an appendix to my teaching statement. Would this be a good idea? Are there any pitfalls I should look out for? In particular I'm thinking of US liberal arts colleges, but I would be interested in views on more research-focused universities as well.<issue_comment>username_1: I used to be faculty at a US liberal arts college, and am now at a research university. I think the appropriate decision here depends mostly on their instructions. Follow what they requested. If they do not request certain kinds of materials, then including them might be an annoyance because it requires more work organizing and (hopefully) distributing your application to a committee. Any difficulties in that process can downgrade your application. If they ask for a 1-page teaching statement, then I would recommend not including syllabi. If they ask for a 'teaching portfolio', then you could include them. In either case, you could post them online and link to them with a URL in a shorter document. Finally, I think an entire syllabus might be too long for an application that doesn't ask for it, but you could consider a 400-word summary. Upvotes: 0 <issue_comment>username_2: I take it that the positions you are applying for don't specifically ask for such samples. In some cases, hiring committees will look at only what is asked for, nothing more. This is to protect the committee's time and to ensure that each candidate is considered in equal light. While you likely won't be penalized for offering more, don't be surprised if the extra pages end up in the trash can before the committee sees them. Upvotes: 1 <issue_comment>username_3: Sadly, your attempt might be interpreted by some as implied criticism that they aren't already offering such courses. There are some people and some institutions that don't really welcome change or even growth. "This is how we do it here. Get used to it." (The "smart ass" was implied, not spoken.) I don't think many would admire such attitudes and I hope that most would welcome alternate ideas and experience, but I've actually been in such situations and learned that the best way to survive until I escaped was to just keep my head down and STFU. I hope you don't encounter that, but it happens. In an application, give them what they ask for. And if you find yourself at such a place, plan your escape. And at any place, take some time to evaluate the local culture before you try to make many changes. Upvotes: 0
2019/10/08
1,559
6,763
<issue_start>username_0: **Edit**: I believe this is not a duplicate of [Are there instances where citing Wikipedia is allowed?](https://academia.stackexchange.com/questions/19083/are-there-instances-where-citing-wikipedia-is-allowed), because that question asks about citing Wikipedia for knowledge that is very *basic* but is nonetheless highly *relevant* to the core subject of the paper. My question is about citing Wikipedia for something that is, by contrast, unrelated to the core subject matter of the paper, and thus is trivial in the sense of being unimportant rather than trivial in the sense of being common or basic knowledge. Many previous questions on this site touch on citing Wikipedia, but none of them seem to me to address this specific issue. I am writing a statistics paper. The focus and original work of the paper is abstract and theoretical. To introduce some of those ideas, I have a silly little thought experiment involving golf, and choosing which properties one wants one's club to have. To be clear, nothing about the paper's actual focus is relevant to golf. The golf example is just a helpful, concrete way to introduce the perspective I am taking in the paper on certain statistical techniques. To flesh out this example, I need to be able to describe the various options one has when choosing a golf club (head material, shaft length, etc.). I have learned from Wikipedia everything I need to know about these options in order to describe my silly little thought experiment. Would it be inappropriate to cite Wikipedia in a case like this? Would reviewers/readers raise an eyebrow at my doing so? Do I really need to spend time reading primary sources on various golf head materials just to be able to include this silly thought experiment in my paper?<issue_comment>username_1: No, its not inappropriate, but is a bit dangerous since wikipedia is a moving target. Things there change, though for your example, probably not enough to be a large concern. But you should, at least, give the date of access as well as the link to the article (or the paragraph within the article). Note also that over time, things can change a lot. Articles printed on paper in journals distributed via sneaker-net have a long lifetime. Centuries, perhaps. Will wikipedia still exist when you are ready to retire in 30 years (say)? Perhaps it will. But a lot of online social media sites have simply disappeared in the past 10 or so years. That isn't necessarily a reason not to use such resources, but you should be aware that the world as it looks today will change a lot over your professional lifetime, so be prepared to speak to future readers, not just the current ones. Make it universal and eternal if you can manage it. Exceptions, of course, if you are explicitly discussing current forms of media. Upvotes: 2 <issue_comment>username_2: Yes, you should look at the golf literature itself. It is relatively accessible (not like drug design). You can find good review articles on club construction and selection. Because you are mostly looking for an example for your method (versus really analyzing the golf clubs), I think it's fine to just pick one decent golf club review article (even a popular one, or from a manufacturer) and use it, versus totally analyzing the literature. For example if one article has 6 attributes and another has 7, just use whichever you choose. Also, of course, make it clear in your own text that you are just doing an illustrative example, not writing a definitive article on club selection (e.g. 
a caveat that some other articles use a different number of attributes). Statisticians should have the ability and the inclination to skim literature from many other fields. You don't need to become an expert. But you do need to have some level of outreach. Consider if instead this were oil exploration or drug design (more momentous business decisions). Note that this habit of dipping into the literature of other fields not only keeps you grounded in those fields but will tend to give you ideas and insights for applications of statistical methods. If you stick with Wikipedia, you are relying on user-generated content that is not well archived. In addition, the cite will be distracting to your readers and reduce trust. Since it's just an illustrative example (you could pick anything), there's no big need/plus for introducing a Wikipedia citation. If you show too much of a "who cares, it's a silly example" attitude, it will distract the reader. This is not just an issue of the Wiki cite, but of your general approach to the illustrative example. And no, I'm not expecting a master's thesis worth of research on the example problem. But still be thoughtful about the problem selected (field, clarity, references, etc.). If you are not, it will be distracting to the reader and reduce trust/interest in reading the details of your method or thinking about how it (or you) can be useful in applied settings. Upvotes: 4 [selected_answer]<issue_comment>username_3: The real question is whether you should cite anything at all. You're not writing an article on golf clubs. You're writing an article on statistics. Is your example so complex that you need explanations upon explanations about why you're choosing this parameter, and that, etc., and if a reader doesn't know all this then the example is not understandable? Is it really that important? Then you're wasting your reader's time with a digression that's largely irrelevant to your scientific work. Is the example simple enough that a very vague knowledge of how golf works is sufficient to understand it? Then why would you want to cite anything? I don't need a scholarly article to explain that if I were to choose a golf club, I would need to consider its length, its weight and so on. It's pretty much common sense. Do I know more than that? No. Do I care? No. Is a statistics article the place to learn about such things? No. Upvotes: 3 <issue_comment>username_4: There is a paradox and an important issue to tackle here. > > very basic but is nonetheless highly relevant to the core subject of the paper > > > If you cite Wikipedia, make sure you cite the exact version you used, as Wikipedia is a moving feast, as username_1 mentioned. However, if it is so relevant, why is it so difficult to find a more reputable source to cite for it? Sometimes it is practical knowledge and sometimes it is "professional knowledge" that needs to be addressed in your citation and discussed. To claim the citation as self-evident because it is from a wiki is problematic, though. Wikipedia is not an authoritative source. Maybe contrast the wiki with other sources to explain why this highly relevant yet trivial point is not easily accessible in the literature? Upvotes: 1
2019/10/08
3,054
12,793
<issue_start>username_0: I'm a PhD Student in the US and am going to a conference for poster presentation soon. I just realized that my poster will be right next to my former advisor's poster (To be precise, it's his student's poster). To explain the history with the advisor, I worked with him for about a year. But he was emotionally abusive and unsupportive. I had panic attacks whenever I had a meeting with him. After a year in the program, I quit and applied to another program again. This second program has worked out really well, and I'm very happy with my current advisor. The separation with the former advisor seemed okay at first (he wasn't upset or angry about my decision at least when I talked about it. He sounded like he understands my decision). But then, after the conversation, he started ignoring me, which means that he wasn't okay at all. The thing is that I'm now having a panic attack and become very anxious after realizing that my poster is right next to his. It seems unavoidable to run into him at the conference. I'm really stressed out as this reminds me of all the traumatic events I had gone through. But at the same time, I want to use this opportunity to overcome this anxiety associated with him. Can anyone give me advice on how I should react when I see him at the conference? How can I react in a professional way? Should I just ignore him? I can say hi to him, but I'm concerned that he will ignore me (This happened previously, and likely to happen again). Any suggestions or advice will be appreciated! **UPDATE: Thank you all for very helpful suggestions and support! I think it's too late to relocate my poster. But after reading all the comments, I start to think maybe I just misinterpreted his neutral response or indifference as something negative or aggressive. Maybe I was overreacting initially. After reading everyone's comments, thinking about it for a couple of days really helped me reappraise the situation. Thanks all!**<issue_comment>username_1: Ignoring him and refusing to respond to any uncomfortable advances or comments might be best. You don't owe him anything, especially the satisfaction of making you feel uncomfortable. Focus on your poster and on interacting with those who are interested in it. Do what you should do anyway and make some professional contacts at the session. There is no need to do anything more than if the next position over was by someone unknown to you and whose work you aren't interested in. You have just as much right to your space as anyone else. Dominate that space and try to let the rest go. With practice such uncomfortable encounters, that may occur from time to time with others, will become easier to manage. And by "dominate that space" my intention is beyond the confines of a poster location. --- There is an outside chance that the conference leadership would respond positively to a request to be moved. "I'd be more comfortable elsewhere" is all you need for an answer if asked why. Upvotes: 3 <issue_comment>username_2: > > I'm really stressed out as this reminds me of all the traumatic events I had gone through. But at the same time, I want to use this opportunity to overcome this anxiety associated with him. > > > Are you having profesional help and are you sure you want and able to manage this without affecting your performance in the conference? Your desire to overcome is reasonable and probably is the right thing to do. 
However, overcoming emotional trauma or stress triggers might require gradual work and most likely professional help. I just worry you **might** end up worse than you are right now. > > Can anyone give me advice on how I should react when I see him at the conference? How can I react in a professional way? Should I just ignore him? I can say hi to him, but I'm concerned that he will ignore me (This happened previously, and likely to happen again). > > > The answers you will receive here most likely cover the professional aspects of this interaction. I guess anything in the range of "smile and wave" to "small talk and questions about the poster" would be appropriate. But again, I really believe it is up to you and a medical professional to decide to what extent you are comfortable and emotionally ready. Upvotes: 3 <issue_comment>username_3: This doesn't answer your explicit question, but have you considered sending an email to the conference organizers asking for your poster to be reassigned to a different part of the poster presentation area so that it's not right next to the former adviser's poster? This could go a long way towards minimizing the possibility of trouble, and will probably make it easier for you to handle the task you have set yourself of overcoming the anxiety you have related to meeting him at the conference, while leaving you some wiggle room to maneuver in case that turns out to be more difficult than you expected. The email doesn't have to go into details, just say this is a person you've had some conflict with in the past and you would prefer for your poster not to be close to theirs. If I were an organizer I'd be more than happy to accommodate such a request. Good luck, I hope things go smoothly and that you're able to focus on sharing your work with others and having fun. Upvotes: 6 <issue_comment>username_4: Be natural ---------- Greet him the first time you meet him, shake his hand, say "how's the family," or whatever suits the circumstances. For you, what happened is water under the bridge. You've parted ways and you're now under a different advisor. He's got no power over your career. Even if he tries to subtly insult you (which I think he won't, because, as you said, he has simply been ignoring you since), there is **absolutely no reason for you to feel bad** about it. If he indeed does so, that would be **pathetic of him** and you should **feel pity** for him. Worry about the important stuff and the people that matter to you, not about grown-ups whose behavior is stuck in 6th grade and whom you've only known for a year or less. Do NOT ignore him! ------------------ Ignoring him will only bring more awkwardness to the situation and more stress to **you**. Imagine the set-up. You're both going to be in the same place for quite some time (a couple of hours?) and in close proximity; you'll both be standing in front of your posters and there are going to be times when there won't be anyone around asking questions. Actually, there might be moments where he's going to be the only one around. Trying to ignore him will only add more pressure to **you** ("oh my god, we've just had eye contact! aaaargh!"). Ask for help ------------ Having been through similar situations with my PhD advisor in the past, I agree with Noah's comment that you need to get help to overcome your panic attacks. It doesn't necessarily need to be a therapist; it can be friends, family, trustworthy people you feel comfortable discussing it with. 
You need to learn to manage such situations because you'll get many of these in your life. To move forward, you need to stand on your feet and handle them, **not** ignore them and pray they won't happen, because they will happen anyway. Upvotes: 5 [selected_answer]<issue_comment>username_5: **Tl;Dr**: Relax, you won't encounter them. Probably. So let the stress you feel now influence you only if they actually come near your comfort zone. And have a talk with your current advisor. You are neither the first nor the last one with such an issue. --- I suppose this is not the first time the conference you are about to attend has been organised. I also suppose you are not the first PhD student of your current advisor, nor the first student in the department. Therefore there is some history to get information from. So relax, calm down and DON'T PANIC! (written in large friendly letters) Ask your older schoolmates, ask postdocs. You can ask your advisor. The question is: how are the posters and other contributions organized? There is a high chance the professor won't be there at all, and an even higher chance they will be somewhere else most of the time. Usually only one person per contribution gets the academic discount; the others have to pay the full price. You can ask for a list of presenters and look for the professor. From my personal experience, my advisor never attended the conferences I was attending, although he was one of the authors of the poster/oral presentation. The department head did not interfere with their PhD students' presentations unless they were caught passing by and specifically asked. In the only case I've witnessed where a professor had a poster presentation (because my poster was next to theirs), I saw them twice during the poster session. The longer time was when we discussed our papers for a couple of minutes; the rest of the time they were looking at the other posters. "My" department also applied the policy that a PhD student presents a poster at their first conference (to see how conferences work) and gives oral presentations at the following ones. - You can also look at how the presentations are evaluated. Don't take it badly, but they are more likely to be looking for something more important than mocking you. You are no longer their pet and you have no obligation towards them, besides basic ethics, obviously. Upvotes: 2 <issue_comment>username_6: Focus on yourself and your actions, not his. If you see him, be polite. Smile and say, "hi, it's nice to see you!", or just simply "hi". Act in a way that makes you proud to be you. The way he decides to react is not up to you. He can choose to be polite and have a conversation. He can choose to ignore you, and be super rude and turn around and walk away. You will know that you behaved well and did your best. That is all that matters in life. If he's a rude person at heart, people already know that. If they see him be rude to you, they will know it's him and not you. If anyone asks "what was that all about?", say "I don't know, maybe he's having a bad day." Or, be honest and say that you think he might be upset because you changed advisors (but that could open up conversations that are too personal and could harm his reputation.) In any case, don't be preemptively rude yourself "just in case he's rude". Also, try not to assume the reasons behind his actions. It may be true that "he wasn't okay at all". Or maybe he's a very busy person and decided to put minimal effort into your relationship because you two no longer have anything to do with each other professionally or personally. 
Of course, you might be right about the "why". Maybe he's not ok, and he's insulted or has his feelings hurt. No matter the case, you should still be pleasant. Just be the best You that you can be, and these sort of situations will work themselves out and you can be at peace knowing that you did what you could. Upvotes: 2 <issue_comment>username_7: Short and hopefully practical advice: 1. **Try to get the conference organizers to re-position your poster or his student's poster.** When you give a justification, just tell them he's your former advisor and that you parted on poor terms (you don't need to start explaining what he did or what you did etc.) They are likely to oblige. 2. If the organizers don't move your poster, **consider moving the poster yourself**: Very often posters are mounted on mobile boards - move yours someplace else; or just find an empty slot elsewhere and do it. 3. **Ask a friend or a colleague to stand with you near the poster** during the poster presentation time slot. While in practice that friend may not need to do something, it definitely helps with the pressure and anxiety to know that someone "has your back" vis-a-vis that advisor guy. 4. **He will likely not spend a lot of time near the poster:** It's usually the student who stands near the poster, entertaining passers-by. The advisor might be there for a while but not for long; and while the advisor is there, it'll probably be busy (e.g. poster presentation time slot). 5. **Don't completely-ignore him, but rather mostly-ignore him.** If you outright ignore him, that's paradoxically a bit confrontational and may elicit a reaction and interest out of him. Instead, say hello to him, politely, but without engaging in any conversation. If he asks you something trivial (e.g. "Are you presenting this poster?") give a brief answer ("Yes, I am.") without continuing the conversation. If he tries to engage you in a more serious conversation, tell him something like "I'm sorry, but I have to stay focused on my poster right now." When you're standing next to the poster, don't focus your attention on him - but at the same time, don't actively try to avoid seeing him. This should be the most "disarming" behavior, minimizing the chances of a clash or argument. Upvotes: 1
2019/10/08
608
2,570
<issue_start>username_0: I recently had a concussion. I got everything on paper signed by a doctor, and emailed my teachers immediately. During my medical absence, I missed a quiz. I emailed my teacher regarding making the quiz up, and he said "that's what the quiz drop is for." However, I had already used my quiz drop due to the flu. I told him I don't think it's fair that I have to get a zero for one of the quizzes when both instances were out of my control, but he just stopped responding to me. I have followed up twice. What should I do?<issue_comment>username_1: Your professor has probably decided that make-up quizzes are time consuming and not worth it -- students need to learn that sometimes missing an appointment has consequences etc. I'd recommend checking if the professor is willing to drop 2 quizzes instead of just 1 for the whole class or for anyone who has a doctor's note. That way you are working with your professor's plan rather than negating it. Upvotes: 2 <issue_comment>username_2: It depends on your institution. In UK universities, the process for handling multiple missed assessments due to medical reasons is normally conducted by a designated panel of academics, rather than being at the discretion of an individual course convenor. In general, there is a limit to how far an academic course will accommodate missed assessments (it would be absurd for somebody to be granted credit if he/she had missed 100% of the assessments, no matter how compelling the reason), and it sounds like you have breached it. In your response to the issue, you need to distinguish between: * your feeling that the limit is too harsh; and * the fact that breaching the limit was not your fault. It sounds like your course convenor has determined that he/she is willing to disregard one assessment, but not more. If the reason for that policy is that he/she considers that disregarding more than one assessment would give an inadequate evaluation of your acumen, then your best recourse may be to ask for the whole course to be disregarded, and to be given an opportunity to start it again from scratch. Upvotes: 2 <issue_comment>username_3: Check your school's rules regarding missed assignments due to medical conditions. If they say you are excused as long as you present proof, then go to the department offering the class with the doctor's note and the email you sent the professor plus his response, and they will have another professor give you the quiz. If they don't, then escalate and go to the school's administration. Upvotes: 0
2019/10/09
469
1,929
<issue_start>username_0: I've recently submitted a paper for review to a reputed math journal. The main paper is just 2 pages (everything else being in the appendix), where I set up a new problem; the central result I left unproven. (I said the proof is yet to be done.) I did not send this paper in a hurry; I had spent a good amount of time and done my best to prove it, but couldn't. The paper cleared editorial screening (after being there for a few weeks) and went to "Under Review". What could be the reason for it to go to "under review" when there is no proof that has to be reviewed? I'd like to understand: what are all the things that are "reviewed" for a math paper that is "under review"?<issue_comment>username_1: There's a good chance the journal is getting confirmation that it is indeed a new problem. The fact that it's new to you does not mean it's actually new - perhaps you've simply not seen the paper(s) that stated and maybe even solved the problem. The journal could also be confirming whether the problem is actually interesting. It's not so difficult to [come up with a new problem](https://academia.stackexchange.com/a/51943/84834), but coming up with an interesting new problem would be something else. Upvotes: 7 [selected_answer]<issue_comment>username_2: Why *wouldn't* they review your paper? All reputable journals review all papers that they consider publishing. The fact that there's no proof to check doesn't mean there's nothing to check. Arguably, in a mathematics paper, it means there's more to check: why should they accept your paper if it doesn't prove anything? The fact that you put everything in an appendix doesn't mean it doesn't get reviewed. The journal will be publishing that appendix, so they want to know that it's OK. (Otherwise, everybody's next paper would be "Abstract: [blah blah] Introduction: See appendix." and publishing would get a whole lot easier.) Upvotes: 4
2019/10/09
1,172
4,999
<issue_start>username_0: After some decades, an event in my research area was announced in my city, so I decided to send my abstract, which was accepted. However, they recently sent me the programme, and among the keynote speakers was a person whose doctoral thesis contains indisputable plagiarism. (About 30 pages of his thesis are just a translation of two papers written in English by other scholars into the language in which the thesis was written.) There are no legal charges against this lecturer. Instead, the plagiarism was revealed by an apparently anonymous source about two or three years ago through a mailing list. I know that at least some of the organisers of the event were on that mailing list, so they are not unaware of the charges and have had the chance to test whether they were true or not. I have considered that maybe the best option is to withdraw my lecture from the event, explaining exactly why to the organisers. This, though, would not help at all, since they already know and do not care. Otherwise they wouldn't have invited him as a **keynote speaker**. Another option that somebody suggested to me is that I use a few minutes of my lecture to publicly denounce him. (It just happens that my regular lecture starts right after his keynote lecture finishes.) But there are three problems with this: 1. When publicly denounced, this lecturer threatened to take the source to court for defamation. (He is also a lawyer.) I currently have neither a job nor a regular income that would allow me to face legal proceedings, let alone bring them against him. 2. It is very likely that most of the people who are going to the event already know. In that case, and since this is by now old news, I do not expect that there would be any significant reaction. 3. I am about to enter the phase of defending my thesis, and this lecturer works in my institution as an invited professor. Unscrupulous as he is, he may try to retaliate against me by using his power to delay my thesis defence. (I do not know how much power he has.) Finally, you may be surprised at how a whole academic community could care so little about this case. But I have to say that when the news about his plagiarism came out, I myself didn't dare to speak publicly against his behaviour, because of what I said in the third point. Still, it was surprising to me that, although I know that several professors disapproved of his actions, apparently none of them publicly expressed their disapproval. I later learned that this wasn't the first time something like this had happened in my institution. I'd like to know what is the most ethical, or least unethical, thing to do in this case. One thing is for sure, though. I do not think it would be right for me to go to that event and speak after him as if he were a respectable scholar. And I am not going to. Thanks in advance for any advice.<issue_comment>username_1: What really matters here is what your goal is. Is your goal to: (A) cause there to be some sort of disciplinary action/reaction against this person; or (B) make sure that your own name is not tarnished by association with this person in the future? If you are hoping for (A), I believe that you have answered your own question. It does not appear that any action will be taken against this person based on anything you do in relation to this conference. The discipline is covering this up and everyone is going along with it, and as an early career researcher, you are not influential enough to change this. It's sad, but it's true. However (B) is entirely possible for you. 
If you believe that one day this person will be publicly shamed and denounced fully by the discipline, it is not unreasonable for you to not want your name associated with theirs. This might mean withdrawing from the conference and specifically asking that your name be removed from any material online about the event. Indeed, you don't even need to say *why* you are pulling out of the event if that includes a risk that it will get back to the person in question. In this situation, I think (B) is a reasonable concern and a reasonable reason not to speak at a conference. I also think that you are only tangentially associated with this person, so you are unlikely to experience any negatives from speaking after them at the event. Ultimately, with the risk of association low, it comes down to your own ethical stance and whether you feel sharing a stage with this person violates it. Upvotes: 4 [selected_answer]<issue_comment>username_2: Your question is really silly. You are going to attend a lot of events in your future career. Are you going to check whether every keynote speaker has plagiarised in any of their works? Secondly, it's not your obligation to bring this plagiarism to light in public. There is a system, and the advisor and review committee of the plagiarised thesis are the ones who should really be concerned with this. It's their business/headache. You are asking: should I take on other people's headaches? Upvotes: -1
2019/10/09
1,065
4,354
<issue_start>username_0: In the context of my thesis research, I have recovered the data of some important old papers whose data is sometimes referred to by the names of their authors, but which are never explicitly cited. I thought it important to reference all the data from these papers somewhere, but if I were to include them in the bibliography/references section, I might give the impression that I have read them. How should I point to these works, which I do not need to read, but which some readers of my work might be interested in reading? I know the answer might depend on the citation format I'm using or the type of publication, but I haven't seen any explicit guidance for doing this kind of citation. Thanks in advance for your help.<issue_comment>username_1: No one standard has been set for citing data sources. However, a quick search shows that citing data sources is generally done the same way you cite a paper. See [here](https://guides.nyu.edu/c.php?g=276966&p=1846656), [here](https://datacite.org/cite-your-data.html) or [here](https://libguides.lib.msu.edu/citedata). These links provide guidelines and examples. The following quote is from the first link: > > Remember that the purpose is to help your reader re-trace your steps -- more information is better than less! > > > which I think is a good guideline. In order to ensure that you do not give the impression that you fully read a paper, you can simply do something like: > > The data used is obtained from [citation] and has been preprocessed... > > > A further discussion on whether or not you should cite a paper you did not read can be found [here](https://ori.hhs.gov/citing-sources-were-not-read-or-thoroughly-understood), but that does not specifically address using only the data from a paper. Upvotes: 1 <issue_comment>username_2: You skim those papers to verify that the data is indeed there and then you cite them as the source of the data. If you have the time or they are otherwise relevant, read the papers to check that the data is suitable for your purposes and that there are no glaring problems there. Sometimes, everyone cites something because everyone else has been doing so, but the citation might be wrong (in detail or completely). 1. An explicit citation is useful, because it lets you and others verify the result. 2. Checking that the cited paper includes what you suppose it includes is even more helpful, since it might just be a rumour that the thing is included there. 3. Reading more of the paper would be even more useful, since it might have qualifications that have been omitted by the citing literature, or methodological issues, or whatever else. Mentioning these with the citation would be good. 4. Some papers that cite the original might contain errata, important qualifications, etc. Hence, it might be worthwhile to check what cites the classical papers, especially if there is something by the same author(s) on the same idea. I think all of the steps above increase the value of the work, but all of them also take time. It is up to you to decide where to cut off the process and how far to go in vetting the data to be used. But consider also the next new (PhD) student who would like to use the same data. What would you like to have known or had when using it? If it was non-trivial to track down the data, then certainly provide a reference, for example, so that the next generation will have an easier time. 
Upvotes: 3 [selected_answer]<issue_comment>username_3: Essentially you cite the sources you use, something like: "...Jones's equation, as described by Smith[1]..." if you only read Smith's textbook. However for important results, it's worth really trying to track down the original, skimming it, and citing it properly. It may well take some tracking down; don't leave it until the last minute in case you need an interlibrary loan of a paper copy. As an example, a couple of the results referred to in my thesis were published in German. My German isn't up to fully reading poorly scanned papers, but I can check that a figure caption is what it claims to be, or decipher the context and terms of an equation. I therefore cited both the German paper and one that explained and applied it nicely in English. If you can possibly get your hands on a copy of the original I suggest you do something similar. Upvotes: 1
2019/10/09
566
2,479
<issue_start>username_0: About six months ago I submitted a paper to one of the highly reputed IEEE Transactions. I have observed from different papers published in that journal that the average time for receiving the first review is around 3 months. That is why I politely inquired about the status of the manuscript around 4 months after the date of submission by sending an email to the associate editor (AE). The AE promptly replied about the status and said that two out of three reviews were still awaited. However, it has now been six months and I still do not see any change in the status of the manuscript in the online portal. So, my question is: is it too soon to send a second email to the AE asking about the status of the manuscript, and could such a second email irritate the AE to the point that it might negatively impact the review process of my manuscript?<issue_comment>username_1: I think it should be fine to inquire (politely, of course) at this point. While it may be a minor irritant, it should not affect the outcome of the acceptance decision provided the journal follows ethical practice. If you have a decision to make about the paper it is important to get the information. If it is just to ease your uncertainty then you could also let it go a bit longer. But there shouldn't really be a downside. Maybe a note would get the editor to prod the reviewers a bit. Upvotes: 2 <issue_comment>username_2: It is not rude to ask. It is rude to not provide an answer, either positive or negative. If they ignore you and just don't answer then they are wasting your time. Do inquire again politely, asking for a response, be it positive, negative, or a date by which you should expect one or inquire again. Don't let them NOT answer; send a copy not only to the AE but also to their marketing or PR team or the closest equivalent. But do check first whether there is a policy about response times on their site or elsewhere, in which case you can mention it. If they don't answer, ask again openly on social media. You can also ask other authors how long it took them to get an answer, so you can reference that too; however, at this point you should already be looking for another journal for your paper. Things are changing, and traditional institutions and media are finding out that they don't have all the power anymore. You are not a beggar; you are a contributor, and your work and time should be respected at least enough to get an answer. Upvotes: -1
2019/10/09
440
1,920
<issue_start>username_0: We published a paper in a theoretical CS conference last year, and were hoping to submit it to a journal when we found out that our result was improved in a followup work. While it's clear that there are no ethical issues with submitting the extended manuscript to a journal, its results are inferior to the recent followup paper. * **Would we get the same consideration as if the new paper does not exist (i.e., comparing the results to what was known before our original publication)?** * **Assuming that the answer is no, is there even a point in submitting it?** To be clear, we do mean to cite the newer result; my question is about how likely our paper is to get accepted under the circumstances.<issue_comment>username_1: Unless you have something new to add, it isn't likely that the journal will want to publish your longer version. You could submit it, of course, but it is no longer especially "novel" following the "better" result. In CS, of course, your conference proceedings article stands on its own. Maybe it is best to continue the work and try to leapfrog the newer paper so that you have something additional to say in the follow up. Any reviewer knowing the whole story might make the same comment. Upvotes: 2 <issue_comment>username_2: This is a perfectly normal timeline in theoretical computer science: 1. Result A is published in a conference. 2. Building on result A, someone else publishes followup work B in other conferences. 3. A polished version of result A is submitted to a journal and published there. The journal paper refers to result B and also discusses how follow-up work B has already improved on the results of the paper. Here the fact that there is already follow-up work is a *good* sign that will help you to convince the reviewers that this is an important result that deserves to be published in a good journal. Upvotes: 3 [selected_answer]
2019/10/09
2,300
9,984
<issue_start>username_0: I received a request from the sponsored projects office person at a well known university, asking me to peer review a grant proposal that was in preparation at their university. The idea was for me to help their researchers to write a more successful grant proposal. I felt that it was something of an abuse for them to even ask me, and they were damaging their university's reputation by making such requests of people. They wrote back to correct what they termed my inaccurate and/or disagreeable assumptions and conclusions. They said in effect that I am a bad academic citizen for not being willing to help them with this. My questions: Is this a reasonable thing for universities to ask? Or is it unreasonable or even damaging to their reputation to be soliciting this kind of help? [Edited for length. If you want to see the original email exchange, look at the edit history.]<issue_comment>username_1: Science, and academia more broadly, is and should be a community effort. Of course, we should all work to support the common cause of advancing and disseminating knowledge. Science funding is a different story. Funding agencies establish a competitive process for different researchers or groups of researchers. It is hard for me to imagine any good alternative to a competitive process for funding, but we can't deny that any specific funding competition is a zero-sum game. Thus we as scientists unavoidably relate to each other in two fundamentally different ways: As collaborators in a global scientific quest and as competitors. (It's not just funding that forces this on us: For example, we also unavoidably compete for job opportunities.) Universities have a direct financial interest in having their researchers win competitions for grants. So they put resources into improving their researchers' grant proposals. One can't fault them for doing so. But we should be candid about what the universities are specifically trying to do when they put resources into improving their own grant proposals: They are trying to win more grants. If they also contribute to the global effort of scientific research because grant proposals get better, great, but that beneficial effect is better understood as arising directly from the competitive process. If [Unnamed university] wins more grants, then undeniably, some other institution wins fewer grants. That is the university's goal, and specifically the goal of [A particular Office at Unnamed university]. But that is not my goal, and in fact opposed to my goal with respect to funding competitions. I object to being asked to help [Unnamed university] win more grants, and I strongly object to the insinuation that, because I object, I am not a good citizen of the scientific community. It appears to me (through very unscientific departmental-lounge polling) that others agree with me. The reputation of [Unnamed university] has suffered greatly in my eyes because of this issue, and if, as I suspect, others feel the same way, [Unnamed university] is hurting its own reputation more broadly by acting in this way. Upvotes: 3 <issue_comment>username_2: Let me give a scenario in which it would be proper. I would be skeptical of other scenarios that aren't essentially similar. Suppose that I'm an experienced grant writer but am now retired. Suppose also that I'm not an active reviewer for funding agencies, though have done so in the past. I have a lot of knowledge that it would be good to pass on. 
Suppose a university, either the one I've retired from or any other, approaches me with a proposal to train their young researchers and to help them actively by reviewing a current proposal and giving feedback and advice on it. Suppose they also offer to pay me a non-token amount for my efforts. I would think that, then, it would be fine to agree. This is really no different from a commercial company asking me to train their employees in some state of the art technology for which they need advice. As with the company you have to remove the possibility of a conflict of interest. If I were not yet retired, then it gets a bit sticky. If it were my own university and I was "paid" by having other duties reduced for a while, then fine. If it is another university and I'm not retired then it is even stickier. To do a decent job of it takes time and effort, which should be compensated. But it is also very difficult to avoid conflict of interest scenarios. And, for an active faculty member, especially one who is still developing grant proposals or who is reviewing for agencies it becomes ethically "interesting", to say the least. Among other issues, the requesting university would probably want a "non disclosure agreement" for the work. What effect might that have on your own research program? If I review a grant informally and offer improvements and then am asked to review it formally on behalf of an agency, I have a conflict. I would ethically have to decline the formal review. The worst case is one in which the one asking for this service initially is actually trying to *create* such a conflict, guaranteeing that I could not be a formal reviewer - an attempt to take me out of the picture. I don't suggest that this is what is going on in this specific case, but it would put a cloud over the whole practice. So, I think that most scenarios are problematic. It is very different from the normal practice of reviewing papers, because reviewing is done on behalf of a third party: the journal or conference. And it is also different from informal cross-reviewing of papers within a circle of collaborators as it becomes a cooperative venture in that case. Upvotes: 1 <issue_comment>username_3: It does strike me that this is an improper request to make of active faculty at a different university, for reasons much as you say: funding is a competition, and you have your own people (and yourself) to look after. If you were retired, and disengaged from the university at which you'd worked, sure, why not take a larger view? "A rising tide lifts all boats", and all that. But, while you are "still in the game", it literally is that your competitors (or your students' or colleagues' competitors) are asking you for "insider information", in effect. Considering that universities' administrations visibly do not care so much about altruistic "greater good", but about external funding dollars, to consult externally in such a manner is obviously construable as sabotaging your own students, colleagues, department, and university. It is completely unsurprising that the reaction to your objection was on the order of "oh, come on, you're being a selfish jerk". Because they'd like to bully you into helping them, etc. No-brainer. Sure, if they can swindle people into helping their own competition, they have greatly succeeded, and will be happy. They have zero motivation to not do it, because (as you saw) they can package it as some mythical "greater good" thing, even while screwing over their competitors, whenever possible. 
Probably the larger point is that to engage with people embarked on such projects is futile and inevitably frustrating, because they've already made certain decisions, which often seem to include rationalizations about how exploiting/cheating other people is simply "good business" or is "clever" or ... something. So your only serious error is to engage with them and spend time and mental energy! :) Don't let opportunistic jerks fool you! Upvotes: 2 <issue_comment>username_4: You've reached the correct conclusion, that you have no obligation to help researchers at another university improve their grant proposals and that you have better things to do with your time, but based on the incorrect reasoning that it is unreasonable and abusive for the other university to ask for your help. It is reasonable (though probably pointless, as I suspect they will soon discover) for them to ask, and it is more than reasonable for you to say no. That's all there is to the story really. As for the suggestion that they are "abusing the system", to the very minor extent that there is an abuse here, it is a self-limiting type of abuse, a bit like a beggar standing on a street corner asking for money. Some people may give them money, and that is their prerogative; you certainly don't have to if you don't want to. Upvotes: 1 <issue_comment>username_5: The odd aspect here is that this request comes to you from the grant support office of the other university, rather than from the researchers themselves. It is perfectly normal to ask colleagues at other universities for feedback on your grant proposals. I regularly supply such feedback for colleagues. Why do this when there is a finite pot of grant money? * First of all, this is just being nice to people. Being nice has benefits of its own, in that people will be more likely to be willing to help you when you need it in the future. * Second, typically the researchers you will be helping are part of the same (sub)field of research as you. By helping them, you are essentially helping your (sub)field to take a larger slice of the funding pot. There are many ways this can be helpful to you in the future (job/hiring opportunities, increased funding opportunities, etc.). Of course, this makes little sense if you yourself are fishing in that particular funding pot, in which case you have a good reason to deny such requests. The really strange aspect in this case is that the request came from the grant support office at the other university. This almost completely negates the first listed reason to comply with requests for feedback, as it diminishes the networking aspect. The second reason may or may not apply, but the fact that the request did not come from the researchers directly implies that you did not already have a professional relationship with them, which reduces the likelihood that they are in the same (sub)field. Upvotes: 2
2019/10/10
2,373
10,246
<issue_start>username_0: I am working in academia in a very wide field of research mixing a lot of different domains. This broadness is also a problem, since there are no proper references for the basis of the theory, and we have to deal with a lot of material scattered in very different places, often in the form of lecture notes, conference proceedings or unpublished documents. Therefore, I would like to propose a collaborative online list of these references, in a light way so that anyone could add or improve the list. The best current list of references is [here](http://www2.math.ou.edu/~kmartin/afrefs.html), which is neither up to date nor really easy to use (no tags, listed by author, etc.). I would like something slightly more powerful; I thought about a framework like Django but don't really know if it is a suitable solution. I would like anyone to be able to * sort in different ways * add tags * add or edit reference entries * rate difficulty (this is a bonus) * comment (this is a bonus) Moreover, it would be better if the list were accessible without an account, on a simple webpage. Any suggestion on how to build such a list is welcome!
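To make the requirements concrete, here is roughly the kind of data model I have in mind. This is only a rough Django sketch; every model and field name below is a placeholder, not an existing codebase:

```python
# models.py -- rough sketch only; all names here are placeholders
from django.db import models

class Tag(models.Model):
    name = models.CharField(max_length=100, unique=True)

class Reference(models.Model):
    title = models.CharField(max_length=300)
    authors = models.CharField(max_length=300)
    year = models.PositiveIntegerField(null=True, blank=True)
    url = models.URLField(blank=True)                    # lecture notes, preprints, proceedings, ...
    kind = models.CharField(max_length=50, blank=True)   # e.g. "lecture notes", "unpublished"
    tags = models.ManyToManyField(Tag, blank=True)
    difficulty = models.FloatField(null=True, blank=True)  # averaged user rating (bonus feature)
    added_on = models.DateTimeField(auto_now_add=True)

class Comment(models.Model):                              # bonus feature
    reference = models.ForeignKey(Reference, on_delete=models.CASCADE)
    author = models.CharField(max_length=100)
    text = models.TextField()
    posted_on = models.DateTimeField(auto_now_add=True)
```

Sorting, tagging and a public read-only page would then be ordinary Django views on top of this; whether that is really lighter-weight than a shared BibTeX file or a wiki is exactly the kind of feedback I am hoping for.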
2019/10/10
815
3,152
<issue_start>username_0: I'm an American student currently earning a Master's in American cultural studies in Germany. Why study American culture in Germany? Firstly, I wouldn't have been able to afford a U.S. Master's in this field, and secondly I'd like to learn/improve my German with the ultimate goal of (possibly) pursuing a PhD in comparative literature. However, since classes have started here, I'm beginning to regret my decision for the following reasons: A) classes are incredibly large, direct contact with professors is scant, and so, among the obvious academic disadvantages, I'm ultimately worried I won't be able to get good letters of rec. B) even at the graduate level, many students don't take their studies seriously (probably because there isn't the pressure of student loans and because education is quite open here [a good thing]) C) classes are not as rigorous as they were in my B.A. literature program, and so I don't feel like I can grow as a researcher/scholar, which is my goal Does anyone have a similar experience applying to American PhD programs after doing a Master's abroad? Should I just stick it out and try my best to befriend some professors and make the best of my time here? At this point, I regret not applying directly to American PhD programs sans Master's degree since my undergraduate grades and professors were quite good. Thanks in advance!<issue_comment>username_1: > > A) ... I won't be able to get good letters of rec > > > Are you required to write a dissertation or conduct a large project or similar? If so, then the supervisor of that work will be well positioned to write a letter of recommendation. > > B) ... many students don't take their studies seriously > > > So what? > > probably because there isn't the pressure of student loans > > > Isn't there? How do students fund their studies? > > because education is quite open here > > > What does that mean? > > C) classes are not as rigorous as they were in my B.A. literature program > > > Was your previous university more prestigious? That may explain (B) and (C). Upvotes: -1 <issue_comment>username_2: You may have made a mistake, but it isn't a fatal one. You need a path forward. First, your undergraduate professors can still have credence in writing recommendations, but you need to keep in contact with them so you aren't forgotten. They can probably give you advice now on your current situation. I assume that you are early in your European studies. Abandoning and starting over in a US doctoral program might be an option, though perhaps as last resort. The B) issue can, perhaps, help you with the A) issue. Professors might be a bit tired of lazy students and could possibly appreciate an approach from a student who wants to do more, both helping with the letters of recommendation issue and your C) problem. But, it won't work if you are passive. You need to take charge. If one of your current professors does something interesting, you could make an approach and offer to help in some way. This solves all the problems. But don't just "stick it out". Take charge. Upvotes: 1
2019/10/10
2,416
10,225
<issue_start>username_0: I'm a graduate student in applied math (applied probability, asymptotic etc.). I have written several articles on either solving "small" problems, or developing particular methodology/techniques towards certain type of problems. I'm at a stage where I have lots of ideas and if I want to I may be able to work out the details and write a few papers based off of them. But none of these things are anywhere near significant much less ground-breaking etc. My advisor said a really good PhD is supposed to make ground-breaking contribution in just one problem/direction, instead of doing petty works in three and put them together to form a "pseudo-thesis". **I know what exactly what he is referring to but have no clue where to start to searching for those ground breaking ideas**. My advisor does not like to suggest ideas/problems to me because he said he doesn't know what would be ground-breaking either or else he would do it himself than gift it to me. As of now, all my ideas do not seem to carry much strength in the sense that they do not withstand much investigation or turn out to be "paraphrasing" classical paradigms. I suspect that I may need to probably read beyond my field because I reckon everyone in my field pretty much is familiar with the same set of literature and has a similar way of thinking. So they have exhausted great ideas that are possible under that mindset over the decades and only leave relatively small "fruits" for me to pick up. My advisor said it may be good to read quantum physics as he himself is reading that and sees "lots of potential" in cross-disciplinary investigation. It would be a big investment for me because I don't know nothing about physics beyond Newton's law. But I'm in my second year so there is still time for lots of things and I think I'm tempted by the idea of learning physics and do cross-disciplinary research, although I'm keenly aware of the risk. Is learning a completely new field a good pathway towards big discovery? **Should I take the risk as a PhD student?** How do people stumble upon big discoveries? Is there general guiding principles behind this? Thank you.<issue_comment>username_1: Have you ever taken a risk (scientifically)? Tried something not in books? Invented a method (even if it later turned out to not be novel)? Built a crazy device? Played around with chemicals? Tinkered with your computer outside of the usual routes - trying to get something under the hood to work? If yes, and some of it was successful (nobody ever succeeds in everything), go ahead and start something risky, such as what you describe as overlap of Quantum and probability. Else, you have not much experience with risk, so you probably should take it a bit more carefully. You have to learn whether you can trust your scientific instinct or not. One more suggestion: go to meetups or conferences. This is one of the best places to get fresh ideas from. Upvotes: 1 <issue_comment>username_2: What you are describing is certainly not good advising, unless you have cherry-picked some particularly negative quotes and taken them out of their context. It is true that monolithic and groundbreaking theses are regarded higher than "stapler theses" (3-5 papers combined into a thesis), but your reputation will most likely not hinge on your thesis -- and writing several papers early is most likely better for it than writing a really good thesis. (Keep in mind: You will be applying for jobs before your thesis is out! 
And while those jobs will not be permanent ones, it is still very important to catch a good one, since it will influence your ability to do good research in the postdoc period. In your application, a written paper counts more than an unwritten thesis, no matter how important the latter promises to be.) Another downside of "one big project" theses is that they are more likely to fail. If 1 of the 5 papers in your "stapler thesis" is revealed to be non-novel (and that happens in most active fields, particularly when different schools use different notation and don't communicate well with each other), your degree probably won't be delayed. If the main result of your monolith thesis turns out to be non-novel, then it may be a serious problem. And that's all assuming that you do win your bet and get that one really good result. I have written a "stapler thesis" myself (actually the worst kind, without even a Chapter 1 that connects everything; admittedly I do a lot of exposition in my papers). I have seen many "stapler theses" from MIT students who went on to do great work. There is no stigma in them; you can do better, but someone can always do better. As to this: > > My advisor does not like to suggest ideas/problems to me because he said he doesn't know what would be ground-breaking either or else he would do it himself than gift it to me. > > > The honesty is refreshing, but perhaps he should not be suggesting you to fend on your own then? Graduate students are usually not very good at recognizing which directions have promise and which are stale. This is an advisor's job. If the advisor cannot do that, he should then give the student something more concrete and incremental to do. > > My advisor said it may be good to read quantum physics as he himself is reading that and sees "lots of potential" in cross-disciplinary investigation. > > > This is a very dim and faraway lighthouse to steer towards. If your advisor is not offering you anything more concrete (at the very least, some reading, and not just introductory textbooks), then you can just as well ignore it for your thesis. Everyone and their dead cat knows that quantum physics is connected to probability; this is not exactly a hot scent. Upvotes: 4 <issue_comment>username_3: First, a general statement: personally, I think that it is much easier to think about grand things if you are enjoying your work, can immerse yourself in it, and do not have to think about meta problems, such as panicking about needing a result in order to get a degree. One immediate corollary that I take from this is that I like to start my students on something that is very very likely to yield a solid (not ground breaking) result in a reasonable time span, and will get them within epsilon of being able to write a thesis (I am in the UK, where students generally have less time for their PhD than in the US), while getting them to learn techniques and problems in an area where ground breaking stuff might be possible. But as others have said, the ground breaking stuff takes time and should not be expected to come before you submit your thesis. Solid work will be enough to land you a postdoc position. In light of this and of the first paragraph, it seems like a very bad idea to me to start trying to do something ground breaking when, if it fails, you will not have the material for a thesis, and I would recommend doing it in the reverse order. 
Even when you become an established researcher, you are unlikely to be able to afford to sit there for 5 years working on a breakthrough without producing visible outputs in the meantime. It is therefore important to learn to maintain a steady output of good work while you are working on grander things. Since it is completely unpredictable how long the grander things will take or whether they will materialise at all, you should decouple that part from your immediate goal: getting a PhD, which is what the steady output of good work will grant you. One last thing that you cannot do much about, but that might put things a bit into perspective: being hands-off as a supervisor is not at all the same as outright saying that they will not share their good ideas with you. I have seen some fairly hands-on and some completely hands-off supervisors, but almost all good supervisors I have seen (in pure maths) are prepared to generously share their best ideas with their students; and if they are not, then they just don't take a student. Upvotes: 2 <issue_comment>username_4: My candid advice: 1. Keep "stamp-collecting"; do the "snowball" (packing snow together) thesis when you decide to leave. Don't let your advisor tell you what to do. It's VERY easy for tenured profs to say things like he did, but he has completely different risk/reward. Burning out some non-superstar advisees is like a who cares to these guys. But you care if you're the butterfly dying. 2. Keep your eyes open. Some time insights come when you are "grinding the pigments". Keep a notebook (or file folder full of cocktail napkins) with your ideas. Don't worry about differentiating small from big. Just collect them. If anything at least it gives you a parking lot piece of mind. And sometimes something comes of it after the napkins sit in the folder a little bit. Leave yourself freedom to "brainstorm" (open the goofy part of the brain, versus the editor part) at least as it comes to the idea list. 3. Spend some certain amount of time (~1/week, 20%) doing "bootleg" stuff. Just something totally different that might not work, etc. [Somehow find a way to write up all the bootleg fiddling around, even as another stamp. But that is way down the road. While messing around, put no pressure on yourself and play. 4. Don't dive down the physics rathole. That's your advisor's idea. It doesn't sing to you. Get something of your own. Plus, it's kinda not news that physics has math applications and the physickers all think they are better at it already. (Not true, but you know how they can be.) 5. Instead do something in petroleum geology, oil well completion, or even oil well economics. It's a huge area. With both existing status (not lock blockchain) AND dislocations/changes creating opportunities. Target rich and odds in your favor (like being the only guy in a power yoga class). Not like everyone trying to be the next Google or Amazon idea guy. Very hard physical problems (e.g. three phase flow). And they have lots of money--yes, even when they say they don't, they do. Plus it's of high interest to both supermajors and Halliburton/Schlumberger (best to collaborate with). And strategically of interest to the PRC. [Just building some option value.] Upvotes: 2
2019/10/10
1,871
8,353
<issue_start>username_0: If I use an auto-grading script and allow students to see their would-be grade based on the auto-grader, is it wrong to change their grades after the due date if I decide to change the auto-grader? I'm curious what others think because on the one hand students could go back and change their submissions as many times as they liked prior to the deadline, and many students took advantage of this to do as well as possible. Now that I've changed the script many students now have a zero (including a few who previously scored perfectly). On the other hand, the issue that these students had was a result of not following an instruction I gave out, which I overlooked while writing the original auto-grader. So is it wrong to change the grades or am I justified?<issue_comment>username_1: If any student feels disadvantaged by this, then you will have an uproar and complaints to administration. You describe a system in which they depended on the actual thing you built not some "instruction" you gave. I would guess that you are stuck with the thing for this group of students, both practically and ethically. Otherwise grading will seem chaotic and unfair to them. Your reputation might suffer both with them and with your boss. This sounds more to me like a case of releasing untested or insufficiently tested software. One way out would be to void the results of the autogravder for anyone disadvantaged and give them one additional attempt to submit. But that could also be a problem because of time constraints on other things in your course and in their other courses. Live and learn. But first, be fair. Upvotes: 6 <issue_comment>username_2: I guess whether it's strictly speaking "wrong" depends on how you communicated before (i.e., could a reasonable student interpret your grading script as part of the assignment spec, or the ground truth about how your instructions are to be understood?). *However*, pretty much independently of how you communicated, you **will** get a lot of backlash if you provided a grading script that students could use which gave them full points, and for the actual grading you change the script so that the same solution gets 0 points. Students will not be pleased, they will complain, and more likely than not they will have a case to do so. In essence, think of it that way - if you manually "pre-graded" solutions and told groups that everything is ok, would you then consider it fair to change your mind and give the same groups 0 points? If not, is it so different if the grading is done through an automated script? Let the grades stand for this year, and improve your grading toolkit for the next iteration. Upvotes: 3 <issue_comment>username_3: You seem to have put yourself somewhat between a rock and a hard place. The rock is when you keep the auto grader at its previous value. This approach disrespects your intent to have performance based on students doing exactly what you ask. The hard place is when you change the auto grader to the newer value. This approach portrays that your grading metrics are subject to last minute changes *even when those changes are likely fully justified*. YOU made a mistake. Admit it and don't punish students for it. Do not change your auto-grader AFTER the grades have already been posted. This approach will leave a sour taste for everyone all around. Consider, do you really want to give the impression that students have to be in competition with each other for a better grade when you make a mistake? 
When you really feel that you need to rebalance the grades in some way, offer an opportunity for extra credit on the assignment. But make that opportunity open only to students who did exactly what you asked in the first place. Upvotes: 2 <issue_comment>username_4: > > 1. Now that I've changed the script many students now have a zero (including a few who previously scored perfectly). > 2. ... because on the one hand students could go back and change their submissions as many times as they liked prior to the deadline > > > I don't see how (2) matters here. Here's what's effectively happening here: If I was in your class, I submitted an assignment, and you told me that it's perfect, I obviously wouldn't revise it later. You can't at the last moment decide that you didn't do the right marking and give me a zero. *That's not how it works.* If you'd told me beforehand that the work needed to be revised, I would have done so. **But you can't say that you're giving me a zero *after* telling me it's a perfect score, and *after* the deadline, and not allow me to revise it.** **If you go through with changes to the auto-grader, your students will lose any respect they have for you. As will your colleagues, probably, since they'll think that you tried to take the easy way out by writing an auto-grader and then messed it all up.** Upvotes: 6 <issue_comment>username_5: My suggestion is to: * Give them a grade that matches the initial auto-grader. * Also tell them that there's something in the requirements that the auto-grader failed to check for. * As part of the next assignment, give them a task that builds upon the current one and can only be accomplished if they also meet that specific requirement you initially didn't check for. * Make sure that your updated auto-grader checks for every requirement you care about in their submissions, including the one you previously forgot to include and the ones that pertain to the new task. Students who didn't follow that specific instruction will now have to update their solutions accordingly, before implementing the new task. This approach doesn't punish anyone for failing to meet a requirement that your auto-grader didn't enforce and offers everyone the same opportunity to learn and to prove that they master the topic of the assignment. Upvotes: 3 <issue_comment>username_1: This is more of an extended comment on the effect of auto graders and especially problematic ones. First an example that is a bit extreme, just to make the main point. Suppose I give a multiple choice test that is auto graded but also permits the student to take the exam multiple times and only submit one of them. What does that emphasize? In fact, I doubt that it promotes learning at all, and emphasizes that the teacher wants *answers* even if at the expense of *learning*. The student is encouraged to just guess and re guess until they "get it right". The problem is not just the obvious one, but the fact that such a system that seems to require work from the student provides no actually useful feedback to them on whether they have learned the proper lesson. Student work should always come with effective feedback, especially when the student has made errors. The feedback should attempt to give the student an idea about where their thinking when astray so that they can do better in the future. I can't claim that the situation of the OP has these characteristics, so haven't made this a proper "answer" here. 
But it is something to think about for anyone wanting to incorporate such a thing into an educational system. Think beyond the obvious about what unintended consequences there might be. Do this prior to implementation, of course. Education requires reinforcement and feedback. Make sure that your system provides both. And the feedback needs to be effective, not just "yes" or "no". Upvotes: 0 <issue_comment>username_6: If a specific test case was marked as passing or failing in the sample validation, it should still be marked the same in the final way that you grade. Ideally you would also have text explaining that the sample validation only tests SOME test cases and the final grade will be based on a much larger set of test cases or criteria, but I suspect it's a moot point in this specific case because it sounds like you changed the result on some of the existing tests. (However, you should include that discussion for future assignments.) I'm using the term sample validation because your "auto-grader" that you give to your students should never be the full grading criteria. If nothing else, a student could turn any assignment into a map that takes the input from the auto-grader and returns the correct output by "magic", which would be acceptable if that was literally all of your grading criteria. Upvotes: 0
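To illustrate the "sample validation" idea from the last answer above: the script handed to students runs only a published subset of checks, while grading runs a strictly larger set, so anything reported as passing before the deadline still passes at grading time. The sketch below is purely illustrative (Python); the `submission` module, its `solve` function and the test values are made up:

```python
# sample_check.py -- the script students can run (published subset of checks only)
import importlib

PUBLIC_TESTS = [
    (2, 4),      # (input, expected output) -- illustrative values only
    (10, 100),
]

def run(tests, module_name="submission"):
    # Import the student's submitted module and count passing checks.
    solve = importlib.import_module(module_name).solve
    passed = sum(1 for x, expected in tests if solve(x) == expected)
    return passed, len(tests)

if __name__ == "__main__":
    passed, total = run(PUBLIC_TESTS)
    print(f"Sample checks passed: {passed}/{total}")
    print("Final grading uses these checks PLUS hidden tests and the written instructions.")
```

The instructor-side grader would call the same `run` helper with `PUBLIC_TESTS + HIDDEN_TESTS`, never flipping the result of a check the students already saw, which is the fairness point the answers above insist on.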
2019/10/10
1,821
8,055
<issue_start>username_0: I am pursuing a master's part time in my home country. My job is sending me to Houston for 2020, so I will have to pause my studies. I wouldn't like to stop studying next year, so I wanted to know if I could take one or two courses at a Houston university, given I comply with the academic requirements. Hopefully, I can get credits back home, but that's not a requisite. Still, I would like to know if I would receive official proof of having passed the courses.
2019/10/11
820
2,914
<issue_start>username_0: How can I find a more recent review article in a topic I am interested in? I have found a review article on biosynthesis of silver nanoparticles published in 2015. (<https://doi.org/10.1016/j.indcrop.2015.03.015>) How do I go about finding more recent work done on the same?<issue_comment>username_1: In my field, 2015 is considered quite recent and it would be unreasonable to expect there to be much in the way of "more recent" work. However this may be different in your field. One place to start would be to use Google Scholar to search the keywords associated with the topic you are interested in, and to restrict the searches to publication after 2015. A more labour intensive but, perhaps, more targeted way forward would be to specifically seek out the work of influential scholars cited in your 2015 article and see if they have published more on the topic since that date. Finding sources of information is not easy. Expect to spend some time on this. Upvotes: 2 <issue_comment>username_2: In addition to Google Scholar and searching for work by influential scholars, as suggested by username_1's answer, it is often helpful to look at the works that have cited the reviews you have found already. Frequently, new reviews will cite previous reviews of the topic, especially in fast-moving fields, so as not to waste space describing the same information. The review article you linked has 100+ citations, and maybe one of those is by a newer review. Also, if you can find a landmark or high-impact article in the field, it is almost certainly cited by all good reviews, so looking at the "cited by" list of such an article is often helpful too. Upvotes: 4 [selected_answer]<issue_comment>username_3: I recommend that you use Google Scholar to conduct a [forward citation search](https://libguides.fau.edu/c.php?g=325509&p=2182112). The basic logic of this strategy is that any decent literature review conducted since 2015 should surely cite this 2015 literature review. This logic is surpisingly sound under a system of peer review because even if the authors of the new review somehow missed the 2015 review, competent peer reviewers should alert them to its existence and ask them to include it. So, based on that logic, here's how you would do it: 1. Look up [the article in Google Scholar](https://scholar.google.com/scholar?q=Plant%20extract%20synthesized%20silver%20nanoparticles%3A%20An%20ongoing%20source%20of%20novel%20biocompatible%20materials&hl=en&btnG=Search&as_sdt=1%2C5&as_sdtp=on). 2. Click the "Cited by [numer of citations]" link: [![enter image description here](https://i.stack.imgur.com/8sHPc.png)](https://i.stack.imgur.com/8sHPc.png) 3. Browse through the 193 results (as of October 13, 2019) to see if there are any new literature reviews. That strategy should exhaustively identify any new literature reviews from now and the future. Upvotes: 2
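If you would rather script the forward-citation step than click through Google Scholar, one option is the OpenCitations COCI API, which returns the DOIs of works that cite a given DOI. The sketch below is only illustrative and assumes the public endpoint still has the shape documented on the OpenCitations site, so check their documentation before relying on it:

```python
# forward_citations.py -- list DOIs of papers that cite a given DOI (OpenCitations COCI)
import requests

def citing_dois(doi):
    # Query the COCI "citations" endpoint; each record should contain a "citing" DOI.
    url = f"https://opencitations.net/index/coci/api/v1/citations/{doi}"
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return [record["citing"] for record in response.json()]

if __name__ == "__main__":
    # DOI of the 2015 review mentioned in the question
    for d in citing_dois("10.1016/j.indcrop.2015.03.015"):
        print(d)
```

You still have to skim the citing titles and abstracts to spot which of them are themselves reviews, but this automates the "cited by" step and is easy to rerun later.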
2019/10/11
1,335
5,374
<issue_start>username_0: I am a Materials Science and Engineering major. What I am having trouble with is how I am supposed to write long answers. Let's say someone asks, "What is a Lomer-Cottrell barrier?" or maybe, "What is solid solution strengthening?" I do write answers to these, but those answers never fetch me more than 50% of the marks. How do I get through? Please help.<issue_comment>username_1: The only person who absolutely knows what they are looking for in your answers is your instructor. You need to talk to them about this. If they have office hours, visit them during office hours. If not, email to politely schedule a meeting and bring this question to them. Upvotes: 4 <issue_comment>username_2: As @username_1 said, go ask the teacher what s/he wants in an answer and the format of it. The length, how many references (you are adding references, right? in APA? from veritable sources?). It also helps if you revise how you write. Given your field, some tutorials/books about technical writing could be useful. In the first paragraph you give the concrete answer and then you explain the details in the next paragraphs. At most 5 paragraphs and 0.5–1 page should be enough. For docs on technical writing: * <https://stackoverflow.com/help/how-to-ask> * <https://codeblog.jonskeet.uk/2010/08/29/writing-the-perfect-question/> * <https://stackoverflow.com/help/how-to-answer> * <https://meta.stackexchange.com/questions/7656/how-do-i-write-a-good-answer-to-a-question> * <https://msu.edu/course/be/485/bewritingguideV2.0.pdf> * <https://medium.com/technical-writing-is-easy/style-guides-for-technical-writers-72b011f84c4b> * <http://site.iugaza.edu.ps/mahir/files/2017/01/Handbook-of-Technical-Writing-9th-Edition.pdf> * <https://developers.google.com/season-of-docs/docs/tech-writer-guide> Upvotes: 0 <issue_comment>username_3: Your question, and a comment you posted, suggest a misguided belief that “the art of writing long answers” in the context of an exam or homework assignment is somehow different than the general art of writing (and the even more general art of communicating your thoughts to others clearly and efficiently). Sorry, but that’s not true, and you’re looking for help in the wrong places. I, and everyone posting here, have never taken a class or read a manual on “writing long answers”. To write long answers well, you need to write well, and that’s something that a stack exchange answer can’t teach you. There are many books about it, but as people on the sister site writing.stackexchange like to say, the best way to develop good writing skills is to write, write, and write some more. Over time you will see your skills improve with practice. Second, my experience with science and engineering is that the focus in these areas is usually on answers that are conceptually correct and show a good level of understanding of the material (even if they are not especially well-written or phrased) rather than on the quality of the writing itself. So you should also consider the possibility that if you’re losing 50% of the points or more on your answers, that may be because you don’t actually understand the material as well as you think you do (and as well as you should) - this may not be a writing issue per se. Even the best writer in the world will not be able to write a good answer to a question on a topic they don’t understand well. 
As others have said, it would be a good idea to review the feedback you get from your instructors and if necessary ask for additional feedback, to try to develop an understanding of where exactly the problem lies. Good luck! Upvotes: 3 [selected_answer]<issue_comment>username_4: Looking at your questions on the chemistry forum (from your profile), I get the impression your issues are much more from knowing the material than from exposition (writing ability). Although there's definitely an English language issue as well (if you are taking tests in that language). My advice is to work more problems and get help with teachers (reviewing your drill problems). Basically what you are doing on the forums here, but with more intensity. Do homework problems until you puke and then wipe off the puke and do some more. Look into getting extra problem sets (with solutions or at least answers, so that you can check your work). Get help for questions you don't understand--good that you are actively using the chem stack exchange. But you may want to get in person help as well. Also, perhaps you need to work more easy problems if you are struggling with all the hard problems you're showing on the site here. (Like working on my double back flips to get them perfect, if I'm having a problem with triples.) In particular, many of your questions revolve around the trickier types of stoichiometry problems, especially in an applied (therefore complicated word problem) setting. The problems you're mostly showing seem to have some complication around them, but at their core they use freshman chemistry insights. I have seen EXTREMELY poor English speaker/writers, still able to do well in classes like material science or inorganic chemestry. Even to ace the material. So, I really suspect your issue is more one of the subject matter content than of writing ability. Yeah, you need work on that too, but it's not the "limiting reagent" of the getting you a good grade reaction. Upvotes: 0
2019/10/11
666
3,076
<issue_start>username_0: Some questions before said that in US/Canada, we should apply first then select the professor and it's different from the system in UK or EU. So, what about Australia or New Zealand?<issue_comment>username_1: From my experience in Australia, the usual scenario is that a student makes contact with a particular academic at a university. The academic agrees to supervise the student's PhD. The applicant then applies through the official university process. The university process will involve additional hurdles (e.g., grades, English language requirements, past experience writing research theses, etc.). But in general, the applicant starts by building a relationship with a potential supervisor. That said, some applicants get in contact with the university with an outline of their research interests, and then the university can facilitate finding a supervisor. And of course, I'm sure there is some variability in the process across universities. So of course, it makes sense to check out the university websites and get in touch with the HDR office of any relevant universities. Upvotes: 2 <issue_comment>username_2: The usual practice in New Zealand is that PhD admissions are **formally** carried out at the university level and not by individual staff or even individual departments. It is common to be in contact with prospective supervisors in advance of applying, and *sometimes* required, but they usually cannot actually accept a student entirely of their own choice. It is, however, required that there be a willing supervisor before a student is accepted. A student can begin the process without a supervisor arranged, and the relevant department might find and assign one. Whether that happens before or after the formal application varies, but the effect is generally that the student will be interviewed by one or more of the prospective supervisors before an admission decision is made: * At my institution, the application form asks for the names of any staff you've been in contact with, and if you leave that blank the departmental committee enquires with staff who seem relevant about whether they'd like to follow up. * At others, you enquire informally with the university and they collate information to send on to the department, and only afterwards do you apply. In either case, forming an existing connection with a willing prospective supervisor before applying means that's likely to be your supervisor if you're accepted. Funding decisions are usually made by layers of committee, in parallel or separately to the acceptance process. In the (less common) case of grant-funded PhD studentships that can be sidestepped and the supervisor has more sway. An accepted student will usually be assigned a primary supervisor from the start, but sometimes there may be a brief initial period where that is uncertain. It is possible to change supervisor after starting, but it's not usual that there's an unsupervised period after arrival where you're meant to find a supervisor like there is in some places. Upvotes: 1
2019/10/11
1,188
4,944
<issue_start>username_0: I am a PhD student working in theoretical computer science (algebra). I have been working on my PhD project for the last five years, and I am very much interested in my work. During my PhD there were no collaborators other than my research supervisor. I struggled a lot during my PhD, as finding problems was very tough, and so were solving them and then writing them up. I did not even find the coursework I took helpful. I was also not allowed to work outside my university; I don't know why. Now, at the end of my PhD, I have come to appreciate the value of collaboration. I have studied most things during my PhD on my own. I came up with results and wrote them up with the help of my research supervisor. I have worked very hard during my PhD; despite that, I have not published a single paper in a top-tier venue. All of my research papers are in low-tier venues. I will be graduating soon. According to my research supervisor, a PhD is meant to train a student, not to publish something top tier. Indeed, all of his PhD students publish in low-tier venues. I am worried about a post-doc and a job after my PhD, as many of the students in my field are publishing in top-tier venues. I can publish research papers on my own, but not in top-tier venues. **Question:** Is it okay to publish your entire PhD work in low-tier venues?<issue_comment>username_1: First, I'm not sure where to draw the line between top and low tier journals. You want to publish in an appropriate journal at the higher rather than the lower level. But it is more important to realize that your career doesn't begin and end with your doctoral research. It is normally only the first of your published works, not the best, and certainly not the totality. Some people are lucky in their studies, having been given a hard problem and having made significant progress on it. Others have done more modest work. But, there is actually more valid doctoral research than the top tier journals could possibly publish. So, it isn't something to worry about. But seek out interesting problems and work hard to solve them. Get as much published as you can and in the best journals you can manage. Your career will grow. It doesn't start out fully formed. Upvotes: 2 <issue_comment>username_2: Back in the old days, it was quite common for mathematics and theoretical computer science PhD students to graduate with *zero* publications. Their research contributions, while valid, were not significant enough to be published. This isn't so true now, principally because there are many more low-tier conferences and journals to publish low-significance results, so the bar of significance to be published has gone down. As far as the goals of a PhD are concerned, as long as you have done enough research, it's a valid PhD. However, the job market is not so friendly. It's useful here to bring out the sports analogy. Getting a permanent academic job is like getting to play on a major league baseball team (or a first division football team). Getting a PhD is like becoming competent in a minor league (or lower division) team. Not everyone who starts out playing professional baseball (or football) makes it to the top, and you need talent and luck as well as hard work. Most people who finish a PhD don't have the right combination of talent and luck to land a permanent academic job, just because there are fewer jobs than people with PhDs. 
In the current job market, in order to land a permanent academic job where research is a significant portion of your duties, you need research results that can be published in a higher-tier venue. If you don't get them as a grad student, you might get them as a postdoc, and it even happens that people get them doing research nights and weekends while working at a non-academic job, or a teaching-only academic job. But, just as the sports analogy suggests, while there are things you can do that will help your chances of making it to the major leagues, there is nothing you can do that will guarantee (or even make it highly probable) you make it to the majors. Upvotes: 4 [selected_answer]<issue_comment>username_3: > > I am worried about post-doc and Job after my Ph.D > > > Question : Is it okay to publish your entire work of PhD in low tier? > > > Yes, it is okay. As you are nearing completion, you need publications to get hired. If the alternative to publishing in a lower tier is not publishing at all, then you are far better off getting something in print in a good peer-reviewed journal with a respectable impact factor, even if it is not your first choice. The harder question is *how much* of your doctoral work needs to get published at this stage. You can publish some of it while saving some to rework, polish, and submit later, with better hope for an outstanding publication. Ask your research supervisor for their advice on how much is enough. That said, I strongly agree with @username_1's advice, which I up-voted. Upvotes: 0
2019/10/11
380
1,708
<issue_start>username_0: I have a paper that was accepted at a conference and will be published in a conference special issue of a journal. The acceptance email stated that we will be contacted by the journal editorial office for further actions regarding the publication. At the time of acceptance the status was “editorial assessment”, but this changed to “under review” again about a week later. Now the status has changed to “reviews received”. Why could this happen? It was already peer reviewed before. I am confused and anxious that it will get rejected. Any thoughts?<issue_comment>username_1: It seems to me that the conference committee accepted your paper to be presented. The related proceedings paper is then subjected to its own review according to the journal's standards. This is what I get from your question - in which it is not that clear who has accepted what - and it is a common workflow for many conferences when the proceedings are to appear in a journal, albeit in a special issue of it. Alternatively, the editor at the journal could have decided to consult one more referee. This can happen, of course. But in this situation you shouldn't have received any acceptance letter from the journal yet, but only the one coming from the conference organisers, regarding your actual presentation at the conference only. Hope it helps and good luck. Upvotes: 3 [selected_answer]<issue_comment>username_2: This probably hasn't happened in your case - but technical glitches do occur. It is possible for an acceptance email to be issued by mistake: human error or a software bug/glitch. The main (and perhaps only) thing to do IMHO is to contact the program committee chairs or contact person and ask. Upvotes: 1
2019/10/11
1,320
5,455
<issue_start>username_0: I am a physics master's student hoping to pursue physics research by doing a PhD, post-doc, etc., following the standard path. To put it straight, how far can sheer hard work take one in academia before it becomes impractical and affects one's personal life (in terms of research quality and quantity)? To anyone who is interested in why I ask this, please read further: Since high school, I have been working super hard to succeed and have managed to reach my goals and actually do things which I didn't think were possible (personally) a few years back. I am in one of the best places in my country to study physics, have (what I consider) good grades and some great research experience through which I even got to travel to a different country. However, this has been mainly due to my hard work and my professors/guides recognising that hard work and doing me favours by giving great recommendations, calling up other professors for me to work with, etc. I have almost consistently put in at least 10 hours per day (including weekends) since high school, and there is literally no way I would have achieved any of this without putting in this much time. Until a few years back I thought this was the way, but now I am starting to reconsider. I am starting to believe this is impractical in the long run in research, where I put in over 12 hours of hard work with moderate intelligence. I have seen some of my classmates who are obscenely smart, and the only way I can even come close to their level (in terms of grades) is by sitting in the library the whole day. I do not know how far this hard work can take me and, even if it does take me far, whether it is practical. But if I stop putting in this much work my grades will definitely go down and I would be letting down my professors. And to do well, I feel like I have to put in more effort the more I go forward.<issue_comment>username_1: I can't say whether 10-12 hours per day is right for you, but caution you to consider your health. If it suffers, then everything will suffer. But "intelligence" alone is overrated. *The path to Intelligence, actually, runs through Hard Work.* I once got the results of an IQ test (I hope they don't do this any more). The printed results said that I was a bit above average, but not outstanding. They recommended that I set my goals at community college rather than university as I'd be more likely to succeed with more limited goals. About fifteen years later I earned my PhD in mathematics. So much for prediction. More about me. When I was young I was pretty smart but was disengaged from school. My life was elsewhere. I saw no point whatever in school. It was in the second or third year of secondary school that I had what I considered my first positive educational experience. But then, I started to work. And I worked hard. But, you need to use the hard work to seek insight into what you are studying. The work, itself, isn't enough. But the insight is very unlikely to come to most people without the hard work. Again, though, don't neglect the wider picture. Get enough sleep. Get enough exercise. Seek feedback on what you do from those who know more. Find a mentor (or three). Ask a lot of questions. Take a lot of notes. And don't try to work past the point of frustration. The work will be very inefficient. Take a break for a bit. Then come back to it. Upvotes: 7 [selected_answer]<issue_comment>username_2: > > Is success due to hard work sustainable in academic research? > > > Yes. In fact, it's the only kind of success that is sustainable for any difficult activity. 
It's not necessary to work 10 hours per day. Most people who think they are working very long hours are not working efficiently. Figure out the hours that are most efficient and stick to that. Upvotes: 5 <issue_comment>username_3: Postdoc in math here (I used to be a grad student). I just try to put in 4 hours per day while leaving some time off. That is already a lot when balancing other obligations like self-care. At some point you just have to adopt a sort of fatalistic attitude. Connections matter a lot and working on topics that appeal to potential employers matters a lot. I think when it comes to employment it is a deal-breaker if you do fantastic groundbreaking work that doesn't appeal to the faculty. In that case it doesn't matter how hard you work. Upvotes: 3 <issue_comment>username_4: I am of the opinion that there are two “personality traits” which are more useful for researchers than “intelligence” (whatever that means to you) or “hard work”. The first is perseverance; the second is decisiveness. One must be willing to endure the ups and downs of research and the PhD life – seeing boring projects to the end, doing the nitty-gritty coursework you may find tedious – but one must also be willing to act quickly when a project is worth abandoning or removing oneself from; and must fully commit to any task which will take a substantial amount of time. **Thus, hard work is mostly useful if applied in a thoughtful, committed way**. Simply spending large amounts of time “working” could have no value – I could spend an endless amount of time perfecting things which only incrementally improve my work. I'm a junior faculty member at an R1 and feel my most productive colleagues work much more thoughtfully, and probably less time overall (or seem to, at least), than those (including myself) who are less productive despite working longer hours. Upvotes: 3
2019/10/11
708
2,619
<issue_start>username_0: I am interested in generating citation metrics for [The journal of the British Association for the Study of Religions](http://jbasr.com/basr/diskus/index.html) using Google Scholar or similar tools. (The journal is not indexed in almost any selective database, compounding my woes.) Is there a way of doing this?<issue_comment>username_1: To gain citations for the articles published within JBASR, first they must get read. Have both the authors and the journal board do their PR thing. Have them use all the social media, Twitter, LinkedIn and such; use all the sites like Google Scholar, ResearchGate, Academia.edu, Mendeley etc. - have all the authors make profiles there and have them make their papers available. Ask authors to refer to articles within JBASR as well as in priority journals. Use the editorial in every issue to properly cite certain articles within that issue and previous issues. Think about what it is exactly that you mean to accomplish. Upvotes: -1 <issue_comment>username_2: Not a full answer, but I just want to point out that you can to some extent retrieve citations from, for instance, Web of Science (WoS) to journals that are not covered by the citation index per se. For instance, by doing a "Cited reference search". There you will find citations to documents, based on the citation information given in the reference lists of WoS publications, irrespective of whether the "target" publications (the ones receiving the citations) are covered by WoS. For instance, when searching for "BRIT ASS STUDY RELIG" (a generalization of "The journal of the British Association for the Study of Religions", which seems to be found in WoS) as "Cited work" I find ten citations in total to the journal. This is probably not completely what you are after, but might be useful to know. Upvotes: 1 <issue_comment>username_3: Use *Google Scholar* and search for `source:"JOURNAL NAME"`. --- In your case, however, [source:"Journal of the British Association for the Study of Religions"](https://scholar.google.com/scholar?q=source%3A%22Journal%20of%20the%20British%20Association%20for%20the%20Study%20of%20Religions%22) yields **0** results. Using the abbreviation [source:"BASR"](https://scholar.google.com/scholar?q=source%3A%22BASR%22) does lead to more useful results (**37** in total), albeit you need to disambiguate the Journal of the BASR (DISKUS) from other publication outlets of the BASR. The most fruitful query for your purposes seems to be [source:"DISKUS"](https://scholar.google.com/scholar?q=source%3A%22DISKUS%22); it yields **256** results. Upvotes: 0
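A small practical addendum to username_3's answer: the `source:` queries can be generated programmatically. Below is a minimal R sketch; the exact URL encoding Google Scholar expects is an assumption, so check the generated link in a browser. Note that the hit counts still have to be read off the result page by hand, since Google Scholar offers no official API.

```r
# Minimal sketch: build a Google Scholar "source:" query URL for a journal name.
# The encoding details are an assumption; verify the generated URL in a browser.
scholar_source_url <- function(journal) {
  query <- sprintf('source:"%s"', journal)
  paste0("https://scholar.google.com/scholar?q=",
         utils::URLencode(query, reserved = TRUE))
}

scholar_source_url("DISKUS")
# "https://scholar.google.com/scholar?q=source%3A%22DISKUS%22"
```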
2019/10/11
445
1,974
<issue_start>username_0: Zenodo and ResearchGate can assign a DOI to an article. After publishing an article on Zenodo or ResearchGate with a DOI, if the article is good, will the article still be accepted for publication by peer-reviewed journals? Assume that the journal in question allows green open access (i.e., posting of a preprint on a public site).<issue_comment>username_1: To "publish" something really just means "make it available to others". So, there are many ways to publish something: You can strive to get it accepted in the most prestigious journals of your field, or you can just put it on your website -- strictly speaking, both qualify as "publishing". Putting an article on Zenodo, ResearchGate, or arXiv is closer to the latter than the former; the fact that some of them can give you a DOI really just means that you make it easier for others to find and reference what you uploaded. From a practical perspective, the difference between all of your options is how much vetting and endorsement the locations where you publish provide for your work. Academic journals *review* your article, and this provides a level of endorsement that lets others trust that what you published is useful and correct. On the other hand, putting an article on a web site or ResearchGate involves no vetting and, consequently, does not bestow on you the *prestige* that comes with the vetting. If you wanted to use your "publications" as part of a job application, it is this *prestige* you want, and consequently, putting your work on a website of your own will not be enough. Upvotes: 2 <issue_comment>username_2: You need to check with the individual journal(s) you are interested in eventually submitting to. In my experience, most have posted policies somewhere in their "instructions for authors" sections. Most likely the presence of a DOI is completely irrelevant, but there isn't a general rule on what publishers will consider prior publication. Upvotes: 0
2019/10/12
558
2,350
<issue_start>username_0: Is it possible that an associate professor in one university be designated as assistant professor after a year in the same university? What could be the consequences of it?<issue_comment>username_1: Caveat: This is a view from the US. In other countries with different traditions things may differ. I think that would be rare. It might be possible (unlikely) if the person changed fields and departments and needed to start over. I've never heard any instance of it. An associate professor in the US has tenure. An assistant does not. In order for such a thing to work the person would have to give up tenure, which would be very unlikely. There can hardly be any incentive to do so, other than a change of field/department. If it were forced, the consequences would likely be a lawsuit as it is a civil, not criminal matter. If it were by mutual consent then there are no consequences. But switching universities is a different matter. It is possible to give up tenure and the rank of associate professor at one place and become an untenured assistant professor at another. But the incentives would need to be pretty strong for a person to want to do this. More commonly a switch would retain the rank but with a probationary period built in, normally a small number of years, before tenure is granted. --- Additional caveat. There are a few places in the US that do not have a system of tenure at all. People normally serve on multi year contracts that don't automatically renew. In such a place pretty much anything is possible and there are few consequences other than higher turnover. See, for example: <https://eric.ed.gov/?id=ED424814> Upvotes: 0 <issue_comment>username_2: Within *one* university this would be a demotion so it is possible but rare, and could happen if the appointment and ensuing titles were given by courtesy, i.e. the appointment is not a regular full-time appointment. Rank associated with this courtesy appointment might then be tied to some specific commitments. I’ve never seen this happen for a regular position: I would guess that such demotion of a regular faculty member would lead to dismissal or resignation. For courtesy appointments my experience is that these are either renewed or not, but I know of one place where such a “demotion” is technically possible. Upvotes: 3
2019/10/12
1,201
4,812
<issue_start>username_0: Are there any drawbacks to dropping out of school and completing a GED instead of finishing and getting a diploma? I am in the midwest United States and am currently unemployed. I would eventually like to have a job focused on computing and was wondering if getting a GED would let me start college early.<issue_comment>username_1: The first thing you should do is to check entrance requirements, and ideally talk with a counselor, at one or more colleges in your area. Some may prefer high school graduation to GED. Graduation implies an ability to follow-through and finish things which is important for college success. Upvotes: 0 <issue_comment>username_2: In general, the high school diploma is viewed better for getting jobs than the GED. Also by colleges. GED is (probably properly) viewed as a bit of a degraded HS diploma. That said, a GED is still far better than nothing. And of course, you can go on to many great things in life without a HS diploma. Know a guy retiring as an O-6 in the Navy, who ran away from home, entered the fleet, ended up at USNA (i.e. college) and then had a professional job. It's kind of unclear what sort of scenario you are describing. If you're in high school, get it done in school. No advantage to dropping out for a GED, if the plan is eventual college and a professional job. Different scenario if you want to work a trade, but even then I would still urge to get the HS diploma. Our society is overvaluing college...people here will disagree because they are employees of "Big College", but there's no reason for baristas to have college degrees. Of course, if you're a couple years out of school (to draw a line, let's say older than 19), just get the GED, easiest fastest and move on. As far as the computer career counseling (especially your comment on being unemployed), I would urge you to get a job, any job (assuming you're over 19). Even if it's not in computers, just get something, so you're working. It's good to be productive and will help your self esteem and respect from others. Also, there are still many technician types of jobs/training in the broadly defined IT industry that are available sans college, sometimes with some training. If you go that route, do something where the training is at night (so you can work in the day). Maybe you eventually do some night school college work also, but don't make that an immediate objective. Also look carefully over the different training options. There are some that are snaky (expensive, not very useful) and others that are high bang for the buck and high bang for the time.You need to research them. One practical note, if you're just 16-19, having family disputes and considering dropping out and moving on because of that, don't. Suck it up and get the HS diploma. It's pretty common for there to be friction (especially young man with old bull) at that stage in life. The military used to harvest huge amounts of talented people from this dynamic who entered sans HS diploma. It is still possible to some extent but even there, getting more restrictive. Upvotes: 4 [selected_answer]<issue_comment>username_3: > > I am in the midwest United States ... and was wondering if getting a GED would let me start college early. 
> > > In my experience, it's common for high schools in the US to allow some sort of mechanism for earning college credits during high school (often through a community college, but also potentially through a major university; many of my friends did the former, I did the latter on a limited basis). If your goal is to get a jump start on college, this would be a much better mechanism than dropping out and getting a GED. Names for these programs include "concurrent enrollment" and "post-secondary enrollment options." Talk to a counselor at your school. Upvotes: 2 <issue_comment>username_4: If your goal is to get into college early, you do not need a GED or a high school diploma. There are quite a few colleges that allow students to enrol early. Simon's Rock College exclusively enrols students who are starting college early. Most of their students do not get a high school diploma or GED, but many of them have got graduate degrees. This model is increasingly being copied by other colleges. <https://en.wikipedia.org/wiki/Early_college_high_school> If your goal is to get a job involving computing, you do not necessarily need any kind of degree or diploma; a record of successful projects may be sufficient. This is especially true if you want to start your own business. However, a traditional bachelor's degree in computer science will look good to many employers in that industry. A GED or high school diploma will count for little in a software company where most of the employees have bachelor's degrees. Upvotes: 0
2019/10/12
1,211
4,853
<issue_start>username_0: I am applying for a postdoctoral fellowship at an American University and I have to submit the research proposal. Besides the usual description of my own research, it is written that I must describe > > resources necessary to conduct the project > and an informed evaluation of resource availability. > > > What does this exactly mean?
2019/10/12
1,276
5,641
<issue_start>username_0: I wrote my statement of interests where one paragraph is dedicated to discuss my past works. But I wonder if that is necessary because it feels like repeating abstracts from the papers. And probably it will not make much sense to those who do not specialize in those topics. Should I just refer to the paper and make no formal introduction of what I did? Since there is a word limit, it is really hard for me to explain all those ideas in just a few words without assuming any technical prerequisites, and I feel it is not really stating my interests and my visions of my doctoral studies anyways, because I'm open to other related topics as well and don't want people to think "oh, this is a XXX-guy and our faculties do not work on XXX so he is a mismatch".<issue_comment>username_1: > > I wrote my statement of interests where one paragraph is dedicated to discuss my past works. But I wonder if that is necessary because it feels like repeating abstracts from the papers. > > > If your past work relates to the research interests you are raising in your statement then it would not be out of place to mention these works. Indeed, in many cases this will be useful context, since it sets out what you have already done, and frames your interests in the context of existing work. Mentioning these works would not necessarily entail repeating information in the abstracts to those works. Bear in mind that even if you do not mention your past papers, they will be present in your CV, so the academics reviewing your application will know that you have written previous papers. Ultimately it is up to you to decide if these papers add useful context or information about your research interests. > > ...probably it will not make much sense to those who do not specialize in those topics. ... > > > Then it is badly written and you need to write it better. Your statement should be for a general audience of academics in your broad field, without necessarily having specialist knowledge in your particular topic. These people are experienced researchers with PhDs and probably decades of experience in your degree area; if what you write does not make sense to them then you have written it badly, and you need to smack yourself on the wrists and try again. Learning to write about technical research in a way that does not get bogged down in minutiae is an acquired skill, but it is one you need to practice for entry into a PhD program. It should not be impossible to explain your past research in an way that makes sense to experienced researchers, even if your past research falls in an area that is not their particular specialty. Keep trying to write out a good explanation, and explain the essence of the work while shedding any unnecessary technical information, until you have something that makes sense. > > Should I just refer to the paper and make no formal introduction of what I did? > > > There would be no point to that, so no, you should not do that. If you mention your past papers at all, it should be to provide context for your research interests and the present context of that research. > > ...I feel it is not really stating my interests and my visions of my doctoral studies anyways, because I'm open to other related topics as well... > > > That will take about one sentence in your application to explain. 
Upvotes: 1 <issue_comment>username_2: I don't think username_1's answer says this strongly enough: **Your statement *must* include a description of your past work, if you have any.** The primary feature that PhD admissions committees looking for in applicants is **evidence of potential for high-quality independent research**. The strongest possible evidence of your potential to do high-quality independent research is the fact that you have actually *published* high-quality independent research. It doesn't matter whether your past research is on the same topics that you want to pursue in the future. It's easy to communicate a change in interests in your statement with a single sentence: "More recently, however, my interests have shifted to quantum tropical M-theory." (Obviously this sentence should be followed by a description of your *specific* interests in quantum tropical M-theory, written with enough maturity to convince experts in quantum tropical M-theory that you have the necessary knowledge and skills to be successful in quantum tropical M-theory, but that's what you were going to write anyway.) Crucially, if your previous papers have coauthors, your statement (and your recommendation letters!) should focus on your specific technical contributions to those papers. Remember that you are not trying to sell the work as good mathematics; you are trying to sell *yourself* as a good *mathematician*. In particular, you are not trying to sell your past work to experts in your subsubfield of (past) interest, so you should *not* merely quote the abstracts. You are trying to sell yourself to a general audience of trained mathematicians, *some* of whom are experts in your subsubfield of (past) interest. So your description needs to be understandable to that broader audience, but still interesting enough to experts that they will want to read your papers for the actual technical details. Obviously you should provide direct links to arXiv versions of your papers, in both your CV and your statement, so that interested committee members can read them without having to search. [I'm an American theoretical computer scientist, which means I'm *like* a mathematician, except that I write less code.] Upvotes: 3 [selected_answer]
2019/10/13
252
988
<issue_start>username_0: Public university salary information is usually available online. What about private universities? Can average salary by field and rank be found at least? Context: US/North America, mathematics<issue_comment>username_1: It's far from perfect, but you could try looking at AAUP's [Faculty Compensation Survey](https://www.insidehighered.com/aaup-compensation-survey). This won't allow you to search by field, but it will show you average salaries by rank for each institution. Upvotes: 2 <issue_comment>username_2: For the US and mathematics in particular, [the American Mathematical Society conducts annual salary surveys](http://www.ams.org/profession/data/annual-survey/salaries), broken down by various categories: public/private, highest degree offered in the department, rank, etc. For example, in 2017-2018, at "large" private PhD-granting mathematics departments, the mean salary for new hire assistant professors was $94,675. Upvotes: 3 [selected_answer]
2019/10/13
307
1,318
<issue_start>username_0: My referee is a pioneer of my university in my home country. I am applying for a PhD in Australia. My referee has signed the letter of recommendation in hardcopy form, but he hasn't put a stamp of his name on it, saying that a letterhead needs no stamp. I have mentioned on the letterhead that he is a Professor and Trustee. I have asked the electrical department to put a department stamp on it and they agreed. In this scenario, is it likely that my chosen university in Australia may question why the pioneer and professor hasn't put his own stamp? Is it likely that the Australian university would contact my referee about it? Note that I am applying for a highly competitive scholarship in Australia.<issue_comment>username_1: Australian universities do not use or expect stamps. However, they may be familiar with the practices of universities in Asia and notice inconsistencies. I suggest this is unimportant. Upvotes: 4 <issue_comment>username_2: I'm at a UK University and we don't have individual stamps, nor do we put any official stamp on reference letters. I've written dozens of recommendation letters for students applying to Australian Universities and none have had a stamp. Almost all of those students were admitted, many with significant funding. Listen to your Prof, you're fine. Upvotes: 2
2019/10/13
197
877
<issue_start>username_0: Can you ignore the core curriculum at your university and still get into graduate school?<issue_comment>username_1: Some universities will allow you to start just about any course if they deem your professional experience relevant. Other universities won't. You will have to apply and, perhaps, have a meeting with a member of faculty. That is what I did anyway. Upvotes: 2 <issue_comment>username_2: This depends on the country. In the US it might in principle be possible because in the US degrees are not things with official legal status. In most (all?) European countries it would not be allowed, because university degrees are something officially recognized in laws. For example, in Spain it is a legal requirement (not something that depends on the university) to have an undergraduate degree in order to enter a master's program. Upvotes: 2
2019/10/13
1,309
4,941
<issue_start>username_0: In the second half of the 20th century there were a large number of English-language journals published in the West that carried only translations by Western academics of Russian-language articles. The original versions of these articles were written by Soviet academics and had appeared in Russian-language journals published in the Soviet Union. The English journals were commonly known as "translation journals" (not to be confused with translation *studies* journals, which are about the art and science of translation). Perhaps the most famous translation journal was *[Soviet Physics Uspekhi](https://iopscience.iop.org/journal/0038-5670)*, though *Physics Today* once listed [sixteen further translation journals published by the American Institue of Physics and associated societies](https://physicstoday.scitation.org/doi/abs/10.1063/1.2915931), and there may have been many many more in other scientific fields (and humanistic ones too, for all I know). Were such translation journals unique to the Cold War, or do they still exist today? A successor to *Soviet Physics Uspekhi* is still published today, under the name *[Physics-Uspekhi](https://iopscience.iop.org/journal/1063-7869)*, but I think it operates quite differently and for quite different purposes nowadays. In particular, its editorial office is in Russia, not in the USA, and it seems that it is Russian academics (possibly the authors, or someone working at their direction) who are supplying the English translations. So rather than serving the interests of English-speaking academics who wish to discover or disseminate the work of their Russian-speaking counterparts, the journal today seems to be serving Russian-language academics who want to bring their work to a wider English audience. Are there any scholarly fields today where a lot of research is being published in non-English journals, where these articles are being routinely translated into English **by translators unconnected with the original authors**, and where these translations are being published in dedicated journals?<issue_comment>username_1: In mathematics there are still regularly published English language translations of Russian language journals. Examples: * [St. Petersburg Mathematical Journal](https://www.ams.org/publications/journals/journalsframework/spmj) * [Functional Analysis and its Applications](https://www.springer.com/journal/10688) * [Russian Mathematical Surveys](https://www.turpion.org/php/homes/pa.phtml?jrnid=rm) * [Transactions of the Moscow Mathematical Society](https://www.ams.org/publications/journals/journalsframework/mosc) There are quite a few others. For most (if not all) of these journals it is often now the authors themselves who supply the translation (I believe this was sometimes the case long ago, although various US professional societies organized formal translation programs, sometimes with funding from agencies like the NSF; the issue is that translating research articles requires substantial understanding of the content, and this greatly limits the pool of potential translators). Here is a Japanese source journal: [Sugaku Expositions](https://www.ams.org/publications/journals/journalsframework/suga) Upvotes: 4 <issue_comment>username_2: Cover-to-cover translation journals continue to exist. There seems to have been a major loss of translation journals when the USSR broke up, but many of those journals still operate today. 
To give some Russian examples from fluid dynamics: * [Fluid Dynamics](https://link.springer.com/journal/10697) (Izvestiya RAN. Mekhanika Zhidkosti i Gaza, definitely translated professionally; the articles list the translator at the end) * [Journal of Applied Mathematics and Mechanics](https://www.journals.elsevier.com/journal-of-applied-mathematics-and-mechanics) (Prikladnaya Matematika i Mekhanika, ended in 2017) * [TsAGI Science Journal](http://www.dl.begellhouse.com/journals/58618e1439159b1f.html) (Uchenye zapiski TsAGI) My impression is that the translations are made by professionals with limited domain knowledge. I've noticed plenty of minor translation errors that would have been avoided if the translator had domain knowledge. Upvotes: 3 <issue_comment>username_3: In chemistry, [Angewandte Chemie (German, "applied chemistry") and Angewandte Chemie International Edition](https://en.wikipedia.org/wiki/Angewandte_Chemie) are twin journals having the same articles, Angewandte in German\* ([example](https://onlinelibrary.wiley.com/doi/full/10.1002/ange.201808371)) and the International Edition in English ([same example](https://onlinelibrary.wiley.com/doi/full/10.1002/anie.201808371)). Nowadays, one submits in English so the translation process is English -> German. AFAIK this is done by the publisher with the exception of communications which stay in English also for the German edition unless the authors supply a German version. Upvotes: 2
2019/10/13
767
3,444
<issue_start>username_0: I recently had a paper rejected: the two referees each pointed to a different issue with our work. One referee's criticism was fair, hence I feel the rejection decision was fair. The other referee, however, seemed to appreciate the paper and know the area extremely well (and thus, I expect to have them as a referee for any future submission). Their entire objection was based on an incorrect assumption -- if their assumption were true, I too would have suggested rejection. However, our results clearly showed this assumption is not true. Thus, their entire basis for rejection 1) could very easily be refuted in a response letter/revision and 2) may lead to future rejections if the referee is not corrected. I am wondering: Would it be appropriate to write a "response to reviewers" document to one referee to point out this issue? Is this at all common? I would not be appealing the rejection per se, but rather hoping that the referee better understood our work (with a respectful tone and appreciation for the thought they put into their review). If not, should I simply move on? I should also mention that any future submission would explicitly state that this assumption is incorrect, but I would rather address this directly with the referee, if it can be done respectfully.<issue_comment>username_1: This is a waste of your time, and, more importantly, a waste of the time of people who already selflessly gave away their time working to give you feedback that helps you improve your paper, and to help the peer-review system function. The best thing you can do is not ask the editor and referees to spend any more time on your paper, unless you are contesting the rejection decision in good faith. And use their feedback to improve your paper so that other referees and readers don’t reach the same erroneous conclusion that this referee did. Upvotes: 7 [selected_answer]<issue_comment>username_2: I'll disagree with username_1 about this: I'd think it's totally fair to send the referee a rebuttal through the editor in charge of the manuscript. (And I say that as the Editor-in-Chief of a journal.) I also agree with the OP's comment on the other answer: "when I review a paper and raise questions or objections, I often lament that my questions are not resolved or explained when the paper is rejected. As a referee, I am constantly learning/thinking about new problems from new perspectives: I think there is some (potential) value for a referee in having their questions addressed (and thus, helping their understanding of a new development they express genuine interest in)." I do think, however, that the better approach is to actually think about why the reviewer might have gotten the wrong idea about the underlying assumption. What was it that didn't explain the issue clearly and allowed the reviewer to go in the wrong direction with their review? Presumably, if a reviewer who is experienced in this field didn't get this important point, then others might as well. So the productive approach is to critically revisit your writing and, in any future manuscript on the topic, add sufficient discussion to ensure that this critical point is actually addressed and unambiguous. This way, you're not just correcting the opinion of that single reviewer, but of potentially any number of other readers who might have also wondered why you are writing a paper on an unfounded assumption. Upvotes: 5
2019/10/14
2,571
11,564
<issue_start>username_0: For context, I am a PhD student in bioinformatics, working partly in development of new statistical models and partly in applying models to data. I recently came across a paper which describes an interesting approach to clustering data. This method would have direct use for data I was working with at the time. The model itself was also interesting, and I am interested in using ideas from their paper to solve similar problems in my field. I tried to use their published modelling software to analyse my data. It crashed my R session. For context, errors in R code rarely cause the session itself to crash; usually they result in an error message, which one may easily find the source of. Unsure of whether this was my mistake (due to a coding error or data misspecification), I retrieved the data from their publication and attempted to run the code which they state produces the outputs in their paper. This led to the same issue. I flagged this issue on the software repository, and another user mentioned having the same issue. I have replicated the issue in multiple independent environments using different software versions. I have also informed them that I have narrowed the error down to one or more lines of code, though I cannot exactly explain why it occurs (it would be too time-consuming to debug as I am not familiar with their code). The authors initially did not respond to the issues raised on the repository, and only did so after I emailed the corresponding authors several weeks later. They have only stated that the software works on their system and suggested that I try different software versions, which I have now done with no change in outcome. My initial goal was to ensure the issue was reproducible, and to bring it to the authors' attention. If the issue remains unresolved, and the authors remain largely absent, if not uncooperative, how should one deal with this? I do not feel that this amounts to research misconduct; however, I also feel unable to trust the results in their paper, given that the code which should replicate their results does not function in any respect.<issue_comment>username_1: As an R package author myself I can only say that you did the best you could. I'm grateful if users flag issues like this to me, and I do my best to correct errors in my code (which do happen) responding to such requests. I'm not sure how insistent you were with the authors; they may have said "try a different version" to fob you off, but may still react if you tell them "this-and-that doesn't work either". It is easy to get lost in changing versions of dependent packages etc., so occasionally an analysis initially gives a certain result and then something like an update of a required external package happens that breaks it. However, I'd normally expect the authors to collaborate in getting to the bottom of the issue. Assuming that the authors remain uncooperative, you are right not to trust their results and not to use their method. To what extent you want to call them out is up to you really. Maybe their results were fine originally and they just can't find the time to investigate the present error. Who knows? But surely if others discuss the package, feel free to share your experience. There's too much stuff out there that doesn't work! I'm sure this is an everyday story, unfortunately. 
Upvotes: 2 [selected_answer]<issue_comment>username_2: ### R crashing > > errors in R code rarely cause the session itself to crash; usually they result in an error message, which one may easily find the source of > > > yes. There are a few situations where session crashes are more probable: * with compiled code and * libraries installed on the system (as opposed to R packages) which are incompatible across versions/OS boundaries * particularly libraries that are "close" to the OS (I've had trouble in the past with interactive graphics across OS boundaries Linux vs. MacOS vs. Windows) Thus, if the lines in question can (easily) be replaced by equivalent (though maybe slower or more memory consuming) R code, the crash may be avoided and the scientific merits of the model may be evaluated. We may have an example here of the famous "premature optimization is the root of all evil". --- ### Amount of cooperation to be expected? As author and maintainer of scientific/academic software packages I certainly appreciate bug reports etc. I even more appreciate bug *fixes*, btw. However, I also have only limited resources available to maintain such software, and the further back the publication, the fewer resources (e.g. a new employer is not willing for their staff to maintain projects at former universities) typically stand against the increased maintenance requirements. (Probably less of a problem here, as the paper is quite recent compared to some code that was developed years and years back.) Please also have a look at [Why do many talented scientists write horrible software?](https://academia.stackexchange.com/questions/17781/why-do-many-talented-scientists-write-horrible-software) The authors may be seeing the publication of the code as them being extremely nice in giving the public not only the abstract description of the model but even an example of an implementation. In fact, they may even have had an uphill fight to get permission to publish their code under an open license (I've been in the situation of *not* being permitted to do this by an academic research institution.) Of course it would be much nicer if they had written more rugged code in the first place or would now help you debug. But you don't have any kind of hard right to their time. I'm a chemist and we sometimes have lab procedures that are difficult to describe in such detail that another lab with somewhat different conditions is [easily] able to reproduce them. Thus, people visit other labs to learn their techniques - this means a whole lot of effort for both sides and IMHO that needs to be appreciated. Similarly, I appreciate if a package maintainer does put in the time to deal with my bug report. And in consequence, I try to make it as easy as possible for them. There may be a mismatch in software development ability/possibilities here: guessing from the crashes that their code isn't at the highest level of robustness and that you didn't find them using continuous integration etc. vs. you providing them with a Singularity container: they may not know how to use that or may not have the possibility to use it or may not be willing to put in the time to get that particular virtual machine up and running. I may add that in many places (including universities and research institutes) there is in addition daunting bureaucracy to get the IT department to install further software on their machines (and they are the only ones with installation rights). 
I teach for the [carpentries](https://carpentries.org/) and it is not unusual to find that course participants do not have the necessary software for this reason - even if they are officially sent to the course. You may get further if you *ask* them whether they do have the resources to resolve the issue with your help and *how* you can help resolve the issue. --- ### Misconduct? > > I do not feel that this amounts to research misconduct > > > No, it isn't misconduct - it's just that the situation could be nicer. It would be academic misconduct if the code, at the time of writing and submitting their publication, had not produced the results described on the authors' machines (incl. crashing and not producing any results). In contrast, * writing code that is not portable or * fragile in the sense that it is quite likely to break when its surroundings (OS, interpreter, dependencies) evolve and * abandonware are not scientific misconduct even if it is not "best programming practice". ### Trust in their results? That's a difficult one. * On the one hand, with the possible exception of computer science, the scientific ability of the authors may be quite uncorrelated to their software development abilities (see linked answers above) * On the other hand, if their software doesn't calculate what they think (claim) it does, then the scientific content may also be compromised. For the case in question: * I tend to think that crashes (or stopping with an error message) are often relatively harmless in terms of scientific integrity of the paper. Short of blatant misconduct (claiming results that weren't obtained) this points to the code not being robust/maintained in an evolving environment (or data/formatting subtleties) and not necessarily an incorrect implementation *on their system*. * I'd be more concerned about the lack of unit tests confirming the results of calculations that actually run through: statistics offers lots of possibilities for logical errors which lead to wrong but often quite plausible numbers. That's what is scary to me... ### What to do * Be extremely nice to them. In many fields, it is not (yet) standard to publish code at all. They did you a favor in giving you an implementation that presumably works on their machine. In fact, as long as it takes you less time to fix & test their package than to write & test your own implementation of their paper, you have a net gain! I don't want to insinuate that you are not nice to them. However, my experience of some 10 years as (more or less active, according to the resources described above) maintainer of an R package is that the majority of help requests are answered with RTFM, many bug reports do not provide a minimal working example that I can reproduce, and there are a few insistent and obnoxious help requests that suck up your time like a black hole because they try to offload the time they should put into learning R onto you, the package maintainer. While I try hard to treat all requests in a professional, friendly and timely manner, I also sometimes fail in that (the most frequent failure being: timely). (And I have to say that those bad experiences are offset by also receiving extremely well written issue reports, sometimes with immediately usable pull requests, finding online contributors whom I'd otherwise never have known, and even occasional "thank you for this package" emails. But there's a whole lot of truth to bad interactions producing far more of an impression than good interactions.) 
*You may just have had the bad luck to happen to be the one after a couple of toxic requests and/or may have inadvertently triggered such an alarm with the authors.* Again, while that is not the ideal interaction, your best bet to get the issue resolved is to be so supernice as to convince the authors that not all interactions on the internet are bad... Gives you good karma, too. * Encourage collaboration and make it easy for them: do put in the time to dig into those lines for debugging. Or write an R workaround and ask them to be so kind as to review your suggested changes. * In order to get the code up and running, ask them whether they can provide you a `sessionInfo()` of their system and the versions of the relevant underlying libraries. The logical step after you have tried several configurations that didn't work is to try and get a reproduction of "their" system to work. * Of course, it is completely fine if you decide to stop sinking time into a not-so-well designed package and either + write your own implementation of the model described in their paper, or even + completely abandon that model if you distrust the science behind it. Upvotes: 2
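As a hedged illustration of the bug-report advice above (a minimal working example plus a `sessionInfo()` record), the sketch below shows the kind of self-contained script one could send to the authors. The package name `theirpkg`, the function `their_cluster_fn`, and the file names are placeholders, not the actual package from the question.

```r
# Hypothetical reproducible bug report: one short script the authors can run as-is.
# "theirpkg" and "their_cluster_fn" are placeholders for the published package.
library(theirpkg)

# Record the environment up front: if the call below kills the whole R session,
# this file still survives and can be compared with the authors' own sessionInfo().
writeLines(capture.output(sessionInfo()), "sessionInfo_bugreport.txt")

dat <- read.csv("paper_example_data.csv")  # the example data set from the publication

# try() catches ordinary R errors; a segfault in compiled code will still abort
# the session, which is itself useful information to put in the report.
res <- try(their_cluster_fn(dat), silent = TRUE)
print(res)
```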
2019/10/14
695
2,927
<issue_start>username_0: I have an offer letter from a very well-known group in my field for a postdoc position. The proposed starting date would be early next year. However, I am currently also applying for a faculty (assistant professor) position at a different institution. Should I mention this offer in my application for the faculty position (in my CV or cover letter) and/or include the official offer letter? *Update:* there is another subtlety for one application. One faculty position I am considering requires experience abroad (i.e. not in the same country) of at least 1 year. I only have 6 months, but the postdoc offer would be abroad, giving me 1 year in total if I take it before starting the faculty position. Should I then mention the offer? My field is health and medical sciences.<issue_comment>username_1: I think that it would be unwise to include a copy of an offer in another application. But whether you inform them of the existence of an offer is a bit more subtle. I doubt that anyone will rush to hire you just because you have another offer. They will evaluate you on other things as usual. So, at best, mentioning the offer initially gets you nothing. However, later in the process if you need a decision and they are delaying making it, you can let them know that you have a deadline. Whether that helps or hurts is also subtle. Whether it is wise or not depends on the nature of any relationship you have been able to build with them. In the absence of any relationship, it might hurt more than help. It is easy to just cross you off the list unless they are very interested and have few other interesting applicants. Upvotes: 5 <issue_comment>username_2: To be blunt and clear, if anyone had done this on any of the previous hiring committees I have been on, we would have rejected them instantly. They would not have even been long-listed, let alone short-listed. Having another job offer is not a reason for anyone to hire you. Indeed, it indicates a number of negative things: 1. You are the sort of person who goes all the way through the process of getting one job but still fishes around for something "better": read, you waste everyone's time 2. If they do want you, you are likely to try to pull off some sort of bidding war, increasing your package etc. That's annoying to everyone. 3. You are implying that the research group you have a PostDoc offer from is somehow better than the place you are looking for a Faculty Position from. Like some temporary position is as good as a tenure track one. Ouch! 4. You are lacking in subtlety, diplomacy, and general workplace etiquette. Basically, don't do this. It is a very bad idea. Upvotes: 4 <issue_comment>username_3: You do not show a letter addressed to you personally to other people without the sender's (explicit or implicit) consent. Never, if you have any doubt, unless you get a formal court order to do so. Upvotes: -1
2019/10/14
2,646
11,704
<issue_start>username_0: I am a graduate student; to finish my degree I need to build methods that outperform what is already there. An issue I came across is that two papers report performance that is way higher (I mean more than 20% higher) than what results from my reimplementation. This could be due to two reasons: 1. I missed something during the implementation, which is what I have been telling myself. For months, I tried all possible combinations and possible paths. One of the methods is straightforward; still, I could not reach their claimed performance. I contacted the corresponding authors, and no one replied, so I tried to contact the other authors. For the first paper, an author replied and sent me the code, telling me to keep all details "confidential". Well, it turns out they are not using the data they claim in their paper, so of course their results are different from my reimplementation; and my implementation was correct. For the second paper, an author also replied; they didn't send me the code because they say it is easy to implement, but they confirmed that what I did is correct. Still, I couldn't understand why there is such a difference. Both papers are published in journals with impact factors below 2. Their web servers are not working. 2. They are not honest. Now I am stuck: my method does outperform my reimplementation of their methods, but not what they claim. For the first paper I can't say anything because "it is confidential"; for the second paper I can only confirm that I correctly implemented their method for the most part (based on my chat with the authors). I know that I probably could not publish this part of my work, because who is going to believe a young scientist who has just started her way? But I am also not sure how the committee is going to believe me. What can I say or do? Please help me.<issue_comment>username_1: In the first place, you should consult your supervisors about this. The code for papers is often rushed and unfinished, and what works on one machine may not work on another for a number of reasons. The most reasonable way is to let your supervisors know that you implemented both methods, communicated with the original authors (mention only non-confidential things/say some things are confidential/ask authors for permission to discuss the implementation with your supervisor), and yet you did not reach the performance claimed. In their senior academic capacity, they are better equipped to decide what to do with regard to the politics of the department/field/research teams, are bound to get quicker and more elaborate responses from the authors of the papers, and can handle potential fallout should anything go wrong in the process. I would not advise pursuing this matter on your own, and surely if you have doubts about something this important to your project, it is reasonable to seek their advice, and they will understand that. Upvotes: 2 <issue_comment>username_2: People can be dishonest. They can also make honest mistakes and publish bad science. Don't assume that it is you who has an inferior result. And don't assume that a doctoral committee won't believe you. If they are competent to judge you without the earlier results they should be competent to understand what you have done. However, I have two suggestions. The first is to walk through what you have done with your advisor and/or another faculty member who is most competent to understand your work. You may, indeed, have the best results. If you can get support there, then the larger committee should be no problem. 
I don't think that you need to hide from your committee members the communication you got. It may be necessary to explain why you can't believe the reported results from the other paper. I don't think that "confidentiality" really applies here. But the other is a bit harder. See if you can figure out exactly where the other group failed to match their methods to their results. If you can do that, then you have much stronger evidence for your own work. The evidence you mention here seems pretty strong to me (an outsider) that the other paper has a problem. There is no reason not to contradict it if it is incorrect, for whatever reason. Upvotes: 5 <issue_comment>username_3: You can write that you used your implementation of the competing method for your results, and that you were not able to reproduce the published results. Make your code available so people can check. It seems that the authors of the other papers didn't publish their code, so nobody can say you should've used that. Upvotes: 3 <issue_comment>username_4: There is absolutely no reason that you can't publish a paper that says "We compared our method to methods X and Y. Since the original code was not available for X and Y, we reimplemented the methods to the best of our ability. The code for these reimplementations is available in supplementary files A and B. Our new method outperformed the reimplementations of X and Y by z%. However, it should be noted that it was not possible to reproduce the reported results for X and Y." People who want to know will have to look at your re-implementations and decide themselves if they think you have correctly re-implemented. Seniority has nothing to do with it - be transparent, and the world will judge if they believe you or the people that won't release their code. Upvotes: 8 <issue_comment>username_5: > > to finish my degree I need to build methods that outperform what is already there > > > No, that is not true. You need to deliver a piece of proper scientific work and advance knowledge, and that does not depend on *what direction* your findings point. Of course, things are easier and more pleasant if your implementation is better. But the actual scientific part of your thesis is to study both the old approach and your approach scientifically and then conclude whether one is better (and possibly in which situations). The difficulty in your situation is to prove that the discrepancy with the literature is not due to your incompetence or lack of hard work (=> you deserve a bad mark) but actually due to "nature" not being as it was supposed to be by the previous paper. What you can and should report is * that you were not able to reproduce the findings in papers 1 + 2, * that, in consequence, you have been in communication with the authors. * Importantly, that your implementation has been confirmed as correct by private communication with the authors of paper 2 and by comparison with (confidential) code you received from the authors of paper 1, again by private communication for that purpose. * If > > Well, it turns out that they are not using the data they claim in their paper, so of course their results are different from my reimplementation. > > > means that you got the data set they actually used and got the same results with that, then you can also report that for a related data set, the same results were obtained.
If not, it may be possible to kindly ask the authors of papers 1 + 2 whether they'd run a data set you send them and give you the results of their implementations so you can compare that to your results. You can then report (hopefully) that equal results were obtained on a different data set and thank the authors of those papers for running your data. The last two points should make amply clear that the discrepancy is not due to a fault in your implementation - which is what counts for your thesis. As a personal side note, I got the top grade on my Diplom (≈ Master) thesis, which (among other findings) found that the software implementation I was using did not work as it was supposed to. I was able to point out a plausible and probable reason for that bug (which may have been a leftover debugging "feature") - which is much harder for you as you don't have access to a running instance of their software that you can test (= study) to form and confirm or dismiss hypotheses about its behaviour. --- As an addition to what @username_2 explained already about the possibility of honest mistakes in published papers: As scientists we tend to work at the edge of what is known. Which also means that we're inherently running a high risk of not (yet) knowing/having realized important conditions and limitations of what we are doing. We thus also run a comparatively high risk that tentative generalizations we consider may turn out to be not all that general after all. Or that we may be plain wrong and realize this only later (or not at all). I believe it is very hard for humans to be completely aware of the limitations of the conclusions we draw - possibly/probably because our brains are "hardwired" to overfit. (Which also puts us into a bad starting position for avoiding overfitting in e.g. machine learning models we build) The take-home message from this is that we **need to be careful also when reading published papers**: we need to keep in mind the possibility that the paper is wrong, contains honest mistakes, or is not as directly applicable to our task at hand as we believe at first glance. --- > > I missed something during the implementation. > > > I experienced something similar once when I was also implementing a reference method from the literature (related but different field). It turned out that different defaults in the preprocessing of the data caused the difference - but I only found this after I had the bright idea of trying to omit a preprocessing step - although the model doesn't make much sense physically without that step, but the paper didn't mention any such step (neither do many papers in my field that do use that step, because it is considered necessary for physical reasons). --- > > 2. They are not honest. > > > While that is of course possible, I've seen sufficient honest mistakes to use [Hanlon's razor (which I first met as Murphy's razor)](https://en.wikipedia.org/wiki/Hanlon%27s_razor): and *not* assume dishonesty or misconduct unless there are extremely strong indications for that. --- **Proving superiority** may in any case be something that is *impossible* due to limitations in the old paper. E.g. if they report validation results based on a small number of cases, the uncertainty on those results may be so large that it cannot be excluded that the old method is *better than it seemed*, and thus truly improved methods later on will not be able to demonstrate their superiority in a statistically sound manner.
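To make that last point concrete, here is a minimal sketch (not from the original answer) of how wide the uncertainty on a reported accuracy can be when the test set is small. All numbers are made-up placeholders, and the Wilson score interval is just one common choice of binomial confidence interval.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# A paper reporting 85% accuracy on only 40 test cases:
print(wilson_ci(34, 40))     # roughly (0.71, 0.93) -- a ~22-point-wide interval
# The same point estimate on 1000 test cases is far tighter:
print(wilson_ci(850, 1000))  # roughly (0.83, 0.87)
```

With only 40 test cases, a competing method scoring, say, 90% on the same set cannot be declared better in any statistically sound way, even though its point estimate is higher.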
Still, such a shortcoming of the old paper does not limit the scientific content or advance of your work. Upvotes: 5 <issue_comment>username_6: In addition to the other answers, you should consider publishing your re-implementation. Then any reviewers can check if they think your results are plausible or if they spot a flaw in your re-implementation. In the first case, it is right to say "We implemented paper X, but could not reproduce the claimed efficiency", and in the second case the flaw found by the reviewer may help you to improve your re-implementation, so you achieve a similar result. Most reviewers will not debug your code, but you did your best effort to allow anyone to verify your claims of lower efficiency, and at least your paper is as honest as possible. If the algorithm is interesting, publishing an open source version may get some users who point out issues with your code (or contribute improvements) as well. But make sure not to be too close to the confidential code, as the original authors may claim copyright infringement. You may use clean-room reverse engineering with another person, or at least do it yourself by just using the given code to write down the parts missing in the paper and then reimplementing it from the documentation and not from the code. Upvotes: 1
2019/10/14
795
3,659
<issue_start>username_0: I worked on a paper in computer science (CS) for approximately half a year, so I decided to send it to the JUCS journal. They kindly informed me that they were not able to find enough reviewers for my article and so I decided to send it somewhere else. After that, I sent it to another journal that covers general topics in CS, but it got rejected. The justification was that the article lacked the necessary algorithmic formalism, but the problem is that only one reviewer sent out their comments after almost six months. I thought my paper was almost forgotten, because when I wrote to the editor I never got an answer. After this rejection I submitted my work to a conference, and it finally got accepted. The recommendations were to proofread the paper and, well, just to add more references. I consider it valuable to mention that the journals and this conference are indexed in Scopus. The problem that I have is that I found that there is a top ACM conference in this field. So, I do not know if I should withdraw my paper from the conference that already accepted it and re-submit it to this new conference. The reason for doing this is that I am very curious about what the feedback of the reviewers in this conference would be. I know that maybe it will get rejected, because I saw that the submissions come mostly from well-renowned universities such as MIT or Stanford. Bottom line, what should I do in this situation? Any advice? Should I just present my research at the conference that already accepted it, and after that send it for comments to a professor in this area? Thanks<issue_comment>username_1: This is clearly a judgement call and I can only offer an opinion. But if you withdraw from the conference at which it was accepted you won't be able to resubmit there if you are rejected by the other. This narrows your future options for this paper. I'd therefore suggest that you stay with what you have (a bird in the hand is worth two in the bush). But this is just one paper. Write more papers and expand your knowledge about good places to submit them. One paper is just one paper. It isn't your entire career. Upvotes: 3 <issue_comment>username_2: I strongly recommend that you not withdraw the article for the reason you have given. You probably have not thought about it this way, but I would consider it disrespectful to the volunteer editors and peer reviewers who took the time to review your article and give you feedback with the expectation that you will present your research at the conference to which you submitted. Although there are many good reasons to withdraw an accepted article, I do not consider yours to be one of them. You would be wasting their time without giving them the reward for their volunteered time: the opportunity to help participants in the conference for which they volunteered to see good research like yours. Even though many people might disagree with my view, please be aware that if ever in the future you were to submit an article somewhere and the same editor were to handle your article as an editor or reviewer (and computer science peer review is usually single-blind), this experience would probably stamp your name in their mind, and not in a good way. Upvotes: 2 <issue_comment>username_3: On top of what username_2 said, you've already had two or three rejections. Clearly this work is not in high demand. You don't even know if you'll get into that other specialty conference. Bird in the hand...
Just go with the current plan and move on to other things. You don't want to keep messing with this any more. Upvotes: 2
2019/10/14
935
4,122
<issue_start>username_0: I am in my seventh year after my PhD, and am applying for a tenure-track position in Mathematics. I am working with a job coach, who is suggesting a 2.5 page cover letter for tenure-track mathematics positions. This seems long to me; I always thought that in math 1 page was the max, but are the rules different when applying mid-career?<issue_comment>username_1: I think that the length of the cover letter is the type of thing that differs greatly between research institutions and small liberal arts colleges. At the small liberal arts college where I work, 1-2 page long cover letters seem to be pretty common. (I'm also in math.) There's no length requirement of course, but it's expected that in addition to the normal stuff (who you are, a bit about your teaching and research experience, potentially stuff about working with undergrads if it is relevant) applicants say a bit about what makes them a good fit for the school. Basically, small liberal arts colleges tend to have their own distinctive personalities, and it can be a major problem if your cover letter comes across as super generic. Also, remember that a lot of small schools don't really have research groups in any particular area. Thus, if they're hiring an analyst one year there is an excellent chance that it's because the previous analyst just left / retired. People will read your research statement of course, but many won't make it past the first page or two. In contrast, every member of the department will read your cover letter. So if the cover letter you send to our small liberal arts college looks like you could have sent it to a large state school as well, that'd be an issue. Having said all of this, it's not like you need to be super long-winded in your cover letter. If you can say everything that you need to in less than a page that's totally fine! It's just that a lot of people will be reading your cover letter, so if you want to explain why you're committed to working at a small school like ours or explain the context of your decision to apply to our school (which could very well be the case if you're applying mid-career), this would be the place to do so. I don't have any direct experience with being on a hiring committee at a research department, but my impression has always been that the cover letter is considerably less important at these sorts of places. (Perhaps it's even seen as an unnecessary formality? I've always found it weird to read 'Enclosed you will find...' when it's just a PDF file on MathJobs and not even a physical letter.) For example, when I was a postdoc I had a number of friends that used a computer script to insert the names of the schools they were applying to into a generic cover letter. It's not clear to me that this affected any of their searches in the least. Hopefully someone with some experience at these sorts of places will respond to your question, but I think that at research schools there are much more likely to be well-defined research groups with members that are going to be able to read your research statement carefully and already know your letter writers, and that these sorts of things are likely to vastly outweigh anything you could write in your cover letter. Upvotes: 2 <issue_comment>username_2: As someone who has chaired several search committees in the last five years, I can tell you that there's very little chance that I'd read a cover letter that was longer than 2 pages.
I've got hundreds of applications to read on mathjobs.org, so there just isn't time for this. The cover letter can be helpful in explaining why you're particularly interested in our position (a spouse working in a nearby city, family who live in the region, interest in research collaboration with faculty in other departments.) It shouldn't take more than a couple of paragraphs to explain that kind of stuff. You will have a separate CV, research statement, and teaching statement, so those aspects of your application don't belong in your cover letter. What is your coach suggesting you put into this 2-3 page long cover letter? Upvotes: 1
2019/10/14
2,162
9,039
<issue_start>username_0: I was working for a company where we put forward a paper with other colleagues. I was the first author and the paper got rejected. After that I left that company. Recently, I found out that they published the paper removing my name from the list of authors and they simply put me in the acknowledgements. The paper was basically the same. They did not even want to answer me when I asked for clarifications. After I asked the editor to intervene, their excuse was that because I left the company, they could not put my name as a researcher for their institution and that the research belonged to them. Because of that I had no rights for authorship. They even threatened me, in case I have drafts of the first submission, stating that I had no right to take them with me when I left. Can such a claim be made and authorship entirely removed based on project ownership? I was clearly the main contributor. Thanks for your answers<issue_comment>username_1: This would probably depend on the contract you had with the company when you worked for them. It is possible that they do, indeed, own your IP for that period. Such contracts are actually pretty common. You may also have a non-disclosure agreement, preventing you from discussing your work there. But there may also be national or other laws that limit what can go into contracts of this sort. But be aware of the consequences of the employment contracts you sign. Some companies will negotiate the terms within reason. Others not so much. Upvotes: 2 <issue_comment>username_2: This will also depend on legislation. The particular IP rights of concern here are the so-called [Moral rights](https://en.wikipedia.org/wiki/Moral_rights) which include the right to attribution. In some legislations (e.g. in continental Europe) the moral rights cannot be transferred or waived (only the rights for economic exploitation are transferred to the employer), while in others (e.g. US) this is possible. (It is probably their right to demand that you do not take any work-related documents such as the drafts with you. Where I am, even academic employers can do that, and I've had one such employer who exercised that right. I'd expect your [competing] rights to keep material that allows you to defend/prove your position in case of litigation at court between you and the [former] employer to be extremely dependent on legislation. [Here in Germany, you would probably not be allowed to keep the draft - instead you'd have to ask the court to order the employer to show that draft if needed.](https://www.arbeitsrecht-weltweit.de/2018/01/30/zurueckbehalten-von-geschaeftsunterlagen-gefahrgeneigte-taetigkeit/)) --- **Update: link to academic authorship rules** @CaptainEmacs correctly points out that academic authorship follows an additional set of rules. I expect that at some point during the submission of the paper the [remaining] authors signed paperwork confirming, e.g., what the [Committee On Publication Ethics](https://doi.org/10.24318/cope.2019.3.3) sets as a minimal standard for journals regarding authorship: > > At a minimum, authors should guarantee that they have participated in creating the work as presented and that they have not violated any other author’s legal rights (eg, copyright) in the process > > > IMHO, signing the respective statement "pulls" the academic authorship rules into a legally binding contract. --- For OP, I see three possible approaches (short of letting the affair slide): * Getting legal advice.
Many academic institutions do have ombudspersons that are experienced with such situations. Even if the old employer is not an academic institution, it may be possible to get a legal opinion at a nearby university. Even if the ombudsperson says they are not allowed to advise outsiders, they should at least be able to point OP to someone (a lawyer) who is knowledgeable about/specialized in such matters. * Contact the journal. As the journal's reputation and integrity depend on their contracts with their authors being as they claim, they should look into this. * [COPE also has a database of cases](https://publicationethics.org/guidance/Case) and OP may check whether they have given recommendations for situations like theirs. I had a quick glance and the involvement of a non-academic employer may make this a case without precedent. Usually, COPE seems to recommend that authorship disputes should be settled by the [academic] institution, which is obviously expected to have a procedure in place for such happenings. These reports (IMHO rightly) notice that it is very difficult for a journal editor to actually find out authorship - they can usually rely only on what they are told, whereas an institutional investigation has access to the actual documents/emails/draft versions etc. This is going to be *very* difficult in OP's case. Upvotes: 3 <issue_comment>username_3: Other responses talk about the law. However, if the paper, as I assume (after all this is academia.SE), is academic, there is another set of values that rule the matter, and that is defined by the academic rules for authorial attribution of scientific contributions; unlike patent or copyright ownership, they are not waived by working for a company. Moral rights or not - if you have significantly contributed to the paper (as evidenced by the first submission round), academically speaking, you are a co-author. Removing you is unethical and a breach of academic rules. It is academic misconduct. There is no "ghostwriting" in academia. Laws or not: the decision to take away a doctoral title is in many countries not a legal, but an academic decision, to be decided by the academic institution that conferred it and not by a court. Similar here: the judgement whether this is academic misconduct is happening on the academic, not the legal level. **Note**: the fact that they demand the drafts seems to indicate that they know this and try to deprive you of the evidence that you have been a co-author on a virtually identical copy; which, if in your possession, would prove that you were a co-author and should remain so for the resubmission. This may well be a legal trick on their side to improve their position in an academic misconduct investigation. On that part of the matter, you may need the support of a lawyer if you intend to fight, because of course, the company may have the ownership of the ideas and the copyright. **But not the right to remove you as co-author.** Upvotes: 7 [selected_answer]<issue_comment>username_4: In addition to @username_3's fine answer: An academic paper is *inter alia* a testimony by the authors that the facts and theses presented therein are correct and true to the best of the authors' knowledge. Centuries of experience have shown that the scientific method rigorously requires this personal accountability.
Although the authors' names traditionally are placed under the title of the paper, the names might as well appear at the end of the document, under a declaration such as "We, the undersigned, do hereby aver and testify that the facts and theses presented above..." etc. etc. If you remove your name from the authorship of a paper, you are stating that you no longer believe that the contents of the paper are scientifically sound. If *someone else* removes your name from the authorship of a paper, *that person* is stating that *you* no longer believe that the contents of the paper are scientifically sound. This is fraud; there is no other term for it. Your ex-employer may be free to make any use of your work product, but your ex-employer is not free to make false claims about what you are prepared to testify to. If you were disposed to bring a lawsuit, this would seem to be your best basis on which to seek relief. Upvotes: 3 <issue_comment>username_5: > > their excuse was that because I left the company, they could not put my name as a researcher for their institution and that the research belonged to them. > > > Other answers have addressed the general question; I'll just say that this excuse is invalid: 1. They can still be the owners of the IP even if you are recognized as the author - you simply transferred ownership. 2. Companies publish papers all the time where some of the authors have already left and it is no problem at all for them. Upvotes: 3 <issue_comment>username_6: Others have already argued that this is not OK, but probably you are asking yourself what to do now. You haven't insisted enough with the editor. **Ask formally for retraction**. Tell them the other parties refuse to collaborate and you cannot solve the issue with them. Present the editor with all the evidence you have, and especially put them in contact with the editor/journal that handled the first rejected submission. They will surely have the first submitted manuscript on record, and that is strong evidence in your favor. *(Converted from a comment upon moderator request, even though it does not answer the stated question.)* Upvotes: 3
2019/10/14
823
3,068
<issue_start>username_0: I've asked a professor who I did research with over the summer to write me a letter of recommendation for graduate programs, but he hasn't responded for over a week. I've resent him an email and have received no answer, but a peer who also worked in his lab over the summer got an almost immediate response when asking for a letter. We have both done the same research in his lab and known him for the same amount of time. I don't mind if he declines, but I haven't received any word from him. Should I just move on and ask another professor?<issue_comment>username_1: Stop by his office (if in town) or pick up the phone. Just ask him directly if he'll do it or not. A lot of people are not so email/text centric. He'll either say yes or no, but at least you'll know the answer. (And if he says no, just drop it and move on--don't argue.) Personally, I wouldn't cross him off and move on just because he didn't respond to email. Especially if he's the best one. He is probably just busy. Or a procrastinator. Voice or face to face are direct. An email is like a letter. Easy to ignore, to get to it later (which becomes never), etc. When you do talk to him, if he says he's unsure of what you did, offer to give him some bullet points and a BRIEF explanation of what you're going after and key features to emphasize. (In the nature of making his job easier.) Upvotes: 2 <issue_comment>username_2: No, contact him again. You gain nothing by "moving on", and you deserve a response. But let me paint a picture for you. I'm that professor. I'm very busy and get lots of enquiries, both IRL and virtual. I try to answer mails as they arrive, but can't always do so. Busy. Interruptions. Classes. Exams to write. Exams and homework to grade. Lost in thought for a day over a nasty research issue. Busy office hours. So, some things slip. And the new mails keep coming in - piling up, 30 or more per day. And I don't really have the time or mental energy to scan older mails. So, there are gaps. Or maybe I get a chance to scan it, but your email header doesn't wake me up enough to deal with it. I also don't think to check my junk box in case a stray word in a valuable mail has triggered the mail bot. Well, I try to get it right, but I'm not perfect and don't have an assistant to remind me about stuff. Also I'm old and so my mind wanders and my kids have issues of their own that I need to deal with. But, I'm not a monster, just a human. If you ping me again I'll probably say to myself "oh crap, I shoulda done this last week". And likely deal with it. If you pop into my office (lucky to catch me) I'll think the same thing and probably apologize. I'm not avoiding you and my job is to help you, even if I'm unwilling to write that letter for you. But I realize you need a definite answer. If you don't ping me then maybe (just maybe) I'll remember that I "shoulda done that" and catch up, but maybe not. Ping me. It's ok. Let's see how we can fix it. I owe you that. It's part of why they pay me for this wonderful position. Upvotes: 1
2019/10/14
1,138
4,455
<issue_start>username_0: I am 26 years old and I got an offer for a PhD in Machine Learning at Oxford. However, currently I am a software engineer at Facebook and I am living a very "well-off" lifestyle. I applied for the PhD last year, not knowing if I would get it, and I decided to work as a backup. Now that I have a great-paying job it is hard for me to want to do the PhD even with the scholarship. However, I know the work I can get after completing a PhD at Oxford is much more interesting to me (my work is currently not very interesting) and I could be making more than I am now. My question is: will it be worth it to take 3-4 years to do that PhD (and graduate around 30) or should I just stay at Facebook even though the job is not very interesting for me?<issue_comment>username_1: Stay in industry (I'm an IT person and I do have a PhD). At least until you have enough savings to be able to pay for the PhD and all your expenses for 4 years straight up without working. Your job may not be as interesting now but you can slowly transition, learn or specialize in what interests you, meanwhile a PhD only teaches you how to bull 4 essays in less than 1 week, cite in APA and describe stuff. You could have the cure for all types of cancers but to academia it is useless unless you get into the paper publishing game of writing useless theoretical frames explaining where the word cancer came from and who researched that word before even explaining what's the cure. (Just saying this will hurt a bunch of sensibilities. Expect many downvotes) Plus a 3-4 year PhD is already out of the question for the demands of the modern world. 2 years is the average now. The option of working and doing a PhD simultaneously is hellish for your health (among my PhD generation I ended up with spinal column and sciatica problems, a friend got facial paralysis, there were divorces, losses of jobs, ulcers, a mental crisis...etc. And that IS sadly normal.) Oh! and to get some scholarships normally you are required to be a full-time student, so you can't even work if you want the scholarship. Besides, you can do a PhD at any age; while moving up in your career and earning good money becomes difficult with age. Plus you lose out on retirement savings. In sum, unless you wish and plan to enter academia as a full-time professor or researcher I would recommend you skip the PhD for now or analyze your life plans better before choosing. Think about why you really want to do a PhD. Upvotes: -1 <issue_comment>username_2: You must make this decision - no one else can make it for you. Things to think about: * Uninteresting jobs are bad. If you can't see your job getting more interesting, then say 5-10 years down the line there's a good chance you'll get so bored your performance starts suffering, and then you might get fired. * You know the industry very well since you have a first-hand insider view. What jobs do you enable by getting a PhD? If they're more interesting jobs (do you actually *know* they're more interesting, or do you just think they're more interesting?) that you will never be able to do without a PhD, that's a powerful reason to get a PhD. On the other hand, it's conceivable that you'll be able to do those jobs without a PhD simply by staying put, acquiring more experience, and doing well. You'll know better than anyone else. * Do you have a significant other or a family to provide for? If so, what do they think?
Remember that once people experience a luxury, going back to a less luxurious lifestyle is typically a big no-no even if it was previously acceptable. Even if you can accept it, your significant other / dependents might not. * What does your current manager or other senior colleague think (if you trust them enough to talk to them)? Just as important, are they willing to rehire you after you graduate? * Is it actually economically better to do a PhD? Run some calculations. If you stay put, you earn $X this year, $X + [increment] next year (if you have one), etc. If you go to Oxford, you earn $Y for 3 years where Y < X, but then after 3 years you earn $Z where Z > X. How many years does it take before the second option exceeds the first, if it exceeds the first at all? The fewer years this takes, the more attractive the PhD becomes. (A minimal script for this kind of break-even estimate is sketched below.) These things are too intricate to say more, unfortunately. As above, you must make the decision - no one else can make it for you. Upvotes: 3 [selected_answer]
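The break-even comparison from the last bullet can be scripted in a few lines. The sketch below is purely illustrative and not part of the original answer; every number in it is a made-up placeholder, and it ignores tax, pensions, discounting and salary uncertainty.

```python
def cumulative_earnings(years, start_salary, annual_raise):
    """Total earnings over `years`, assuming a fixed absolute raise each year."""
    return sum(start_salary + annual_raise * y for y in range(years))

def break_even_year(stay_salary, stay_raise, phd_stipend, phd_years,
                    post_phd_salary, post_phd_raise, horizon=20):
    """First year at which the PhD path's cumulative earnings catch up, or None."""
    for year in range(1, horizon + 1):
        stay = cumulative_earnings(year, stay_salary, stay_raise)
        if year <= phd_years:
            phd = phd_stipend * year
        else:
            phd = (phd_stipend * phd_years
                   + cumulative_earnings(year - phd_years, post_phd_salary, post_phd_raise))
        if phd >= stay:
            return year
    return None

# Entirely hypothetical annual figures:
print(break_even_year(stay_salary=150_000, stay_raise=5_000,
                      phd_stipend=20_000, phd_years=4,
                      post_phd_salary=200_000, post_phd_raise=8_000))
# prints 16 with these made-up numbers: the PhD path only pulls ahead after year 16
```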
2019/10/15
1,486
6,594
<issue_start>username_0: I just had a job interview (permanent lecturer position not connected to a grant) and, immediately afterwards, I found out that the person who is the group leader and responsible for recruiting is the sibling of one of the candidates. I made sure that this is actually the case. It is not a suspicion; it is a fact. Could this be justified under some circumstances? This seems ridiculous. Should I raise a complaint? This is in the UK.<issue_comment>username_1: Interviewing a family member is not permitted. Yes, you should raise a complaint. Do it politely, of course. But in the end, you have to ask yourself whether or not a work group that would even try to get away with this is one you want to get involved with. Upvotes: 6 <issue_comment>username_2: Is it ridiculous, maybe. Is it justified, yes. Should you raise a complaint, no. It is ridiculous when HR gets in the way of a group leader hiring the person that the group leader thinks is most qualified. HR would probably prefer the group leader to not be involved, but then you lose a key person, possibly the only person, who can evaluate the technical competency of the candidates. Realistically HR will probably be happy with an independent observer of the interview and might even be happy with documentation that the selected candidate is the best candidate. Either the process has been approved by someone, in which case you are not going to get anywhere and will just come off as a complainer, or it hasn't been approved, in which case you have just made an enemy of the group leader, which is not helpful either. Upvotes: -1 <issue_comment>username_3: From my personal experience, very unsubstantiated and anecdotal, family hires and similar conflicts of interest do still happen in UK universities. It is very frustrating and demotivating for other candidates, particularly when your skills match the job description well and you've put a lot of time and effort into preparing the application and the interview presentation. Such conflicts of interest are of course unethical and potentially illegal, but it is not easy to prove a case, particularly if HR are inclined to turn a blind eye towards the problem. If you want to raise a complaint, take care not to reveal your identity to your immediate line manager (Head of School) and explicitly request from HR that they maintain your anonymity, particularly if you are still on your probation period. It may be a good idea to talk to your local unions. Good luck. Upvotes: 4 <issue_comment>username_4: You describe a situation as you see it, but you have no way of knowing or understanding what's going on behind the scenes. My own assumption would be that if you know about this, then others know about this, including members of the home department that work with the group leader in question every day. There is an obvious appearance of a conflict of interest, but there are ways of managing conflicts. My own assumption, which can certainly be wrong, is that this is so obvious that they must be managing the conflict. So, should you complain? I don't know laws or procedures in the UK, but I'd first ask, who do you plan on complaining to? Next, keep in mind that you don't know the whole situation; a complaint might be impugning the integrity of a person when the whole conflict might be handled behind the scenes already. This is a fairly serious accusation. You'd also be impugning the standards of the department, questioning whether they'd let such a conflict stand unchecked.
Next, let's say the conflict is unmanaged. Would you really want to work in an environment where such obvious conflicts are left to stand? Not getting the job may be a blessing. With all this in mind, and the possible harm to your own reputation this may cause, I wouldn't recommend complaining. There may be nothing improper going on, and if there is, it just doesn't seem like you have any options that can improve your situation. I'd move on, and see if you get the job offer. Upvotes: -1 <issue_comment>username_5: Background ---------- Some countries, and I don't know about the UK specifically, have requirements at state-funded universities when it comes to hiring. One such requirement is that someone cannot be hired directly. The position needs to be opened and announced in some public medium, and kept open for at least X time so everyone has time to apply. Then interviews, etc., are carried out and the best candidate is hired, if one is found. Another requirement is that those with a conflict of interest with any of the candidates should state so. And hopefully excuse themselves from the hiring committee in order to not influence the result, independently of whether this is mandatory. In practice, sometimes a candidate has already been chosen, in which case the position requirements are *tightened*, if possible, to ensure their champion is the right fit for the position. This leads to positions which have already been filled being opened with the sole purpose of meeting hiring regulations. Consequences ------------ This is the most important part. I'd like to speak of consequences. Raising a complaint brings you little or no benefit, but might gain you an enemy: * The end result may or may not change, but complaining will ensure you're not hired. The group leader is the accused here; it sounds unlikely he'll hire you into his group after your complaint. * The group leader might affect others' impression of you, not only in this particular group but in other locations as well where he might know someone. If he didn't bother excusing himself from interviewing his sibling (which is morally reprehensible, if not illegal) then this seems a possibility. * Between two prospective candidates of equal competence, I'd guess the one most likely to be hired is the one not known to be a *troublemaker*. There might be some degree of privacy when presenting a complaint, but I don't know how these are processed. Meaning there's a chance the accused party would not know who presented the complaint. In this case in particular, you're simply pointing out something that is easily proven. You don't have to present a lengthy justification. Complaint without complaining ----------------------------- A simple email to the right person *asking* whether such behavior is allowed by the institution's regulations might be all that is needed for someone to look into it. You could add a note stating you'd like your identity to be kept private for fear of reprisals. But without an official complaint it's possible they'll ignore it. Upvotes: 2
2019/10/15
1,139
5,128
<issue_start>username_0: I've served as an area chair for several conferences. Following the review period, the program chairs asked the area chairs to submit to them an annotated list of problematic reviewers. Mostly these were reviewers who failed to submit a review at all, despite repeated reminders, but we were also asked to identify reviewers who submitted exceptionally low-quality reviews, such as reviews that were so short/vague as to be useless, and reviews whose factual errors were so glaring that it was obvious that the reviewer either lacked even the bare minimum subject-matter knowledge, or else didn't bother reading the paper at all. The purpose of collecting these lists was to construct or to supplement a blacklist for use with future program committee invitations. That is, people on these blacklists would not be invited to review for future conferences operated by the same scholarly society. I am wondering whether the coming into force of the EU/EEA [General Data Protection Regulation](https://en.wikipedia.org/wiki/General_Data_Protection_Regulation) has any implications for these blacklists. In particular, is the blacklist itself considered "personal information", and is it even lawful to compile such a list? Can the reviewers mentioned on this list use the provisions of the GDPR to force its maintainers to disclose their presence on the list or even to remove them from the list? How about the area chair comments used to construct the list—are these something that blacklisted reviewers can force the conference organizers to turn over to them? I realize that this is a question about legal principles and practice, and so might be also (or maybe even better) appropriate for the [Law Stack Exchange](https://law.stackexchange.com/) or a lawyer. But since the question treats a uniquely academic scenario, I'm hoping that someone here might belong to a scholarly society that has already looked into the matter, and can therefore provide a brief summary for the present academic audience.<issue_comment>username_1: You cannot force editors to use de facto bad reviewers if they know about them. However, the actual problem with today's ease of data collection is that you can easily accumulate lots of (possibly low-quality and unvetted) data; a blacklist being sent around can constitute slander (or whatever you call it in your jurisdiction), even independently of GDPR. IANAL, but in my understanding, GDPR might be used by a person suspecting blacklisting to force the organisers to reveal its existence and one's presence on it, even if its transfer from one year's organisers to the next year's organisers could be justified under its rules. With a cascade of further legal consequences in its wake which are not just limited to removal from the list. I do not know if they can have a look at the blacklisting comments, but if the comments are sent around with the blacklist between the committees of successive years, I estimate that they fall under GDPR. Having some basic rules of data hygiene (and I consider GDPR as such) is not an entirely bad thing. Best is to leave yourself extra time to get additional reviews if reviews are not up to scratch, and don't send blacklists around. **TL;DR** GDPR, as I understand it (IANAL), permits people to ask for their presence on a blacklist to be revealed, especially if it is institutional (i.e. sent around between organisation committees, not just for individual use of the particular editor). Upvotes: 0 <issue_comment>username_2: The GDPR of course doesn't cover all possible scenarios.
However, it's fairly logical. You're **not** allowed to have and use personal information, **unless** you can point to at least 1 of 6 specific reasons. "Consent" is the best-known, because we've all been asked hundreds of times in the last year. But obviously that's not going to fly for a blacklist. No, the reason which can justify a blacklist is (6e): "processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller." Academic publishing is probably not done "in the exercise of official authority", but it definitely is "a task carried out in the public interest". Having established that there's a reason, most of the other GDPR requirements become technical. There's a right to ask for *factual* corrections, but if the facts are in dispute you only need to record the objection. You are not forced to decide. Now the biggy is of course the "right to be forgotten"/right to erasure. This is not an unconditional right. The person invoking this right must state a reason, and this must hold. "Withdrawing consent" is a reason, but it only applies when consent was originally necessary, which is not the case for the blacklist. "No longer necessary" might make sense if someone retires entirely, but we know that in academia retirement is often not a black-and-white matter. The good thing here is that you can reason about actual removal requests if and when they arrive. There is not much preparation required other than the general GDPR rules for good recordkeeping. Upvotes: 2
2019/10/15
1,200
5,244
<issue_start>username_0: I'm a PhD student currently applying for postdocs and while I have secured my first and second letters of recommendation from faculty, the professor who had agreed to write my third has unfortunately passed away recently. I don't have a fourth professor sufficiently familiar with my research to write a strong recommendation letter. If I were to ask one, it would be one of those "good but also kinda filler" letters written by someone who knows me but doesn't *know* me. Alternatively, I could ask a postdoc I worked with closely on several projects for a letter. He would be able to write a strong letter, but of course that's still not ideal because he's a "mere" postdoc. Which one would you recommend? A good letter written by a professor mostly as a favor, or a strong one written by a postdoc? (There are [similar questions](https://academia.stackexchange.com/questions/20274/can-i-ask-a-postdoc-who-has-closely-supervised-my-research-to-write-me-a-letter) on this site, but those concern Master's students requesting letters from a postdoc when applying to a PhD position. Since a postdoc is still above a PhD student, the context is different, so the answers there don't necessarily apply here.) EDIT: I'm applying for postdocs in Europe and North America.<issue_comment>username_1: Check the policy of the place for which you are applying. My guess is that there are no hard rules as to who may or may not write a recommendation letter. If by any chance they require the writers to be faculty members at universities, you know your answer. If they require the writer to hold a PhD (which I still doubt), then most likely a post-doc that knows you well and can vouch for your work and interpersonal skills is a decent pick. Do consider that former bosses, even those you had at internships, are good sources to vouch for your dedication, discipline, interpersonal and communication skills. They just won't have a strong case about your academic prowess. In your case, since you already have recommendations from 2 other professors, I'd guess there is enough evidence of your skills as a researcher, while recommendations also have a focus on these other qualities. Also, keep in mind that some people might not trust a letter that is apparently written by a friend of yours, with whom your relationship is mostly personal, even if you've worked together at some point. A trustworthy letter should be written by someone who has a predominantly professional relationship with you. Upvotes: 1 <issue_comment>username_2: In every situation I know of, a postdoc would be allowed to write you a letter. In general it's not a good idea to get a letter from a postdoc when you're applying for a postdoc. But that doesn't mean it's never the best option. A strong letter from a postdoc isn't going to actively *hurt* your application, it's just not going to help you the same way a strong letter from someone with more experience would. The main problem is that a postdoc doesn't yet have the perspective to see what makes people successful in the long run and doesn't have a broad enough pool to compare you to. It matters here whether the postdoc has great work and is relatively well-known, and it also matters how far out from the PhD the postdoc is. There are strong senior postdocs in my field whose opinion I would give a lot of weight to. I don't know all the details, but it sounds like asking the postdoc might be a reasonable decision in your unfortunate situation.
All that said, I think your question misunderstands the purpose of letters in a way that's quite common among graduate students. It's not necessary that your letter writers all *know you* so long as they *know your work*. It's better to have a letter from an important figure in your field who understands your work than to have a letter from someone local who knows you really well. I always encourage graduate students to try to get one letter from someone at a different school. If you gave a talk somewhere and had a productive conversation afterwards with the person who invited you, that person might be an excellent choice for a third letter. It's pretty late in the process right now, so it might not be possible to get a third letter from an outside expert in a timely fashion, but in general it's something graduate students should be looking for. You should not be getting letters from local professors who know you from class but who can't understand your work. Upvotes: 4 [selected_answer]<issue_comment>username_3: Why not ask both the professor and the postdoc for a letter? The number of letters required for a job application is often a minimum but not a strict maximum - in case this is unclear you can check with the places where you're applying if an extra letter will be permitted. Mention the unusual circumstances, which make the idea seem very reasonable in my opinion. Even if the answer is that this is not allowed, no one will think any less of you for asking. As for your literal question, yes, a postdoc can certainly write a letter for a postdoc application. Though it's generally true that such a letter will carry a bit less weight than an equally strong one from a more senior person. Good luck! Upvotes: 1
2019/10/15
1,721
7,632
<issue_start>username_0: This is something I've been curious about for a while, and there's little public or explicit information about it. It's obvious that some journals are considered very prestigious and other journals something less than that. It seems most seasoned professors have quite specific opinions on the matter. But how did things get this way? It seems that almost all journals superficially look and operate about the same. I work in mathematics, but the question of course applies to all fields. In some cases it's understandable. For example, the "Journal of the American Mathematical Society" would be expected to be a top journal since it's the flagship journal of a major mathematical institution. On the other hand, there are journals that are considered "top journals" without being attached to an important institution, having a long, distinguished history, or any other obvious reason. How did such journals acquire their reputation? I'm hesitant to start naming specific journals in this question, though it would be interesting to see some representative examples in the answers. Reputation seems very much set in stone in the short term. And yet, over time, some journals rise in reputation while other ones decline. What causes this? One possibility is that a journal hurts its reputation by behaving in an unethical or irresponsible way, but this would not be the norm. To phrase this another way, I imagine just about every journal wants to improve its reputation. So what are the tools in the toolbox that a journal, or a community of academics who are invested in its success, can use to achieve this? I'm sure part of the answer is "the editors". But what do they do? Is the work of a journal/editor deeper than "wait for papers to get submitted and then accept the best ones"? I'm sure having a prestigious editorial board helps, but in my experience even the "pretty good" journals still have top mathematicians as editors.
These journals were founded in 1997 and 2007 respectively and have already acquired reputations for being amongst the very strongest journals in their fields. The reason for this, I am sure, is that they both regularly publish extremely interesting papers. How do they attract such strong papers? I'm sure that part of the reason lies with their incredibly strong editorial boards. Having an editorial board full of very strong, well-known mathematicians is likely to result in a steady stream of strong papers since one of the most important factors to consider when choosing a journal to submit a paper to is the editor that will handle the paper. If I have a paper that I think is very strong then I'm not going to want to send it to an editor that I've never heard of and that might not even know what sort of people will make good referees. Instead I'd try to find a journal (at more or less the appropriate level) where there's an editor that I know works in my area and whom I suspect knows plenty of people that will do a good job of refereeing my paper. Thus, if your journal has a board full of well-known people that work in trendy areas then it's not too much of a stretch to imagine that before too long you'll be getting papers submitted that concern trendy areas, at least some of which are very strong and are written by other well-known people in the area (which in turn means that people citing their papers are likely to consider submitting their papers to your journal). Upvotes: 5 <issue_comment>username_2: As already discussed in another answer, journals that get a reputation for publishing good-quality papers are likely to receive more good-quality papers. However, you ask what a journal can do to influence this, and stimulate an upwards movement in the journal's reputation. The answer to this must be "make the author's experience with *my* journal better than their experience with the competitors". Then, an author with a paper to submit is more likely to prefer my journal, which means I'm likely to get a larger fraction of the submissions in the field, which allows me to be more picky and cream off the 'better' papers. So what are the things that authors care about? I would suggest (at least) the following: * Efficient management of the reviewing and publication process, with editorial decisions received in a reasonable time-frame and prompt publication of accepted manuscripts; * Helpful, fair, balanced reviews from people who understand the context of the submitted work; * Clearly-communicated editorial decisions that are justifiable based on the reviews received; * Editors who are prepared to 'make a decision' in cases where there is disagreement between reviewers (or between reviewers and authors); * A copy-editing process that does not introduce unnecessary errors or inconvenience; * Any fees (page charges, open access, etc.) at a 'reasonable' level. In many cases, authors' views on a journal will also be coloured by their experiences as a reviewer. Again, similar considerations apply. Note also that journal reputation may also reflect the fashions of individual research fields. If Basket Weaving suddenly becomes flavour of the month, with lots of activity, then the reputation of the International Journal of Basket Weaving Studies is likely to grow rapidly as a result.
Upvotes: 3 <issue_comment>username_3: Perhaps one answer not currently covered by other answers is that journals actually don't mind carving out a niche even if the niche is just "somewhere niche papers will be published", i.e. a paper that is strong mathematically but produces a narrow result needs somewhere to go and there's no shame in the journals that accommodate such papers. Another answer (which the OP avoids suggesting!) is that by behaving in an unethical way journals *raise* their reputation, e.g. a big-name editor at a 2nd tier journal A says to big name X: "hey, send your next few papers here, we'll treat them well" and then people see that big name X is sending all her stuff to journal A, thus raising the image of journal A. Upvotes: 1
2019/10/16
663
2,916
<issue_start>username_0: I am applying for tenure-track positions in R1 (say, top 50) universities in Computer Science in the US. When writing the cover letter, should I mention teaching experience and competence? Or should I leave it for the teaching statement?<issue_comment>username_1: At an R1 university you will still need to teach. Mentioning teaching in the cover letter is certainly appropriate, but a few words will do. It needn't even be as much as a full sentence since you get to expand in a teaching statement here. But a sentence that simply says you have wide experience in both research and teaching is probably enough for the cover letter. For a different sort of university teaching would be the most important thing to mention, but for almost any academic job, don't try to seem too focused on any one thing. The job has multiple facets. Your application will be evaluated by people with different viewpoints in most cases. Upvotes: 2 <issue_comment>username_2: I've been on the faculty recruiting committee of my American R1 top-whatever computer science department for more than 15 years, including three years as committee chair. (But my experience is only in one department, so take this with a grain of salt.) **To first approximation, nobody will read your cover letter.** It's quite common to get faculty applications that have no cover letter at all, or that include a plain-text email "cover letter" that reads "I'm applying for an assistant professor position; I've uploaded the requested documents and the contact info for my references. Kthxbye!" I have never seen this affect discussions about who to interview or who to hire. Nobody ever notices. Nobody ever cares. All the information we care about is already in your other application materials, your recommendation letters, and your papers. In particular, recruiting committees already expect to see a detailed description of your teaching experience in your CV and in your teaching statement. That's where everyone puts it, so that's where we look. And we'll look there regardless of whether you mention it in your cover letter. So why would we look at your cover letter? In my experience, cover letters are only useful if they contain information that potentially changes the hiring *process* — like "I am applying for a tenured position at the rank of full professor." or "Please keep my application confidential." or "My spouse is also applying to your chemistry department." Cover letters do play a more important role in some departments, because they want convincing evidence that you are seriously interested in a job *there*. After all, if you're just shotgunning your application to every ivy-colored building in the country, then interviewing you is probably a waste of their time and money. But this is much less likely to be a concern in a top-50 R1 department. It's never been a concern at mine. Upvotes: 3
2019/10/16
3,818
16,637
<issue_start>username_0: I'm in my final semester of a Computer Science degree at University and we have general education requirements. I'm in a Public Speaking course with an interesting professor; let's call him Professor Johnson. He doesn't seem to be actually teaching any material or guiding us on any topics that are related to the book or curriculum we're assigned. He was obviously given all of the online course material, including our online quizzes and study guides. Most of the quiz grades are handed to students, whether we completed them or not. Some are not even available. He is on a one-year contract with the university, and was hired a mere two weeks prior to the semester beginning. He doesn't know how to use the learning management system and seems to have no interest in learning it. I inquired about an online quiz, and he had no idea what I was talking about or how to get to it. Considering I'm the only senior, I offered to stay after and show him around the site, suggesting that he attach due dates to quizzes so the students will actually do them on time with the reading, and that he can decide not to accept late assignments if he so chooses. He nodded and hasn't made any changes since. We spend our days in class talking about no particular topics, nothing about the book, and no PowerPoints in sight. He rants about our news habits and how much water we like to drink, and we spend 25 minutes each period doing breathing exercises. The only related material is a monthly speech that we must give, with basically no guidance. My question is whether I should speak to the department chair about his behavior. We're halfway through the course and the LMS hasn't seen a single bit of attention, he doesn't respond to emails on any topic, and he asks the class what we want to discuss for the day. It's obviously very freestyle and he's not following any course material. I'm finding it harder and harder to attend class, and it is increasingly frustrating that I have spent money on this course. He seems very financially worried about his contract being renewed, as he has literally brought this up in class. Should I leave an honest course review and leave it at that, let his own fate decide whether his contract is renewed, or speak with the department chair about my concerns?<issue_comment>username_1: Yes, you should raise this with the department, either with the Undergraduate Director or the Chair. The problem is not Professor Johnson's unfamiliarity with the LMS; as a new faculty member, that is understandable. But you should restate what you said here about him not teaching the assigned material. You mention online quizzes that were given to him. Are they being offered and graded? If so, is the grading fair? If not, what assignments will your course grade be based on? If the lessons are not being taught, students cannot complete the assignments successfully. That is the kind of issue the department's administrator will be concerned about, because they do want you to get a serious education in every class. And they know that the grading scheme in the syllabus, as well as the learning objectives, are contractual issues that have to be fulfilled. In talking to the administrator, stay focused on teaching performance issues only and avoid anything personal. You are supposed to be learning something critical about the history and practice of public speaking, and you should have an opportunity to demonstrate that learning fairly, through tests or papers. If that is not happening, the department's administrators need to know.
The sooner the better. Upvotes: 5 <issue_comment>username_2: > > He is on a one-year contract with the university, and was hired a mere two weeks prior to the semester beginning. > > > This situation suggests that the university was unable to hire anyone better prepared to teach. They probably know that already. I doubt speaking to the department chair will make a difference. Upvotes: 5 <issue_comment>username_3: It seems this person's public worries about their contract have given you the impression that their job is in your hands. You shouldn't concern yourself with this. What you clearly wish to do is give *feedback*. We can hope it gets to them and they act on it sooner rather than later, but you are one student of many, and I suspect this professor's teaching style will be fairly popular with many others. I'd suggest discussing your concerns with your own academic advisor (you can ask them about whether to do something else next). As for your particular concerns, issues like everyone getting free grades for homework even if they didn't try, or quiz deadlines not being as early as you think they should be, are not really things to complain about; you are free to do the material as early as you see fit, and to work as hard on it as you want. So I'd suggest sticking to the more serious concerns, like not receiving a fair grade (if this were true), or not covering the course outcomes listed in the catalog (assuming this were true). Issues like staying on track with the syllabus, and giving prompt feedback on students' performance, are somewhere in the middle; we should always strive to do these things better, so feedback is certainly called for. But they probably aren't emergencies that will make administrators jump to action. Upvotes: 2 <issue_comment>username_4: Poor teaching harms not only you but all students and all future students. It harms society, since more of its members are poorly trained and can't do their jobs and tasks properly. This can be annoying (the clerk can't fix your computer) or even dangerous (the doctor prescribes the wrong medicine). Please report continuous bad teaching if it is objectively bad (wrong information, almost no course material, etc.), so the teacher can improve and the students can be properly educated. Of course it is best to talk to your professor first. But you already did. So maybe try it again (this time with specific incidents). If it does not work, try to escalate this. Please gather all the hard evidence you can get (e.g. screenshots of the missing quizzes, emails, etc.) and write an objective review to the department chair. Something like your post, but with specific incidents: > > On October first, we had to do X. The related quiz was not > prepared [1]. These links to course material were not reachable [2]. Our teacher did not > provide printed learning material for this topic. He did not have any PowerPoint > slides prepared. Here is a transcript of his speech [3]. It was > completely about drinking water and lasted for 30 minutes. The official topic of this lecture, announced on the course website, was > "The impact of nuclear power stations on the US electrical grid [4]". > > > This way you have specific problems which can be addressed and, even better, evidence to prove poor teaching. Make screenshots of everything related. One last remark: if a teacher / professor does not want to enable time limits, it may be part of his teaching style: "Do it whenever you have time for it". 
And since everyone is an adult, the attendees should know what is best for them. This is not high school, where you have to motivate people to learn. Upvotes: 1 <issue_comment>username_5: SE won't let me comment with a 1 rep, so I thought I'd add this answer, to provide context and perspective. I feel your pain, and frustration, and not defending the behavior of your professor, but I'm curious if he's on contract instead of being an actual employee? The reason I ask is because I was an adjunct professor for two years and was an expendable contract employee of a certain US state's community and technical college system years ago. They hired contractors because they were cheaper - they paid us around $1000 per course per semester. I received just about zero guidance in teaching my first two classes for this school. They asked me to teach Intro to Computers and some sort of database design course. The first was admittedly a required general ed course and probably half the class was just not interested in being there. What made it worse was the book was just not very good at all. It was fact after fact after fact and it seemed like no one was even reading the book. Those that did probably found it not very engaging - I know I didn't. Sure, there were some interesting facts, but it was just too broad. Trying to give homework and create tests with this book was a difficult scenario because each chapter had *so* much data in it. I think at the time, I tried to set it up so we had tests after two chapters, and that made it worse on the students that didn't care to read the book. So, that all said, the students in the course that didn't want to be there, didn't read the book, didn't come to class, didn't study for the test, and really needed to be back in high school learning how to read and write just blew me away at how poorly they performed, as a student. I was shell-shocked so much that I recall going to open book tests (I can't recall if I started them out with open book tests), but again, there was really too much material. I also started giving them study guides which was basically the material from the book in outline form that they could look at - and refer to during the tests - so that we could at least get more students to pass the class. In hindsight, I didn't handle all this well at all. I talked to a fellow instructor that had been teaching for a while and she gave me some pointers about things like quizzes and the like (so they had a hint about what was going to be on the test) but that wasn't in the syllabus. I should have probably talked to the department heads about how to go about handling this situation, but I didn't. The database design course went great - the students *wanted* to be there, because most were probably going on for a computer-related, or possibly a CS, degree. We designed a school database for housing student records, etc. I got some good feedback and thought it went well. But the required course, ugh. It was painful for me and for them and I had no idea how to fix the problem (at the time). When I stepped into that classroom for the first time, those students *owned* me. They ended up manipulating me almost to the point of helping them pass and I really regret that in hindsight. It wasn't a good experience for any of us, but I did learn that many of them just wanted a passing grade so they could move on. I didn't do a good job - at all. I was much younger and less mature and had no business teaching that class at the time. 
I wasn't prepared to handle the situation with those that didn't care about academia. I should not have been asked to teach that intro to computers class again, but there was no way to objectively determine I was doing a bad job. We even had a well-known and fine instructor from a bigger university come and speak to the professors for tips and tricks about how to teach better. He was an excellent and engaging speaker and I'm certain students loved him, but he was a seasoned veteran. He even told us that student reviews that slammed instructors were mostly because they probably performed poorly in the class or had a poor or failing grade. So, we just kind of ignored poor reviews, because we had some good ones too. I think my own personal saving grace was that I grew relationships with some of the students in the intro to computers class, not because I was doing a good job teaching that class. I think Professor Johnson could be somewhere in my camp. Maybe he shouldn't be teaching. Maybe he needs some direction (clearly, as you've indicated). Maybe he just needed a job and the university needed someone to teach it (maybe because the instructor originally assigned to the class had to bow out). Maybe he's been conditioned by the students so much in his previous experiences that he feels like he *can't* teach anymore, so he just does something else. Maybe he's a really poor instructor. I'm providing this answer just to say there's maybe not a lot you can do about this situation. If you talk to the department head, they may tell you he's the instructor and the course content is at his discretion. If it gets back to him, he may crack down and it could affect other students in the class. If you leave a poor review, they may shrug it off on the assumption that a student probably ended up with a poor grade and tried to be vengeful. Upvotes: 3 <issue_comment>username_6: The professor is being treated as a day laborer, as most adjuncts are these days; he lacks direction because he doesn't care, does not have to care, and neither does the university. He's being treated as a disposable napkin. His contract ends in one year; what does he care? There was a time when adjuncts were supposed to be professionals with professional lives outside the university, such as businessmen or experienced engineers, and they would be invited to give courses to share their experience in the real world. This was in contrast with tenure-track faculty, who were required to give exclusive, permanent dedication to the university and therefore had vast teaching experience but lacked experience in the real world. Both types of professors complemented each other. Nowadays universities just hire inexperienced people as adjuncts and use them as ad-hoc tenure-track replacements, just to save on salaries, pensions, and commitment. Upvotes: 4 <issue_comment>username_7: I notice one of your complaints is that the professor fails to use quantitative material. Obviously, natural sciences, engineering, and technology demand quantitative material, because they deal with physical materials which can be precisely mathematically understood. Public speaking, however, deals with human experience; are you going to model speech engagement as a dynamical system? As a series of differential equations or an ensemble of neural network models? The only quantitative material I can think of is statistics based on polls and surveys, and that just seems unreliable to the point of having no real-world benefit or consequence. 
Your professor has probably thought about this already, unless you're also calling him a dilettante who doesn't care about his field. Consider, if you will, the vital importance of breathing technique for the development and sustained projection of a powerful, rich, charismatic voice. Too many speakers do not develop the confidence and physical ability to deliver engaging speech, which hampers their impact and makes it harder for an audience to take them seriously, regardless of their credentials or actual knowledge. What does a textbook or online quiz have to do with public speaking anyway? The core competency required is the ability to engage, entertain, and relate to an audience, even (*especially*) in an academic (undergraduate-wise) context; otherwise you limit the transfer of ideas to people who appreciate dry, technical delivery, and ignore the masses. He flits about and discusses random topics with no discernible pattern, ignores the book and doesn't care about quizzes, and seems to provide you with unlimited freedom to design your speeches. If I were you, I would love that freedom and probably overthink the significance of every random topic, and then I would go off and design the best speech I could possibly give on a subject which I genuinely enjoy, and aim to stir the audience into experiencing that same enjoyment. Perhaps the solution is not in changing his behavior, but in changing your perspective, and searching for the underlying purpose in his seemingly aimless behavior. I mean, if results validate approach, then look at the current president of the US -- regardless of politics, he knows how to engage his target audience and inspire their undying loyalty. Upvotes: 1 <issue_comment>username_8: > > Should I leave an honest course review and leave it at that, let his own fate decide whether his contract is renewed, or speak with the department chair about my concerns? > > > Talk to your **student union** about this - preferably to the representative from your faculty/department, preferably from your program-year in that faculty/department. They should have both a better idea of what can be done and the capacity and resources to do it. (Assuming that your student union is not dysfunctional, that is.) If you have the misfortune to study in a university without a proper student union (see @username_1's comments), then - I'm sorry, you're out of luck. Consider organizing at least some sort of student action group among fellow students taking the class, on this matter. Upvotes: 1
2019/10/16
727
3,201
<issue_start>username_0: A common problem I encounter every time I do a literature search on a new subject is determining a reading order for papers. It feels akin to [jumping into a prolific sci-fi fantasy series](https://scifi.stackexchange.com/questions/1381/what-order-should-i-be-reading-the-discworld-books-in/1648), but since there usually don't exist enthusiastic fan bases for a random subset of papers on numerical methods for PDEs, I'm always struggling to figure out how to get started. My default is chronological order (oldest to newest), but this frequently poses problems if the oldest paper is using outdated notation or technology. A newest-to-oldest reading order has the problem that it might assume background information from previous papers, which leaves me feeling like I have to keep jumping around texts and never really get a coherent picture of any of them. Does anyone have better ideas for determining an "optimal" reading order for related academic papers? Perhaps a simple metric for ranking importance (but not citation counts, since this would skew towards older papers). *Note that when I say "read a paper," I'm already employing a multi-stage technique where I read the abstract, conclusion, and introduction, then skim the text, and afterwards decide whether it's worth downloading and reading in detail. So the set of papers I have now, I've determined are worth really reading in detail.*<issue_comment>username_1: I think a general answer to such a general question is not possible. It depends on those papers, and you don't know *how* it depends until you have some understanding of each of them and how they fit together - the progression of the main idea(s). But I also think you have explained a pretty good general strategy: get a general idea of each and then plunge in where it seems needed. But it is also true that you don't need to *completely* understand *every* idea in *each* of the papers. You started with one of the papers for a purpose, I suspect. Read enough, and understand enough, to meet your purpose. But also, take a lot of notes as you read. If you read on paper you can easily make marginal notes for some of it. But it is also good to write brief summaries of each paper, keyed to your original purpose. Upvotes: 1 <issue_comment>username_2: Not sure if this applies to your particular field, but in mine I used to have the same problem. What I discovered (too late in my PhD) is that textbooks and review papers tend to be very useful and can point you to the right papers, so that's something worth trying if you haven't already. I also found it more useful to start with more recent papers and then follow the chain of references back in time on aspects that I found important or interesting. Upvotes: 2 <issue_comment>username_3: I often struggle with the same question. One approach which I find often gives good results is to start by skimming through all the papers (typically in chronological order) and use this first reading to decide on a more appropriate reading order, typically starting with the ones which seem more introductory or survey-like and ending with the more detailed ones. Upvotes: 3 [selected_answer]
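As an illustration of the "simple metric for ranking importance" floated in the question, here is a minimal sketch (not taken from any of the answers above) that scores each paper by citations per year since publication, so that raw citation counts do not automatically favour older papers. The paper list, field names, and reference year are hypothetical placeholders; real citation counts would have to come from whichever database you normally consult.

```python
from dataclasses import dataclass

CURRENT_YEAR = 2019  # reference year; adjust as needed

@dataclass
class Paper:
    title: str
    year: int
    citations: int

def age_normalized_score(paper: Paper) -> float:
    """Citations per year since publication, so older papers are not automatically favoured."""
    age = max(CURRENT_YEAR - paper.year, 1)  # at least one year, to avoid division by zero
    return paper.citations / age

# Hypothetical reading list; replace with your own set of papers.
papers = [
    Paper("Classic method paper", 1995, 1200),
    Paper("Recent refinement", 2016, 90),
    Paper("Survey of the field", 2012, 400),
]

# Highest score first: a rough 'importance' ordering that a reading order
# can then be built around (e.g. surveys first, technical details later).
for p in sorted(papers, key=age_normalized_score, reverse=True):
    print(f"{age_normalized_score(p):6.1f}  {p.year}  {p.title}")
```

Sorting by this score only gives a starting point; as the answers note, the final order still has to be adjusted by hand once you see how the papers build on each other.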
2019/10/16
1,352
6,052
<issue_start>username_0: I am a second year STEM PhD student at a large R1 public university in the United States. When I chose to come to this program, the department told me that I would have a teaching load of 3 classes per year. That is, I would be expected to teach two classes in one semester, and one class in the other semester per academic year. The language in our contracts is not in terms of classes taught or credits taught, but a vague statement to the effect of "teaching xx hours per week". Since a statement in terms of hours per week is obviously vague, I inquired about this when I choose to come to this program, it was explained to me that it was simply 3 classes per year and that the load is not as high as it seems on paper. Other graduate students had the same experience. Now, they have changed their promise and are now expecting me, and some other graduate students as well, to teach four classes per year. We are upset about this because we feel as if the department intentionally misled us with the language of the contract. Since the department is not increasing our stipends, we also feel they are exploiting our cheap labor. This has also led to resentment among graduate students since some did not get chosen to teach extra classes, and it was essentially random who got stuck with a higher teaching load. My question is: Is it reasonable to ask the department to increase our stipend or compensation for this additional work? If it isn't, what is reasonable to ask of them? Should we just get over it?<issue_comment>username_1: I think this is unfair, but there is probably nothing that you can do about it. In my STEM department, TAs all nominally are responsible for the same number of work hours (twenty hours per week), but with different courses and different grading responsibilities, it is inevitable that there will be some iniquities—including some students teaching more sections that others. Sometimes students unfortunately get saddled with extra work unexpectedly. Mostly, the students grin and bear it, knowing that we try to have the extra work average out over time (see below). As a practical matter, funding for teaching assistantships at an R1 institution mostly comes from outside the department (probably from the dean's office). There is a set number of fully-supported TA slots available, and the department has a certain number of courses that it needs to have covered. There may be a bit of extra money in the departmental budget, but if the department suddenly and unexpectedly increased the teaching load for some of its TAs, that suggests that the department may be in a precarious financial position, with little to no extra money available to support the TAs who are working harder. It will not hurt to ask for additional money. However, even if there was enough money to pay the overloaded TAs a bit extra, the department would probably say no. If they did start paying some TAs more than others, that could provoke even more ill will and dissension among the graduate students; the ones who were not assigned extra sections would be (quite reasonably) angry that they were not getting the same opportunity to make extra money as some of their fellows. So, I think it is quite unlikely that you will get any extra funding for teaching one extra class section per year. 
What you can realistically ask for is to have the extra teaching load spread around between different TAs; this year, maybe half the TAs have to do extra duty, but next year, those additional courses will fall on the shoulders of the other half. The department may already have it in mind to do this, but it would not be a bad idea to get it stated as an official policy goal to handle things this way. One final thing that might be relevant. The amount of teaching that faculty members do in R1 STEM departments is far from uniform. In biology, chemistry, and physics, a full load for professors is typically teaching one or one-and-a-half courses per semester. In other STEM areas (geology, mathematics, neuroscience, etc.), the course load for faculty can be about twice that. That means that different kinds of STEM departments have different amounts of slack in what the faculty are available to teach. Depending on your field (and the local departmental culture), it might or might not be viable to have professors take up some of the extra course responsibilities. It is too late for that to change for this semester and maybe the next semester too, but things might be different next academic year. (Again, however, this is probably something that the departmental faculty have already thought about and discussed, so any complaint you make is unlikely to make a great deal of difference in what ultimately happens.) Upvotes: 3 [selected_answer]<issue_comment>username_2: I will guess that xx is something like 20, which is probably the maximum you are allowed to work before the university has to provide you regular benefits. My understanding at my state university is that if graduate students are being made to work more than 20 hours in a single week (not even just on average), then there is a serious violation of university policy, and it needs to be addressed. If the situation is that, with your previous teaching load, graduate students were working less than the specified number of hours per week in practice, and are now forced to work xx hours per week, there is nothing you can do. However, if you are being forced to work more than xx hours per week, or perhaps you can't do a good job working only xx hours per week, then you should definitely do something. My first suggestion is to discuss this with your fellow grad students (and any grad student association if available), and document how many hours people are working per week. Then bring this information to the chair and graduate director. Hopefully they will work with you to find some sort of solution. If not, then you should explore other options like speaking with the dean, the ombudsperson, or, if necessary, the media. Upvotes: 0
2019/10/16
954
4,140
<issue_start>username_0: Or alternatively do good PhD students and researchers spend some part of their day, regularly, reading new research papers and pre-prints? Please give some reasoning of why and why not, if possible. Background for those who are interested: I am a physics student. I am new to publishing research papers and research in general. Just around a week back my summer intern work was submitted to a journal and a pre-print was made public in arXiv (I am the corresponding author). This was my first experience with publishing a paper, so I didn't know what to expect (except for corrections, major/minor revision notification from the journal ;[ ) However, within 12 hours of being public in arXiv, I received a mail from a PhD student and the next day I received a mail from another PhD student. One of the mails was about how his research paper (published a month back) disproved a popular notion which we had mentioned in our discussion and the second one about his research paper and possibility of matching both our results. Both of these PhD students were from two very very prestigious universities, so I was quite surprised. While I expected people asking for clarification or pointing out mistakes I did not expect such a practice. Is it common for all PhD students to do this? Or do good researchers do this on their own? (Which might explain why these two are in top schools). But the topic of this research is very hot with lots of new papers being published every week. So I can't imagine how one would achieve it. Also, until now, I always thought that we will have a specific problem for our PhD, so I am not sure in what way this'll be useful. (I know one has to do literature study before starting.) Since I would be applying for graduate schools next year I though this might be something I should practice, if it helps me as a researcher.<issue_comment>username_1: There are pages like <http://www.arxiv-sanity.com/> to filter the mass of papers on arXiv and give you the chance to read what interests you most. Furthermore, you can also go through all recent papers on your phone (instead of browsing instagram, reddit or whatever), read the abstract and mark everything that might be interesting to later read. I'm not sure if it is expected, that might strongly depend on your institute, group or advisor, but it is surely possible to do it in nicer ways than by simply clicking through arXiv directly. I know good researchers who check arXiv daily, I know other just as good ones that haven't looked at it in months, so there also is no general rule there for after the PhD it seems. Btw, if anyone knows of other nice ways to filter the arXiv feed, feel free to share/add, I'm not yet 100% happy with the ones mentioned that I currently know. Upvotes: 3 <issue_comment>username_2: It sounds like you are not yet a graduate student. Otherwise I'd suggest that you talk to your advisor about this. In the future, if your advisor suggests that you read something, and perhaps comment on it, then you should do so. The theory is that it is always good to make your advisor happy. But a random paper from a random person, especially another student, should put no obligation on you. Especially if you are asked for comment. But you have made yourself "public" by submitting your paper, and therefore a target for such requests. Ignore them if you like or read them if you find them interesting, and comment if you like. But there is no implied obligation. One reason for sending out such requests is a bit sinister. 
The others may just be "fishing" for citations in your future work. In a more positive light, knowing about this other work helps guide you away from problems already solved or about to be solved. However, it is also true that graduate students do need to spend time, perhaps a lot of it, reading papers. But those papers should relate to your own research trajectory, not necessarily those of others. Later in your career you review the work of others as an implied public service to the profession, but it is too early to expect that now. Upvotes: 3 [selected_answer]
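Following up on the suggestion in the first answer above about filtering the arXiv feed rather than clicking through it by hand, here is a minimal sketch that queries the public arXiv API for the most recent submissions in one category and keeps only entries whose title or abstract mentions a chosen keyword. The category, keywords, and result count are placeholder assumptions, and the sketch relies on the third-party feedparser package being installed.

```python
import urllib.parse
import feedparser  # third-party: pip install feedparser

# Placeholders: adjust the category, keywords, and number of results to taste.
CATEGORY = "hep-th"
KEYWORDS = ("holography", "entanglement")
MAX_RESULTS = 50

# Build a query against the public arXiv API (Atom feed).
query = urllib.parse.urlencode({
    "search_query": f"cat:{CATEGORY}",
    "sortBy": "submittedDate",
    "sortOrder": "descending",
    "max_results": MAX_RESULTS,
})
feed = feedparser.parse(f"http://export.arxiv.org/api/query?{query}")

for entry in feed.entries:
    text = (entry.title + " " + entry.summary).lower()
    if any(keyword in text for keyword in KEYWORDS):
        # Print just enough to decide whether the abstract is worth opening.
        print(entry.published[:10], entry.title.replace("\n", " "))
        print("   ", entry.link)
```

The same idea extends to several categories or to matching on author names; the point is only that a few lines of scripting can turn the daily listing into a short, personalised shortlist, in the spirit of the arxiv-sanity suggestion above.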
2019/10/16
543
2,409
<issue_start>username_0: A few years ago I published a book on mathematics and physics. This book received positive feedback from people who read it, but was never widely read and has not sold that many copies. At this point, I would prefer if the book were just made freely available as it is probably never going to make a huge amount of money anyway, so I was wondering if I could make some alterations to the manuscript and then upload it to Arxiv with a summary of the content and the intended reader? Would this be copyright infringement as I suppose I have already signed a contract with the publisher? I don't remember the details of the contract, but I suppose the publisher would not want me to do this, as it means people who might happen to want to read it will do a Google search and see that there is a free PDF copy, whereas the publisher would want to make money from the book wherever possible. Is there something about Arxiv which sidesteps this copyright issue, as I am uploading a slightly different document to Arxiv which will exist independently of the physical book.<issue_comment>username_1: If your publisher cares, then it would be dangerous to do this. They might be willing to give permission if they no longer get revenue from the book. They might even respond positively to a request to give the rights back to you. "Slightly different" is an interpretation. I doubt that your interpretation and that of a publisher/rights\_holder would be the same. But ask. Don't assume anything. Upvotes: 3 [selected_answer]<issue_comment>username_2: It's very dangerous to say "I suppose I have already signed a contract". This is legal stuff. The exact wording is important! If you have signed a contract, *take it out and read it*. What exactly did you agree to? The odds are good you'll have to ask the publisher about whether you can do this. Even if the contract said you can't do this, it's possible they'll still say yes, especially if they've stopped actively trying to sell the book. Nonetheless, they might impose other restrictions - for example they might say you can only post the raw manuscript (i.e. no redrawn figures, no title & half-title pages if they were prepared by the publisher, no book cover [again if it was prepared by the publisher], no special fonts ...). In any case asking the publisher is never going to be wrong, so I recommend doing that. Upvotes: 3
2019/10/16
429
1,814
<issue_start>username_0: To make literature search simpler and more efficient, I'm looking for a good tool that would come up with a list of publications based on a given set of publications/research areas. So far, I've found [Mendeley Suggest](https://www.mendeley.com/guides/web/05-mendeley-suggest) but it does not seem to work all that well. I was wondering if someone in this community might know about a better tool or an altogether better approach for pursuing efficient literature search.
2019/10/16
669
2,844
<issue_start>username_0: I think the title is fairly self-explanatory. I am wondering if any recent studies (past 20 years or so) have been published that look at possible correlations between age and the 'success rate' of PhD candidates? I understand that 'success rate' may be somewhat subjective - I'm thinking of possible metrics such as dropout rate, years to completion, no. of papers published, impact factor 5 years after graduation. The background to this is that I am 37 and planning to apply to Engineering PhD programs in my area this Fall. However, I had a casual conversation with someone recently who is connected to one of those schools, who commented that there might be some thinking on the part of an admissions committee that I may be 'past my peak' for their PhD program (they weren't representing it as their own personal view). However, this seems at odds with what I have read online and advice I have received from a couple of Professors that I know (not related to these schools), which indicates that more mature PhD candidates tend to do well and are quite highly regarded. Now, I'm not trying to point fingers or accuse anyone of ageism here. For all I know, my contact's suspicion might be mistaken, or I may have misinterpreted. But, it occurs to me that it would be great if I could point to some hard data that makes a persuasive case that mature PhD candidates on average perform at least as well as fresh graduates, which might help to pre-emptively 'head off' any doubts regarding my age.
2019/10/17
993
4,298
<issue_start>username_0: Let's say that I take a position at a university, and the university gives me a grant of $X for 2 years. I use the grant to buy a bunch of equipment and make some international trips, which costs me three quarters of $X. After one year, I decide to quit to take another job elsewhere, and I cannot take the grant with me as it was from the university. What typically happens in this situation? Is it possible for the grant to get prorated ($X/2 per year), in which case I would have overspent my budget by a quarter of $X? If not, are there any other repercussions?<issue_comment>username_1: Most grants are not given in a lump sum, but in a sequence of increments---typically 12-months increments, but sometimes shorter or longer. It is true that when you get a chunk of money for a increment, there is typically quite a bit of flexibility in how you spend it within a phase (though some funder require notification if you go significantly "off plan"). You cannot spend more than you've been allocated, however, and asking for an increment in advance is typically either impossible or else requires negotiation and approvals from the funder. In your hypothetical situation, then, if the funder (university or external) gave all of the money up front, and the spend plan was within contractual bounds, they really have no place to object if you happened to spend it faster than anticipated (assuming no other regulations were violated). However... the money was never actually *yours*. It belongs to your institution in some way (probably via your department via appropriate contracting and accounting personnel), and you probably never even had the right to sign for the money. You were just given authority over allocating its expenditure. As such, when you leave, the remaining money (and any tangible assets, like equipment) stay in place. The money will likely go away, since the terms of the grant specified key personnel (you) who are no longer available, but if the parting is friendly enough it may be possible for somebody else to take responsibility and the project to continue with you as a subcontract or consultant. Reputational consequences based on the particulars of how you quit, of course, are another matter entirely. Upvotes: 2 <issue_comment>username_2: The situation you describe is a risk that all universities take when they hire faculty and give them startup packages/grants. Yes, sometimes it happens that people leave after only a year or two after having spent some of their startup money. Well, that’s just life. If your university gave you funds to use and you used them, and did so in good faith, the people at your university will just have to suck it up, though it’s quite possible they will be upset, and their level of upsetness will be correlated with how much money you spent, and, especially, how much money you spent when you already knew you would be leaving. E.g., being seen to spend a lot of money on frivolous things right before announcing your departure will be an obvious bad look, and a sign of bad faith and dishonesty. It is likely that you will suffer potentially significant reputational damage in such a scenario. As for your arithmetic exercise of comparing $X/2 with three quarters of $X, I’d say it’s irrelevant. Unless explicitly stipulated otherwise, “grant for two years” (in the context of internal funding from your own university, at least in the US) generally means “here’s $x, you have two years to spend it”. 
There is simply no expectation that you only “earn” $X/2 with each year of work. In fact, if you leave after *two* years after spending all of the $X, and $X is a large number, then the university will likely still be somewhat upset about this, even though the arithmetic suggests this is “fair”. Finally, one important caveat is that you should carefully read your offer letter and related documents you signed when joining and see whether there are any conditions attached to the $X grant. Sometimes startup packages have explicit conditions that you have to return part or all of the money if you leave the institution after less than a certain number of years. Disclaimer: I am not a lawyer, and your question may have aspects that only a lawyer can answer authoritatively. Upvotes: 2
2019/10/17
1,833
7,688
<issue_start>username_0: I've recently noticed a trend of big names in my sub-field starting new journals whose scopes substantially or fully overlap with pre-existing journals, and the motivation is not very obvious to me. To be clear, these are intended to be reputable journals with rigorous peer review, hosted by major publishing houses such as Wiley or Springer-Nature, so I guess it's not a money-grab or anything similar. To give some more background, my sub-field already has 4 well-established journals devoted to it, as well as >10 journals whose scope intersects with my field (think bioanalytical chemistry vs. analytical chemistry), not to mention the Science and Nature-level journals and the catch-alls such as Scientific Reports. Many of these are highly-regarded society-led journals, with 20+ year histories and the journals all have stable impact factors that range from incremental (1-2), standard (3-5) to relatively high impact (6+), accommodating a wide variety of work. Thus, it seems a bit mysterious to me why someone would go to the substantial effort and expenditure of personal political capital to create a new journal, harangue your colleagues to submit, convince people to review, etc., when there isn't a clamoring need in the community for yet another journal?<issue_comment>username_1: Being able to set editorial policies can have a significant impact on how science is done in the field. Some examples of such policies include: encouraging the use of preprint services, mandatory data/code sharing, transparent peer review process, encouraging replication studies, etc. For the big names who are not satisfied with simply setting the direction of their own labs, setting up a new journal can be another ideal career ambition. Upvotes: 4 [selected_answer]<issue_comment>username_2: > > hosted by major publishing houses such as Wiley or Springer-Nature, so I guess it's not a money-grab or anything similar. > > > These are for-profit publishers. They will not publish unless they expect to make a profit from subscriptions or publication charges. > > big names in my sub-field starting new journals > > > They do it to increase their own fame. Upvotes: -1 <issue_comment>username_3: Great question. I've asked various people about this and thought about it myself (I used to work in publishing). My sense of this is the correct question is less "why?" and more "why not?". Here're some factors in the macro-publishing view: * Global publishing is increasing and has been increasing for a while. [This report](https://www.stm-assoc.org/2015_02_20_STM_Report_2015.pdf) for example finds that "The number of articles published each year and the number of journals have both grown steadily for over two centuries, by about 3% and 3.5% per year respectively"; furthermore, "The reason is the equally persistent growth in the number of researchers, which has also grown at about 3% per year and now stands at between 7 and 9 million." * It's common, especially for researchers in first-world countries, to think that good papers will be published in good journals and everything else isn't worth reading (you didn't write exactly this, but by writing "highly regarded" journals, you imply something similar). Lucky them - many authors from developing countries will never publish in one of these top journals. 
For example in many developing countries a performance indicator is number of papers published in SCI-indexed journals ([example from a brief Google search](https://www.researchgate.net/post/Does_your_institution_pay_you_in_case_you_publish_your_research_in_some_of_the_leading_journals)). If that sounds like nonsense to you because all your papers are already published in SCI-indexed journals, you're one of these privileged authors. * Why can't these authors publish in top journals? From the journal's point of view, because the papers aren't very interesting/novel/high-impact. From the authors' point of view, it could be because they aren't as capable as those people working in first-world countries; equally possible is that they just don't have as many resources. For example without access to LHC data, one is not likely to make breakthroughs in particle physics. So one ends up with boring, low-novelty papers that will never get published in top journals. * These papers still have to go somewhere, however. Where there's demand, someone will come up with a way. A few more factors from the publisher's point of view: * You may not be aware that many of these not-so-good journals are actually solidly unprofitable. It's absolutely not a money grab. The flip side is, they also don't lose *that* much money. Sure their revenue is low, but their expenses are also low (usually because they publish relatively few papers). * Publishers are well set-up to cope with changes in the number of papers received (they have to - the number of accepted papers is not by any measure a constant). That's why many desk editors handle both journals and books. When there are fewer papers than average they spend more time on books; when there are more papers they outsource to freelancers. The result is that estimating the cost of a new journal is a difficult process. There's some estimate of how much each paper is going to cost, but the last journal I helped set up literally cost nothing in extra personnel. It was simply assigned to one of the desk editors and he incorporated it into his duties. * The cost can become substantial if the journal starts publishing a lot of papers; however if the journal actually manages to do that then chances are it will become profitable. (High volume is a key driver in getting indexed by databases such as Web of Science, which in turn is a key driver in getting more submissions & subscriptions, etc.) * Based on that then the cost of the publisher setting up a new journal is pretty low. Therefore if a group of academics come along and suggest starting a new journal, the publisher is likely to be interested. Why not? You risk a relatively small amount in the hope that the journal works out, when it does become lucrative (since it's a reliable and recurring source of revenue). * Besides, even if the journal fails, you've hopefully built contacts with a few more people who might collaborate with you in the future, e.g. by writing books. And then there's the academic end. * Sometimes the would-be editors think they've genuinely identified a niche in the journal market. * Sometimes they're switching publishers. For example the [Journal of Topology](https://en.wikipedia.org/wiki/Journal_of_Topology) was started by the disaffected editorial board of *Topology* when they switched publishers. * Other times they're doing it for prestige. For example, another journal I helped set up involved the publisher identifying what they thought was a niche and writing to an expert in that niche. 
The professor was agreeable but wanted to name two of his postdocs as associate editors. Talking to the two postdocs, it sounded like they were doing it in part because it's prestigious, and in part because it's a novel challenge they've never had to deal with before. * As for the professor himself, he was paid a fairly substantial honorarium to handle the journal: several thousand USD a year. I don't think he agreed to the journal because of this money, he probably also felt it was a reasonable idea and he could contribute to it, but the money probably influenced his decision. **Edit**: One more thing - being able to point to a successful journal later in one's career and be able to truthfully say "I had a hand in starting that journal" is *hugely* satisfying. Upvotes: 2
2019/10/17
938
4,052
<issue_start>username_0: I'm applying for a graduate position. One professor answered my email and asked me some questions. Two of them are listed below: > > What are your short- and long-term academic/professional goals? > > > What are your short- and long-term personal goals? > > > I can talk about my professional goals in the first question, but I don't know how to answer the second question. To what extent should my goals be personal?<issue_comment>username_1: This sounds like an inappropriate question. This professor doesn’t seem to understand the meaning of the word “personal”, which is defined to mean “of or concerning one's private life, relationships, and emotions rather than matters connected with one's public or professional career”. I can’t tell you how you should answer given the somewhat delicate situation. But if someone I don’t know asked me what my personal goals were, I’d tell them that’s none of their business. Upvotes: 0 <issue_comment>username_2: I agree that the question could be interpreted as inappropriate. But, while it's impossible to know the professor's intention (were they trying to screen out applicants who want to start families, etc.?), I think it's safe to answer the question ASSUMING that they are interested in your academic goals that may not be directly career related. For example you might answer the first question with a straightforward goal for your career path (I want to be a tenured faculty, I want to work in research, I want to get a sales job with a technical company, I want to start my own company, etc. etc.). The second question then would be more open ended discussion of what you personally want to get out of your graduate degree (I want to learn to become more independent, I want to learn how to teach myself anything, I want to satisfy my curiosity about how ants communicate [or whatever]). These are personal goals that are related to your graduate training, but aren't necessarily about your future career per se. It's personal preference, but I would probably steer clear of goals or ambitions clearly unrelated to your relationship to the professor (I want to earn a blackbelt, I want to be an Olympic ping-pong player, etc.). I suggest this because it's unlikely that the professor cares about these goals, and the wrong answer has a possibility to seriously hurt your chances of being selected. For example, it's hard to imagine a professor thinking to themselves: 'well their theoretical biophysics training is a bit weak, but we need a good ping pong player for the departmental retreats...'; it's easy to imagine a professor - EVEN INADVERTENTLY - thinking 'wow that olympic training is really going to cut into their hours in the lab...' Upvotes: 2 <issue_comment>username_3: I'd like to argue that this is to some extent country-specific. No country was mentioned in the original post, though. Applying to a concrete position while mentioning "graduate admissions" sounds compatible with the German academic system, for which the following answer is particularly applicable. The professor *may* simply be asking if long-term you plan to work in industry or in Academia and has worded the question poorly by excluding work outside of academia from "professional" and making the decision whether to stay in academia or not "personal" (the questions may have also been rephrased by the OP, in which case some subtleties may have been lost). 
Even if that was not the intended purpose of the question, assuming that this is what is meant may be one way to avoid refusing to answer it. Asking a candidate about long-term career goals may be relevant because of funding: the professor may have an ongoing collaborative research project with industry, and such projects are very useful for those PhD students who want to move to industry at some point anyway. However, they are often less well suited for those who need publications in the best venues, so they should probably be avoided by those interested in an academic career. Upvotes: 0
2019/10/17
871
3,472
<issue_start>username_0: *[Notes and Queries](https://academic.oup.com/nq)* is a very famous and very long-running humanities journal, noted for its pioneering question-and-answer format. It's spun off a number of regional editions, a series of anthologies, and given rise to countless imitators. I'd like to know whether (and if so, which types of) contributions to *Notes and Queries* are peer-reviewed. The journal's own website is silent on the matter—the only reference I can find to the review process is a short statement in the [guidance for contributors](https://academic.oup.com/nq/pages/General_Instructions) that professional editing can help non-native speakers of English "ensure that the academic content of [their] paper is fully understood by journal editors and *reviewers*" (emphasis mine—and note that this statement refers only to "reviewers", not "peer reviewers"). And third-party sources give conflicting or incomplete information on the journal's peer-reviewed status. For example: * The journal index of the [MLA International Bibliography](https://www.ebscohost.com/titleLists/mla-coverage.htm) lists *Notes and Queries* as peer-reviewed, but doesn't indicate whether this applies to all types of submissions. (Maybe only the peer-reviewed submissions get indexed.) * Various author-contributed reports on the [English Literature Journals](https://humanitiesjournals.fandom.com/wiki/English_Literature_Journals#Notes_and_Queries) page of the [Humanities Journals Wiki](https://humanitiesjournals.fandom.com/) mention a (sometimes lengthy) review process for submissions, but they never mention whether this is peer review or editorial review. * [An informal but fairly lengthy review of *Notes and Queries*](https://forums.signumuniversity.org/index.php?threads/journal-highlights-notes-queries.2639/) by <NAME> (then a graduate student at Signum University) states that the journal is not peer-reviewed. So what's the story here? *Notes and Queries* clearly has some sort of review process, but is it peer review? And if so, which of its four types of contributions (Notes, Queries, Replies, and Reviews) are peer-reviewed?<issue_comment>username_1: Having just had a piece accepted for publication in the journal, albeit without having received copies or even mention of any reviews, I decided to ask the editorial assistant, <NAME>, whether submissions to *Notes and Queries* undergo peer review. This is her response in full: > > *N&Q* is a peer reviewed journal. Your Query has been reviewed, though Queries do not always need a detailed review. > > > Upvotes: 3 [selected_answer]<issue_comment>username_2: *Notes and Queries* is listed in Web of Science's Arts & Humanities Citation Index (AHCI). This fact indicates that the journal's "primary research articles must be subject to external peer review" (according to [Web of Science's evaluation criteria](https://clarivate.com/webofsciencegroup/journal-evaluation-process-and-selection-criteria/)). As Web of Science regularly evaluates its journals, one can probably trust that *Notes and Queries* indeed has a peer-review procedure, at least for its "primary research articles". On the other hand, if you look at [Publons' page on *Notes and Queries*](https://publons.com/journal/17053/notes-and-queries), one cannot see any confirmed peer reviews yet (it says "< 10" [less than ten] reviews are confirmed, but that can be a range from zero to nine). Upvotes: 1
2019/10/17
589
2,107
<issue_start>username_0: What is the name of this reference style? In which other fields have you seen it? [![enter image description here](https://i.stack.imgur.com/5E7qd.png)](https://i.stack.imgur.com/5E7qd.png) It replaces author names (in the list of references) with a long underscore when it's the same authors as in the previous item. Also, numbers of the last page are abbreviated if there is too little change. With the latter I am not sure it's part of the reference style, or just a convention. I found this reference in Economics, namely here: <https://www.jstor.org/stable/pdf/2118200.pdf><issue_comment>username_1: I have seen this style in a number of older IEEE publications (in *Computer Science* and related fields). Most commonly, **I have seen this reference style used in *old publications***, and rarely on anything published after 2000. For another example you can check: > > [<NAME>, and <NAME>. "Watersheds in digital spaces: an efficient algorithm based on immersion simulations." IEEE Transactions on Pattern Analysis & Machine Intelligence 6 (1991): 583-598.](https://pdfs.semanticscholar.org/a381/9dda9a5f00dbb8cd3413ca7422e37a0d5794.pdf) > > > From looking at [this related question](https://academia.stackexchange.com/q/97402/4249), it seems like it is possible to use this convention in a number of styles (examples in the linked question are provided in both MLA and Chicago citing styles). They further mention that the style is common for *historical books in math*, used by most of the *humanities in the US* as well as *US high-schools* (from where it could reasonably propagate to the higher levels of education). Upvotes: 3 [selected_answer]<issue_comment>username_2: I don't really agree with the other answer. It's still used in recent publications, and not just in the US. I've recently seen this in the *Annales Mathématiques Blaise Pascal* (random French example) and all AMS publications until very recently (at least 2013, although they changed the style between then and now). "I've never seen this" doesn't really prove anything. Upvotes: 3
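For anyone who wants to reproduce this convention (repeated authors replaced by a long dash or underscore in consecutive entries, as in Chicago-style bibliographies mentioned in the linked question) in their own documents, here is a minimal sketch assuming the manuscript is prepared in LaTeX with `biblatex` and `biber`. The `dashed` option of the `authoryear` and `authortitle` styles does this automatically; the bibliography file name and citation keys below are placeholders, and biblatex prints a dash rather than an underscore, but the convention is the same.

```latex
% Minimal sketch, assuming LaTeX + biblatex + biber.
% dashed=true replaces repeated author/editor names in consecutive
% bibliography entries with \bibnamedash (a long dash) instead of
% repeating the names.
\documentclass{article}
\usepackage[style=authoryear, dashed=true]{biblatex}
\addbibresource{references.bib} % placeholder .bib file name
\begin{document}
Two works by the same authors: \parencite{smith1990} and \parencite{smith1992}. % placeholder keys
\printbibliography
\end{document}
```

Note that the compressed page ranges seen in the screenshot are a separate convention and are not produced by this option.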
2019/10/17
523
2,328
<issue_start>username_0: I may have a postdoc job offer. The professor for the postdoc still hesitates to make a final decision and asked me if I can confirm my commitment (working for at least one year). I have finished a one-year postdoc (not at the professor's institution) and have been very actively looking for faculty positions. If I make the commitment, does that mean I have to stop applying for faculty positions, at least for the first year? I am not sure if this is common practice for postdoc supervisors.<issue_comment>username_1: Don't stop exploring other positions until you have a firm commitment from the professor. The situation is mutual. You both have to agree before anything is finalized. Until then you are still free to consider all options. By firm commitment, I mean something that you can put your signature to. Upvotes: 3 <issue_comment>username_2: Don't stop exploring other positions, as long as you give the current prospective position the same amount of attention as if you weren't applying for other positions. Remember that the academic hiring cycle is usually several months long. Applications you submit now will land you interviews in the spring, with jobs starting in the fall - nearly a year from the application time! If the professor is only willing to give you a one-year commitment, you may need to start applying for faculty jobs this cycle to have a job lined up when their commitment to you ends. In that time period it's also to your benefit to work hard turning the crank and getting out publications that you can show off during your on-site interviews. Thus, your goals (to get publications to land a job) and your prospective boss's goals (get publications out of the postdoc for the next grant submission/tenure review/...) are well aligned. Also keep in mind that a postdoc is a training period, not just a purgatory you must suffer through to get the faculty position. Presumably you're taking this new position because you believe this professor has the ability to help you get high-level publications and can teach you something about how to be a successful faculty member. Amongst all the applications, interviews, research, etc., don't forget to make the most of your current position and learn what you can from this prospective supervisor. Upvotes: 4 [selected_answer]
2019/10/17
535
2,429
<issue_start>username_0: I am planning to apply to a specific master program in Finance. I do not have any experience in Finance and have never worked on any project related to Finance. But I am very interested in studying it because I want to enlarge my research area (which is in signal processing) and have more job opportunities in the future. I really believe that I can excel in the program if I study it. Can you please help me write two or three sentences that can motivate my interest in the program even though I have no work experience in it? I am trying to write a cover letter and I need your help.
2019/10/17
2,173
8,819
<issue_start>username_0: I’m a college student. My class has only 40 students. My professor left the classroom before class ended because, like he said, too many students were using phones while he was talking. It was not like there were phone sounds or the phones rang. He said before leaving that he would let us teach ourselves. In this case, I’m quite curious about something: 1. Would he be able or be allowed to not teach, and let us handle it ourselves and only come in to do the test? 2. Would he be able to fail the whole class? 3. Would he possibly react the same way if only 1 student used their phone without making any noise?<issue_comment>username_1: I've occasionally done "shocking" things to send an important message. It might be that all he intends is to make it a dramatic statement that you (the class) should take it more seriously. I doubt that he intends to not return, and also doubt that he would get any administrative support for that. I've always been willing to fail the entire class. Also willing to give full marks to the entire class. But that depends on individual, not group, behavior. Again, failing person A because of the actions of B would be unethical and would draw no administrative support. If it is only one student then, perhaps, a more nuanced and less dramatic response would probably be called for. In one of my "dramatic explosions" not every student was guilty. But even the ones who were more conscientious got an important message about proper behavior. In this case it was more about their own lack of preparation. There is a scene in *Stand And Deliver* in which the teacher, <NAME>, denies entry to the classroom to a student who doesn't have his homework paper ready to turn in: "If you don't have a ticket, you don't get to watch the show." But hopefully, the need for such drama is infrequent. Upvotes: 5 <issue_comment>username_2: > > Would he be able or be allowed to not teaching, let us handle it ourselves and only come to do the test? > > > Depends on your university's policies, but in general no. Instructors of record have wide latitude over how they manage their classrooms, but refusing to teach for weeks at a time would almost certainly be over the line at virtually every institution. > > Would he be able to fail the whole class? > > > Some professors have [tried](http://www.insidehighered.com/news/2015/04/27/professor-fails-his-entire-class-and-his-university-intervenes) [this](https://nextshark.com/flamingo-kuo-taiwanese-professor-fails-his-entire-class-after-one-negative-feedback-from-a-student/) and the university generally intervenes. Certainly failing an entire class because of a few bad actors is difficult to defend. If there is a reasonable final exam and not a single student manages to pass it, then failing the entire class could be an appropriate outcome. However, this would reflect poorly on the professor's teaching skills (and/or the university's admissions policies). > > Would he possibly react the same way if only 1 student use the phone without making any noise? > > > You'd have to ask him. Certainly this reaction would be more difficult to justify in such a case. Upvotes: 3 <issue_comment>username_3: It depends on the country and if the school is public or private. If the teacher is considered an employee and does that then he can be charged for abandoning work or breaking his contract of services with the school if the entire class goes to complain to the school administration, in which case the teacher could even be fired. 
BUT in the vast majority of situations... yes, he can. To answer the questions: 1 - Would he be able or be allowed to stop teaching, let the class handle it themselves, and only come in to give the test? R - Yes. However petty and childish that is, he can and might. Of course he should be reported, but the best a student can do is ask another teacher of the same subject/class to let them sit in as a listener (just to attend and hear the lecture). Modern times call for modern solutions too, so raising the issue on social media might help. On the other hand, it could be argued that he was teaching you all a lesson. 2 - Would he be able to fail the whole class? R - Yes. He can, or fail someone he doesn't like. In that case, the whole class or the affected students can file an official petition or procedure, based on the deliverable homework in the course plan, to be graded by an external examiner. That is a normal procedure in most universities. 3 - Would he possibly react the same way if only one student used a phone without making any noise? R - That is hard to answer without knowing the person, but petty behavior does come from petty people, so yes, he could, or he might try to use that student as an example for everyone. Having said that, you should consider that it's basic respect and manners not to be messing with your phone when someone is speaking to you. More so, it's a class where you are supposed to be learning, so you'd be wasting the chance. If he stated at the start of the course that he would not allow phones, then the class is at fault. However, keep in mind that there are many teachers, especially older ones, who do not understand just how integrated modern learning and technology have become. You could be taking notes on a tablet/notebook/smart paper device, checking what he says and his references on the phone, sending photos of the whiteboard to the class's chat group, using the cloud to share the materials, etc. If that was the case, the whole group needs to speak to the teacher about it and get clear on what is expected or allowed. Upvotes: -1 <issue_comment>username_4: This could be a defensible action if *everyone* in the classroom were on their phones. As other answers have pointed out, professors tend to be given rather wide latitude by their institutions (especially if they're tenured), and there may arguably be some value in "shock-and-awe" style approaches like this one when used sparingly. However, the sticking point here is this sentence, which I've modified for emphasis: > > My professor left the classroom before class ended because, like he said, **too many students** were using phones while he was talking. > > > "Too many" does not mean all. Therefore, what the professor did was unfair and inappropriate. If there's *one* student sitting there in the classroom who honestly wants to learn and is doing everything they can to pay attention, then the professor is obligated to perform their job out of respect for and contractual obligation to that student. It is not the fault of the students who are *not* using cell phones that "too many" of their peers are using cell phones. There isn't anything you can reasonably expect the non-cell-phone-using students to do in order to get their cell-phone-using peers to stop—and certainly not during the class. Therefore, it is unreasonable to punish the non-cell-phone-using students for their cell-phone-using peers' actions.
Aside from that and whether or not it is permissible, I'd judge this as a major over-reaction, assuming that what you said about the lack of disruptions is true. At the university level, students are mature enough to be responsible for their own education and decisions. Instructors can and should support them, encourage them, and possibly even *cajole* them into making the correct decisions. But the students are still ultimately responsible for making and paying the price for their own decisions. Therefore, if a student wants to sit in a lecture without paying attention, that's really their prerogative. It only becomes something that the instructor needs to address if their choosing-not-to-pay-attention becomes a *distraction* for other students who are trying to pay attention. As, for example, would happen if phones were ringing or otherwise making noise. And at that point, the "dramatic gesture" would be to kick out those students who were creating a distraction, then continue the lecture for the benefit of those students who wanted to hear it. Upvotes: 1 <issue_comment>username_5: > > Would he be able or be allowed to not teach, and let us handle it ourselves and only come in to do the test? > > > It wouldn't come to that. He "counter-provoked" you to realize you (= the students taking the class) were crazy for not making sure that class can be given without interruption, and that it would lead to ruin. Probably and hopefully you will take the hint, individually and collectively, and next time there will be no or almost-no students busy with their phones. > > Would he possibly react the same way if only 1 student used their phone without making any noise? > > > No. He would either ignore it or call out that student and make him/her put their phone away. Upvotes: 1
2019/10/18
2,257
9,275
<issue_start>username_0: I am applying to biomedical science PhD programs and have been working on personal statements for a while now. Many of the programs seem to have a very similar lineup with how they want the statement to look. However, they're usually ordered around a little differently. I've found that there's a very good flow to my statement by starting with what influenced me (e.g. school/past research/etc.), current interests/faculty I want to work, career goals, research XP, and a closing statement that's usually geared toward why the school is a good fit. While this structure always varies between schools, I feel it's best done in how I sort of described here. Every now and then, a school is like "tell us why you like the school, tell us research experiences, tell us what motivated you, then tell us faculty you want and what you learned from research." This is sometimes hard to follow and ends up making my statement structure awkward. Is it okay to not answer the personal statement points always in the order that they list the "points to consider including?" Thanks
2019/10/18
591
2,507
<issue_start>username_0: I am working in the Computer Science field and, while working on an internal review of our department, I have been asked to fill in the following items: > > List of important committees of which you were an executive member (these committees typically own A-journals or A-conferences) > > > e.g., ACM SIGSOFT, ... > > > May I ask what `executive members` are? I have certainly served on the PC of a number of Rank-A conferences in our field, but to the best of my understanding, there are no such `executive members` anywhere. For instance: <https://sosp19.rcs.uwaterloo.ca/organizers.html> Could anyone shed some light on this? Thank you very much.<issue_comment>username_1: I think that the term "executive" here is supposed to mean "you actually had to do something". This would exclude honorary members, or in some cases ex-officio members, who are on the committee for other reasons but do not generally participate in the actual work beyond, possibly, being on committee phone calls. Upvotes: 1 <issue_comment>username_2: These conferences typically have an Executive Committee, consisting of the Chairs of the various threads in the conference. Some typical examples are the Conference Chair, Program Chair, Tutorials Chair, Posters Chair, Educator's Symposium Chair, etc. It will normally be around ten people and they are responsible for the overall running of the conference, with each having primary responsibility for one aspect (other than the Conference Chair, who has overall responsibility). This Executive Committee will meet several times in preparation for the conference, usually face to face and usually at the conference venue at least twice. The meetings are where problems are addressed and, hopefully, solved. An "area chair" may also have a co-chair to aid with their area. They might attend meetings or not, but are ready to serve if there is some issue with the chair. There may also be a few others with somewhat lesser responsibilities. They are arguably part of the Executive Committee. Doctoral Symposium Chair might fall into this category. The Program Chair is a special case and there is usually a Program Committee with several members. They may meet separately from the Executive Committee and are responsible for the overall program. They see that the right papers are accepted and that everything in the main program fits together by assigning papers to sessions. Again, they are arguably part of the Executive Committee. Upvotes: 3 [selected_answer]
2019/10/18
658
2,571
<issue_start>username_0: If a psychological methods journal only specifies that the entire manuscript has to be in APA style, how should I format computer code snippets in the manuscript to be submitted? I see nothing mentioned in the actual APA manual regarding this. Is there any convention? In specific I wonder whether or not I should include syntax highlighting. (The code currently in question is R, but it could also be, say, Python or JavaScript.) --- **EDIT:** The original question referred to the 6th edition of the APA manual. As *HumberSean* answered, computer code style is now [specified in the 7th edition](https://apastyle.apa.org/style-grammar-guidelines/paper-format/font).<issue_comment>username_1: I’m not certain if there’s an official method of formatting code that’s included in the paper itself, but there’s a simple workaround: Put your code on a website like GitHub, then cite that website the same way you’d cite any other website in the APA style whenever you refer to the elements of the code you’ve posted on that site. Upvotes: 1 <issue_comment>username_2: Did you find a working solution? I am in the same position; most of my paper is about code and the output it generates. Psychological methods (an APA journal) has some instructions about this [here](https://www.apa.org/pubs/journals/met/?tab=4): "If you would like to include code in the text of your published manuscript, please submit a separate file with your code exactly as you want it to appear, using Courier New font with a type size of 8 points. We will make an image of each segment of code in your article that exceeds 40 characters in length. (Shorter snippets of code that appear in text will be typeset in Courier New and run in with the rest of the text.) If an appendix contains a mix of code and explanatory text, please submit a file that contains the entire appendix, with the code keyed in 8-point Courier New." Upvotes: 2 <issue_comment>username_3: APA 7 provides for [computer code](https://apastyle.apa.org/style-grammar-guidelines/paper-format/font): "To present computer code, use a monospace font such as 10-point Lucida Console or 10-point Courier New." For my APA 6 thesis, I used italics for my coding examples following APA 6 4.20 (a letter, word, or phrase cited as a linguistic example), taking the position that JavaScript is a language. Upvotes: 4 [selected_answer]<issue_comment>username_2: Another option is to put code in a figure with monospaced font and with syntax highlighting. I ended up using Consolas 8pt in my figures. Upvotes: 0
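If the manuscript happens to be prepared in LaTeX rather than Word, the Courier-New-at-8-point instruction quoted above can be approximated with the `listings` package. The sketch below is only an illustration of that setup (the R snippet and the document settings are placeholders), and it deliberately omits syntax highlighting, in line with the plain monospace presentation described by APA 7 and the journal.

```latex
% Minimal sketch, assuming a LaTeX manuscript: monospace (Courier) code at 8 pt.
\documentclass[12pt]{article}
\usepackage{courier}   % use Courier (pcr) as the typewriter font
\usepackage{listings}
\lstset{
  language=R,                                      % listings also supports Python, Java, ...
  basicstyle=\fontsize{8}{10}\selectfont\ttfamily, % 8 pt type on 10 pt leading
  columns=fullflexible,
  breaklines=true
}
\begin{document}
\begin{lstlisting}
# toy R example, for illustration only
fit <- lm(y ~ x, data = dat)
summary(fit)
\end{lstlisting}
\end{document}
```

If syntax highlighting is wanted after all (as in username_2's figure-based approach), listings' `keywordstyle`/`commentstyle` keys or the `minted` package can add it, but it may be worth checking with the journal first, since they may re-typeset or image the code themselves.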
2019/10/18
532
2,340
<issue_start>username_0: I have worked for 3 years as a PhD researcher at a university in country A. Because of some critical family relocation issues, I had to move to my family in country B. Most of my research work is done, but due to travel restrictions in country B, I cannot go back to country A for a longer period of time. What are the possible options available?<issue_comment>username_1: That needs to be worked out with your advisor and the university. The worst outcome is that you have to abandon. But working remotely may be possible in many fields. You would need to establish a regular communication regimen with your professor, of course, and possibly also find local resources (library, computing...) to enable you to continue. But with the research largely done it may be a simpler matter than you think. Organizing a defense of the dissertation will be a challenge, but, with some expense, it might be arranged remotely. The university likely has AV facilities on its end to enable it. It might not be easy, and it might cost you a bit of time, but, subject to restrictions imposed by the field (lab work, ...) it should be possible. But it will put some burden on your professor and the committee that they will need to agree to. Upvotes: 2 <issue_comment>username_2: Yes indeed, as username_1 says. Talk to your professor. And be as kind and polite and cooperative as you can. Your prof is motivated to help you finish your degree. It looks bad for your prof if you don't finish. Most universities are at least a little understanding about family problems. Your prof may advise that you ask your department or your school faculty for some assistance and consideration. Most especially, they will probably be open to letting you do some of the requirements through things like email and video linkup and such. And they may allow you some extra time for your degree. Especially since you have already done 3 years. It probably means your residency requirement is complete. I went through a lesser version of this in my PhD. My prof got a tenure track position at a different university in a different province. By that time all I needed was to finish my thesis. So I did that while physically located at the other university, but staying a student where I started. The university was somewhat understanding. Upvotes: 2