H: Can a solenoid have its magnetic field "focused"? Imagine a solenoid generating a magnetic field. Can this field be focused so that it exists only in a certain section of that solenoid at a given time? If a typical solenoid looked like: ////////////// ... and then had current applied and generated a field around it: ~~~~~~~~~~~~~~ ////////////// I would like to adjust, say, a dial all the way to the left to generate a field like: ~~ ////////////// ... and then dial it to the right: ~~ ////////////// ... or anywhere in between: ~~ ////////////// Is this possible with a single solenoid, or would I have to use a number of different solenoids, each given current independently? AI: In short, NO. A solenoid would produce a field similar to what you show in the second "picture". The only difference you could make would be to increase or decrease the amplitude of that field (or make it negative), and this field is for the most part inside the solenoid (not outside it). The simplest way to do something similar to what you're after would be to split the coil into two, then alter their amplitudes so that you get maxima or null fields wherever you want in relation to the center of the two coils, simply by changing the magnitude and direction of the current in either coil. Using 2 PWMs with simple filters and 2 solenoids you could accomplish pretty much what you're after. It would look something more like this: //////////// //////////////// < ~~~ > //////////// //////////////// Where the field can be "moved" anywhere between the two greater/less-than signs.
H: What does 25°C mean on a transistor data sheet? The background is, I'm trying to understand how well a typical transistor obeys the Ebers-Moll equation at constant (junction) temperature. See this snapshot taken from the Fairchild datasheet of the 2N3904 transistor. My question is: what is meant by 25°C? Is it (1) ambient (room) temperature, (2) the temperature of the epoxy casing, or (3) the actual temperature of the silicon (junction) itself? Also, how is constant temperature maintained? Will a manufacturer publish information on how it took its measurements for datasheets? In general, there is a temperature gradient from ambient to casing to junction. So the junction is hotter than the casing, which is hotter than ambient. The temperature in the Ebers-Moll model refers to junction temperature. My own speculations: Of course if the answer is (3), we expect all of the lines to be straight since Ebers-Moll gives $$I_C = I_S \exp(V_{BE}/V_T)$$ where \$I_S\$ and \$V_T\$ are constant at constant temperature and \$V_{CE}\$. Note the logarithmic scale for \$I_C\$. So there is definite deviation at 125°C near \$I_C= 100 \text{mA}\$. Is this deviation really due to increased junction temperature due to heat dissipated by \$I_C\$, or is there genuine deviation from the model? I am thinking that it can't be (1) because the junction would get hotter with increasing \$I_C\$ and the line would deviate strongly from straight. For the 2N3904 with \$V_{CE}=5V\$, I calculated that junction temperature increases with \$I_C\$ as 1°C/mA at constant ambient temperature. (\$I_S\$ and so \$I_C\$ increase by about 9% per °C.) Perhaps they can achieve (3) by taking all the measurements automatically in a fraction of a second, before giving it a chance to heat up. Any ideas? AI: It's the junction temperature \$T_J\$, and yes, they would typically take such a measurement in microseconds, before the junction temperature changes. The upward slope of the curves at high base and emitter currents is due to resistive (non-ideal) behavior that is temperature-sensitive, or due to the reduction in beta at high current (so the base current is higher for a given collector current than if the beta was constant). Note that the beta reduction is temperature-sensitive.
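As a cross-check of the 1°C/mA figure quoted in the question: with the device held at \$V_{CE}=5\text{V}\$, and assuming the TO-92 2N3904's junction-to-ambient thermal resistance of roughly 200°C/W (a typical datasheet value; check the actual one), the steady-state junction temperature rise per milliamp of collector current is $$\Delta T_J \approx V_{CE}\,I_C\,\theta_{JA} = 5\,\text{V} \times 1\,\text{mA} \times 200\,°\text{C/W} = 1\,°\text{C},$$ which is why a pulsed (microsecond) measurement is needed to keep \$T_J\$ essentially equal to the stated 25°C.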
H: Connect a 4-pin connector to power supply I currently have a fan with a 4-pin connector that I want to power, but I do not have a power supply with a 4 or 8-pin connection. Is it possible to disconnect this connector piece and connect the individual wires directly? (red and black?) Update: I gave it a shot and removed the connector piece and hooked up the red/black wires to a power supply. This worked. AI: Most fans with a separate PWM-speed-control lead are designed to run at full speed if that lead is left floating. This is a safety measure so that, if the wire comes loose, the system won't overheat due to the fan stopping. In general, all fans designed for use in PCs use the same wiring. Starting from the black wire, they are Ground/return, +power (almost always 12V but some non-PC fans use other voltages; check the label on the fan), tachometer (wire is grounded by the fan a certain number of times per revolution, usually 2 or 4) and PWM. The PWM wire is always on the end to allow a 4-wire fan to be plugged into a 3-pin connector for either no speed control or PWM control by interrupting the +power pin. BTW, the link provided in the comment by M. Y., http://pcbheaven.com/wikipages/How_PC_Fans_Work/, with all of its edits, confirms all of this.
H: Name this short-range optical component It's time for another rousing round of "Name That Component"! Today's participant hails from a Bluetooth combo keyboard/mouse, where it detects finger motion across it which the Bluetooth chip translates into relative motion HID messages. It (supposedly) uses an imager of some sort to detect motion of an object that contacts the surface. Does this little beast have a proper name? (The other side only has a metal key dome under a white sticker which when depressed the Bluetooth chip converts to a middle-click HID message.) AI: That, my friend, is an Optical Finger Navigation Sensor. Specifically, it looks like something out of the Avago portfolio that's been sold off to Pixart Imaging; they currently sell the ADBM-A350 (Datasheet) that I would look at for more information. This also comes in a sensor-only package, the ADBS-A350, that you could use if you wanted to do your own optical design. Essentially, it works the same way an optical mouse sensor works -- a low-resolution image sensor tracks pixel intensities to determine displacement.
H: Response of a stable second-order transfer function to a unit sine wave input In the lecture slides from my university it says that: "The response of a stable second-order transfer function to a unit sine wave input is:" $$Y(s)=\frac{1}{s^2+2\zeta\omega_n+\omega_n^2}*\frac{\omega}{s^2+\omega^2}$$ Isn't this missing an \$ω_n^2\$ in the numerator? Since the standard form of a second order transfer function is: $$H(s) = \frac{\omega_n^2}{s^2+2 \zeta \omega_n s + \omega_n^2}$$ and the Laplace transform of the sine wave input is: $$\frac{\omega}{s^2+\omega^2}$$ Is this a mistake in the lecture slides? AI: You are probably right, but it depends on how you define the transfer function. The sine part is right, while as you can see your \$H(s)\$ is not dimensionless: it has dimensions of \$s^2\$, which is pretty strange for a transfer function[^seconds]. You are safe assuming that's a slide mistake. For the future, keep in mind that checking the measurement units is always a great idea. [^seconds]: the s here stands for seconds, not for the Laplace variable s.
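For reference, with the unity-DC-gain standard form that the question proposes, the response would read $$Y(s)=\frac{\omega_n^2}{s^2+2\zeta\omega_n s+\omega_n^2}\cdot\frac{\omega}{s^2+\omega^2},$$ i.e. a dimensionless transfer function multiplying the Laplace transform of the unit sine input.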
H: Tuning the radio receiver I am trying to understand how exactly to tune the Superhet receiver. In my simulations, I have attached to the initial frequency mixer---> A local oscillator and an AC signal source to model the incoming frequency. Now my question is how will this wanted frequency reach the mixer from the antenna ( Since antenna is exposed to all frequencies ) ? Do i need to add an additional tuning circuit or will this be sufficient ? If an additional tuning circuit is needed, how will I co-ordinate the oscillators attached to them ( since both of them will require different frequency). I believe gang capacitors come into this play ( not sure) but I have only this piece available ( nothing else) : http://www.petervis.com/electronics/tuning-capacitor/tuning-capacitor.html AI: What you have drawn is a demodulator (it can also be used as a modulator and therefore acts as a mixer). Before the demodulator is usually an intermediate frequency (I.F.) circuit that is tuned to a fixed frequency and has several stages of band pass filtering in order to reject out-of-band (unwanted) signals. Before the IF strip is usually the mixer - this mixes the broad array of signals from your antenna (loosely band-limited) to the intermediate frequency for filtering by the IF strip. Tuning is done mainly on the IF strip and setting it up is usually an iterative process. BTW - using 1N4007 diodes will not yield great results because of their slow response - try a BAS16 or 1N4148 - they are much quicker and if you want to run your mixer at beyond 1 GHz, there are others you can choose.
H: Transistor pinout - how to read the datasheet Is the capsule's straight face visible? Or the round one? Is that top view or bottom view? Do all datasheets use the same orientation? This is the datasheet. I'm not looking for an answer specific to this datasheet. I want to know how to identify pins when there's a representation like this shown. Some datasheets use 3D images with pin labels, but not this one. AI: There are two common projection methods for technical drawings. First-angle projection First-angle projection is as if the object were sitting on the paper and, from the "face" (front) view, it is rolled to the right to show the left side or rolled up to show its bottom. Third-angle projection Third-angle is as if the object were a box to be unfolded. If we unfold the box so that the front view is in the center of the two arms, then the top view is above it, the bottom view is below it, the left view is to the left, and the right view is to the right. This is actually the projection method used in the example data sheet. I marked the two planes that were used. So what about the question "how do I know what method was used"? On engineering drawings, the projection angle actually "should be" denoted by an international symbol consisting of a truncated cone, drawn in first-angle or third-angle projection respectively (third-angle being the standard in the USA, Japan, Canada, and Australia). As this "indicator" is often not present, the other option you have is to actually look at the planes and evaluate the view by looking at the properties of the object, as the answer from Vladimir Cravero already pointed out. I would assume that the indication is often omitted because/when the projection type can already be identified unambiguously this way. Source and additional information.
H: Creating printed circuit boards (PCBs) which aren't flat I would like to create a printed circuit board (PCB) which is not flat and which is not shaped into a square, but don't know how to proceed. Can Cadsoft Eagle create such a board? If so, how? What other software packages can be used to create boards such as this? Where can I send such a board for manufacture? Furthermore, I want to create a piece of plastic which has a three-dimensional shape and onto which the printed circuit board will fit. What's a good way to design both the board and the plastic such that it's easy to ensure they both line up? AI: You can create a non-square PCB simply by defining whatever shape you like for the board outline. As for a non-flat one, PCBs just aren't made that way. There are some alternatives that you could consider: Flexible Kapton circuits. These are the orange ribbons you see connecting things like LCD screens. They often have a few small surface-mount components on them, but they're not suitable for through-hole components. Conductive thread. You could use fabric with conductive thread for your circuit. This is only going to be suitable for extremely simple circuits. Multiple PCBs. You could divide your circuit up into small flat pieces connected by wires. This is probably your best option. By using a number of flat tiles, you can approximate a curved surface.
H: Inform microcontroller that the analogue circuit is powered up Basically, I want to pull down the microcontroller pin if some analogue circuit is powered up. The analogue circuit power is 9V, 1.6A. The idea I have is to use something like an IRLZ24N MOSFET: connect the gate to the analogue circuit via a resistor, connect the microcontroller pin to the drain and the source to ground. When the analogue circuit is powered, it will turn on the MOSFET and pull down the microcontroller pin, configured as a pulled-up input (the microcontroller itself is powered via USB). The question is whether there is a better or more correct solution than I suggest above. AI: That's a decent way to do it, but I'd use a bipolar transistor such as a 2N3904 or 2N4401 (MMBT3904/MMBT4401 in SMT). Yes, you need one or two more resistors, but they're more immune to surges. The IRLZ24N can be damaged if you put more than 16V on the gate. It's also gross overkill for the job, and is not guaranteed not to leak enough to overcome a typical weak pullup in an MCU, especially at elevated temperature. If you insist on using a MOSFET, a 2N7000 with a resistor and zener on the gate is more appropriate (2N7002 in SMT). In many cases you could just use the 9V and pass it through a resistor + Schottky (BAT54) clamp to Vdd and read it directly (turn the pull-up off). The resistor could be a relatively high value like 100K. (Schematic created using CircuitLab.) This assumes you're only looking for a go/no-go indication of the presence of some voltage on the analog power supply. If you want to know when it is within spec, you'd need to use a comparator (maybe your MCU has one on-board - many PICs do). In such a case, and let's say the internal comparator reference is 1.024V, you could divide down the 9V so that it gives you 1.137V at 9V in, so that the comparator changes state at about 8.1V on the analog supply (when it is at nominal less 10%). Alternatively, you could use an on-board ADC channel and do the comparison digitally.
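A quick check of the divider numbers in that last suggestion (taking the 1.024V reference and the 9V nominal rail as given): the divider ratio is $$k=\frac{1.137\,\text{V}}{9\,\text{V}}\approx 0.126,$$ so the comparator trips when the analog rail falls to $$V_{trip}=\frac{1.024\,\text{V}}{0.126}\approx 8.1\,\text{V}\approx 0.9\times 9\,\text{V},$$ i.e. nominal less 10%, as stated.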
H: How can a CPU deliver more than one instruction per cycle? Wikipedia's Instructions per second page says that an i7 3630QM deliver ~110,000 MIPS at a frequency of 3.2 GHz; it would be (110/3.2 instructions) / 4 core = ~8.6 instructions per cycle per core?! How can a single core deliver more than one instruction per cycle? To my understanding a pipeline should only be able to deliver one result per clock. These are my thoughts: Internal frequency is actually higher than 3.2 GHz Some parts of the CPU are asynchronous in a way a humble human like myself cannot understand There are multiple concurrent pipelines per core A pipeline can deliver more than result per clock, an instruction can skip pipeline stages and there are multiple prefetcher to keep up I am missing something AI: First, as Keelan's comment and Turbo J's answer point out, the measurement was 113,093 Dhrystone MIPS not native MIPS. The Ivy Bridge microarchitecture of the i7 3630QM can only commit 4 fused µops per cycle, though it can begin execution of 6 µops per cycle. (The number of fused µops in a trace of code is roughly equal to the number of instructions; some complex instructions are decoded into multiple µops that are not fused and some pairs of instructions can be fused into a single µop, e.g., a compare immediately followed by a conditional jump.) Two of your speculations on how multiple instructions can be executed in a single cycle are quite valid and have been used in actual processors. Your first speculation, that a faster internal clock is used, was used in the original Pentium 4's "fireball" ALUs. These ALUs were clocked at twice the frequency of the rest of the core, which was already relatively high. (This was accomplished by using a staggered ALU in which the lower half of an addition was done in one cycle, allowing a dependent operation to use the lower half of the result in the next cycle. For operations like add, xor, or left shift which only need the lower half of operands to produce the full lower half of the result, such staggering—also known as width-pipelining—allows single cycle result latency as well as single cycle throughput.) A somewhat related technique, cascaded ALUs, was used by the HyperSPARC. The HyperSPARC fed the results from two ALUs into a third ALU. This allowed two independent and a third dependent operation to be executed in a single cycle. Your speculation that "there are multiple concurrent pipelines per core" is the other technique that has been used. This type of design is called superscalar and is by far the most common means of increasing the number of operations executed in a single cycle. There are also a few other odds and ends of instruction execution that might be worth noting. Some operations can be more efficiently performed outside of the ordinary execution units. The technique of move elimination exploits the use of register renaming in out-of-order processors to perform move operations during register renaming; the move simply copies the physical register number from one position in the renaming table (called a register alias table) to another. Not only does this effectively increase execution width but it also removes a dependency. This technique was used early with the stack-based x87, but is now broadly used in Intel's high performance x86 processors. (The use of destructive, two-operand instructions in x86 makes move elimination more helpful than it would be in a typical RISC.) 
A technique similar to move elimination is the handling of register zeroing instructions during renaming. By providing a register name that provides the zero value, a register clearing instruction (like xor or subtract with the both operands being the same register) can simply insert that name into the renaming table (RAT). Another technique used by some x86 processors reduces the cost of push and pop operations. Ordinarily an instruction using the stack pointer would have to wait a full cycle for a previous push or pop to update the value for the stack pointer. By recognizing that push and pop only add or subtract a small value to the stack pointer, one can compute the results of multiple additions/subtactions in parallel. The main delay for addition is carry propagation, but with small values the more significant bits of the base value—in this case the stack pointer—will only have at most one carry-in. This allows an optimization similar to that of a carry-select adder to be applied to multiple additions of small values. In addition, since the stack pointer is typically only updated by constants, these updates can be performed earlier in the pipeline separately from the main execution engine. It is also possible to merge instructions into a single, more complex operation. While the reverse process of splitting instructions into multiple, simpler operations is an old technique, merging instructions (which Intel terms macro-op fusion) can allow the implementation to support operations more complex than those exposed in the instruction set. On the theoretical side, other techniques have been proposed. Small constants other than zero could be supported in the RAT and some simple operations that use or reliably produce such small values could be handled early. ("Physical Register Inlining", Mikko H. Lipasti et al., 2004, suggested using the RAT as a means of reducing register count, but the idea could be extended to support loading small immediates and simple operations on small numbers.) For trace caches (which store sequences of instructions under particular assumptions of control flow), there can be opportunities to merge operations separated by branches and remove operations that produce unused results in the trace. The caching of the optimizations in a trace cache can also encourage performing optimizations such as instruction merging which might not be worthwhile if they had to be done each time the instruction stream was fetched. Value prediction can be used to increase the number of operations that can be executed in parallel by removing dependencies. A stride-based value predictor is similar to the pop/push optimization of a specialized stack engine mentioned earlier. It can compute multiple additions mostly in parallel, removing the serialization. The general idea of value prediction is that with a predicted value, dependent operations can proceed without delay. (Branch direction and target prediction is effectively just a very limited form of value prediction, allowing the fetching of following instructions which are dependent on the "value" of the branch—taken or not—and the next instruction address, another value.)
H: How exactly to track oscillator in superhet receiver The IF is 490 kHz. I have learnt that the local oscillator and the tuning block before the IF stage must "be aware of each other" or track. So in my case, the tuning of both the circuits should differ by 490 kHz. See this article: Radio Receivers - Constant Frequency Separation Specifically, it is mentioned that: Since we have to tune the RF amplifier section throughout the entire broadcast band, the frequency of the local oscillator must also vary in a manner that it always maintains a gap of 455 kHz. To achieve this condition, the Local Oscillator and RF Amplifier section are 'ganged', i.e. their tuning condensers are connected/ganged mechanically in such a way that when we tune the variable capacitor in the RF section, the variable capacitor in the local oscillator also changes its value, it 'tracks' the frequency to which the 'Aerial Circuit' is tuned and remain separated from the tuned frequency by 455 kHz up. I want to know how exactly is this tracking done. As far as I understand, the variable capacitor is made common to both the oscillator as well as the front end tuner. How will this lead to a difference of 455 kHz (or 490 kHz in my case) in the tuning of the tank as I vary the frequency of the oscillator? AI: The problem with using identical capacitor sections is that the LO has a proportionally narrower tuning range than the front end, so tracking will be poor. It can be improved by adding a padding capacitor in series with the LO tuning capacitor. In a broadcast AM radio the value of the padding capacitor is usually is a little larger than the tuning capacitor's maximum value. To fine tune the tracking a trimmer capacitor can be added in parallel with the padding capacitor. Some tuning mechanisms have this trimmer built in, along with smaller trimmers in parallel with the tuning caps to adjust tracking at the high end.
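A worked example of why identical gang sections track poorly (using the standard 540-1600 kHz AM broadcast band and the 455 kHz IF from the quoted article; the numbers are only illustrative): the front end must cover 540-1600 kHz, a frequency ratio of about 3:1, which with \$f=1/(2\pi\sqrt{LC})\$ requires a capacitance swing of \$(1600/540)^2\approx 8.8:1\$. The LO, however, only needs to cover 995-2055 kHz, a ratio of about 2.07:1, i.e. a capacitance swing of only about 4.3:1. Putting the padder \$C_p\$ in series with the LO section of the gang gives an effective capacitance $$C_{eff}=\frac{C_{tune}\,C_p}{C_{tune}+C_p},$$ which compresses the LO section's capacitance swing toward the smaller range it actually needs, so the 455 kHz (or 490 kHz) offset is held approximately constant across the dial.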
H: How is EMI avoided inside of an IC? I was re-watching this video about how CPUs are made and at the time of the video the link points to, I thought "Isn't there some EMI problems with all those transistor signals flying all over the place?" Possibly some small amount of radiated interference or something. And that is my question, is EMI a consideration when developing an IC and if so, how is it solved? AI: Everything is really tiny compared to tracks on a PCB so loop areas and potential antenna lengths are probably 10,000 and 100 times smaller respectively for a broad-brush and sweeping estimation. Consider an unintentional PCB loop antenna that could spuriously transmit (or receive) interference - how much smaller will this be on a chip - dimensions of area might be 10,000 times smaller hence a distance of 100:1 doesn't seem unreasonable and it's probably a lot less - consider how far tracks are from the substrate (relative ground) of a chip - 100 millionths of an inch or maybe a bit more? PCB tracks above an earth plane are a much, much wider gaps. I'm not going to give you actual numbers because it will vary between one device and the next but just consider that to make EMI or receive it you need something like an antenna - think how small and ineffective that will be on your average chip. Having said all of that, chips aren't exempt from EMC but it takes a lot more energy to get a chip to roll-over - this energy will be likely more than enough to create a foul-smelling signal on a PCB track that might connect to the chip. How could you ever get a PCB track that could be as resilient as "the chip" - it's always going to be the PCB that causes problems not the chip.
H: Definition of Full-scale error I was looking for the concept of "Full-scale error" (related with laboratory measurement terminology) on the web without success. I would appreciate your help. AI: An error expressed as a percentage of full scale means that any measurement made will fall between the limits given by that quantity. For example, an analog voltmeter with a range of zero to 100 volts specified as having an accuracy of +/-1% of full scale can be off by plus or minus 1 volt when there's between zero and 100 volts across the meter. So, with the meter reading 1 volt, the actual input could be anything between zero and two volts, and with the meter reading 100 volts the actual input could be anywhere between 99 and 101 volts.
H: STM32F4xx input pin reads wrong state on first run but not subsequent I am working on a program which reads the state of a pin at startup to determine an operational mode. When I run the code on first power up, it reads incorrectly, but subsequent resets without power cycle read correct. I can't seem to find an answer for why this might be and I'm a bit at a loss myself. I figure there is some sort of timing issue that is happening and that is why subsequent reads are fine, but I'm not sure how to modify my code to deal with it. The circuit is PI6 connected to either 3.3V or GND, depending on the desired configuration. Here is the relevant code: #define DEVICE_HOST_MODE_PIN GPIO_PIN_6 #define DEVICE_HOST_MODE_PORT GPIOI #define DEVICE_HOST_MODE_CLK_ENABLE() __GPIOI_CLK_ENABLE() #define DEVICE_HOST_MODE_CLK_DISABLE() __GPIOI_CLK_DISABLE() int main(void) { /* MCU Configuration----------------------------------------------------------*/ /* Reset of all peripherals, Initializes the Flash interface and the Systick. */ HAL_Init(); /* Configure the system clock */ SystemClock_Config(); /* System interrupt init*/ /* Sets the priority grouping field */ HAL_NVIC_SetPriorityGrouping(NVIC_PRIORITYGROUP_2); HAL_NVIC_SetPriority(SysTick_IRQn, 0, 0); BSP_InitializeMode(); /* Remaining application code */ } void BSP_InitializeMode(void) { /*Configure MODE signal line */ GPIO_InitTypeDef GPIO_InitStruct; DEVICE_HOST_MODE_CLK_ENABLE(); GPIO_InitStruct.Pin = DEVICE_HOST_MODE_PIN; GPIO_InitStruct.Mode = GPIO_MODE_INPUT; GPIO_InitStruct.Pull = GPIO_PULLDOWN; GPIO_InitStruct.Speed = GPIO_SPEED_FAST; HAL_GPIO_Init(DEVICE_HOST_MODE_PORT, &GPIO_InitStruct); GPIO_PinState state = HAL_GPIO_ReadPin(HOST_READY_PORT, DEVICE_HOST_MODE_PIN); isDevice = (state == GPIO_PIN_RESET); } The SystemClock_Config() method is as generated by STM cube. I am currently compiling with -O0 and -g3 flags. EDIT 1 Some additional information I have discovered: It seems more specifically that I can't read that a pin is pulled high (3.3V). I tried adding another pin and configuring both as pull-up, then grounding the one I want, and both simply read grounded. EDIT 2 As requested, the definition of isDevice is posted below. This is in a header file mode.h #ifndef MODE_H_ #define MODE_H_ #include <stdbool.h> extern bool isDevice; #endif /* MODE_H_ */ And the actual implementation is in the same file as the original code, at the top: bool isDevice = true; Additionally, I discovered a typo. Fixing it does not seem to have fixed the root problem though. I was reading the wrong GPIO pin number. It has been corrected above. Again, changing this to the correct pin number has not resolved the issue, and it still presents with two pins. I have tried setting them for no pull, pull up and pull down, and in no circumstances does it seem to change the result. They always read as grounded, no matter what signal is applied to them. AI: So it turns out this was a simple copy paste error on my part which, due to similarly named macros in adjacent functions, my eyes glassed over. I was initializing one port, while reading another. 
My function above should have read: void BSP_InitializeMode(void) { /*Configure MODE signal line */ GPIO_InitTypeDef GPIO_InitStruct; DEVICE_HOST_MODE_CLK_ENABLE(); GPIO_InitStruct.Pin = DEVICE_HOST_MODE_PIN; GPIO_InitStruct.Mode = GPIO_MODE_INPUT; GPIO_InitStruct.Pull = GPIO_PULLDOWN; GPIO_InitStruct.Speed = GPIO_SPEED_FAST; HAL_GPIO_Init(DEVICE_HOST_MODE_PORT, &GPIO_InitStruct); GPIO_PinState state = HAL_GPIO_ReadPin(DEVICE_HOST_MODE_PORT, DEVICE_HOST_MODE_PIN); isDevice = (state == GPIO_PIN_RESET); }
H: When routing power how is a safe current-carrying wire configuration determined with multiple wires? I've been looking at a device/power-supply pair I was sent recently for a project. The device specs say it needs 24V and will draw 10A max. The power supply provides 24V @ 9A (14A peak). The power supply has 3 posts for V+ and 3 posts for GND (call them V0, V1, V2, G0, G1, and G2 respectively). The cable that was supplied bundles 14 24 AWG wires, connected into a molex receptacle such that 2 wires go to V0, 2 to V1, 3 to V2, 2 to G0, 2 to G1, and 3 to G2. I looked at a load-limit table for 24 AWG wire and it says that for chassis wiring, the wire is rated for 3.5A max and for power transmission 0.577A max. Since the device can draw up to 10A, clearly a single piece of 24 AWG would be too small to be safe in this configuration. The only conclusion I can draw from this is that using 7 wires across 3 terminals spreads the current load out such that 24 AWG is safe. My questions, if you've managed to get this far are: 1) What is the calculation to determine the correct minimum number of smaller gauge wires to use in such a setup (i.e. where you're replacing a single larger wire)? Is it simply max current draw / number of wires? 2) How is the voltage drop due to the wire length calculated when using multiple wires (I know how to do it with one wire)? Thank you! AI: Determine the cross-sectional area of the wire being replaced, and then divide that by the cross-sectional area of the smaller wire in order to determine how many smaller wires you'll need to equal the cross-sectional area of the larger wire. If the quotient isn't an integer, ALWAYS round up to the next largest integer. Determine the resistance of one wire as you normally would, and then the resistance of the cable will be that of the resistances of the separate conductors all connected in parallel. The voltage drop across the cable will then be the product of the current through the cable and its resistance.
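As a sketch of both calculations in C (the 24 AWG figures used here, roughly 0.205 mm² of cross-section and about 0.084 Ω per metre, are typical handbook values rather than anything from the question, so substitute the data for the actual wire):

#include <stdio.h>

/* Typical handbook values for 24 AWG copper; check the actual wire spec. */
#define AWG24_AREA_MM2  0.205
#define AWG24_OHM_PER_M 0.084

int main(void) {
    /* 1) How many 24 AWG strands replace one larger wire?
          Example: a 16 AWG wire of about 1.31 mm^2 (assumed). Always round up. */
    double big_area_mm2 = 1.31;
    int strands_needed = (int)(big_area_mm2 / AWG24_AREA_MM2);
    if ((double)strands_needed * AWG24_AREA_MM2 < big_area_mm2)
        strands_needed++;
    printf("strands needed: %d\n", strands_needed);

    /* 2) Voltage drop with 7 strands in parallel, 10 A, 0.5 m one-way length
          (double the length to include the return conductor if it matters). */
    double length_m = 0.5, current_A = 10.0, strands = 7.0;
    double r_one   = AWG24_OHM_PER_M * length_m;  /* resistance of one strand */
    double r_cable = r_one / strands;             /* parallel combination */
    printf("cable resistance = %.4f ohm, drop = %.3f V\n",
           r_cable, current_A * r_cable);
    return 0;
}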
H: Difference in effect of a filter placed before vs. after an amplifier Is there any difference between a low-pass filter placed before an amplifier and one placed after it? Which is better for a PC audio amplifier? AI: When you amplify something, you are increasing the power. If you look at the cost of components, it will cost you less to filter a "pre-amped" signal. Also, why would you want to amplify the "dirty" parts of a signal? Clean it up, then amplify.
H: Why is the neutral of a UPS, inverter, or generator advised to be earthed/grounded? In my company we had a few earthing-related issues. A consultant was brought in and, apart from solving the issue, suggested grounding the neutral of the UPS and diesel generators that we have. The situation prevented me from seeking clarification from the consultant, but I could not help pondering why. I Googled and it seems a few other results also suggest the same, but the pros and cons are not listed anywhere. I thought this would be the right place to ask, hence this question. AI: I am not sure about UPSs, but a diesel generator is built to generate a voltage between its two (four) output terminals. Whether these terminals' voltage with respect to earth is high or low is not defined, and can't be. The generator per se is not connected to earth, so its neutral terminal might be some volts above (or below) earth, or some tens of volts above it. This might lead to tons of problems, especially when you connect a correctly earthed apparatus to a generator-powered one: their grounds are not at the same voltage, so a spark might occur on connection, much like static on windy days. This should not be harmful to people, but unfortunately electronics is much more delicate and can suffer from static discharges a lot. Tying neutral to earth prevents this problem, and eliminates the risk for operators too. Please note that this might not always occur, since the generator chassis can be connected to the neutral, and the generator is usually placed on the ground, so a neutral-earth connection usually already exists. That's a poor connection, but sometimes it is enough to prevent problems. I am guessing that some UPSs may have the output fully insulated from the AC line input; if that's the case, tying the neutral to ground is a good idea. If the output is not insulated, I'd say that tying the neutral to ground is a bad idea, since you are basically connecting the AC neutral line to ground, and that's something that happens in distribution substations but should not happen in the end-user wiring.
H: mpasm - overlapping variables in a udata directive In MPASM it appears I can do this to make paylen_lo and bytesleft have the same memory location: UDATA paylen_hi RES 1 bytesleft RES 0 paylen_lo RES 1 This uses the same type of trick as used with CBLOCK where 0 bytes is reserved for a location, thus causing two labels to be at the same location. MPASM warns with "Warning[231] No memory has been reserved by this instruction" but the map file shows the location being shared correctly. My problem is - this does not seem to be documented and thus makes me worry it's a hack. The alternatives I've tried are: bytesleft EQU paylen_lo #define bytesleft paylen_lo The first does not work and the second is fine. Do the more experienced PIC developers stick with defines? The only advantage of the hack above is that the symbol is created. AI: I have used the first method a few times, although two names for the same byte is something to avoid unless there really is some advantage. Unfortunately, MPASM throws warnings for some perfectly normal and useful things. You can selectively disable nuisance warnings by the warning number. I guess I must have done that long ago in my setup code since I don't get that warning. I don't like using #define because, just as in C, it is only a string substitution. The assembler replaces one string with another and knows nothing more about it than that. The first method is better since the assembler understands that the symbol is the address of a variable. Another issue with your code is that you are using an unusual byte order. On an 8-bit machine, the byte order of multi-byte variables is up to you. However, there are a few cases where native data values are larger than 8 bits, and the PIC hardware assumes they are stored in least to most significant byte order. This is therefore also how libraries are written. You should use low byte first order unless there is a specific and special reason not to. For example, I have stored IP addresses in high to low byte order because that is how they are sent and received in the IP protocol. But in all such cases there must be a comment flagging the fact that the byte order is opposite from the standard expected order. I also think naming each byte of a multi-byte variable adds more confusion due to clutter than the extra names might help with. For example, I define multi-byte variables like this: myvar res 4 ;32 bit integer In that case, I make a point of always referencing individual bytes by using the variable name plus the explicit byte offset, even when that byte offset is 0. For example: ; Increment MYVAR by one. ; banksel myvar incf myvar+0 ;increment the low byte movlw 0 addwfc myvar+1 ;propagate carry to higher bytes addwfc myvar+2 addwfc myvar+3 The "+n" syntax alerts readers to the fact that this is a multi-byte variable, and then which byte is being accessed.
H: Reducing Ripple in the Zener Regulator In the Art of Electronics (Horowitz & Hill) the following circuit is given as a solution to reduce ripple current caused by variations in input voltage (pg. 69): "An alternative method uses a low-pass filter in the zener bias circuit (Fig. 2.13). \$R\$ is chosen to provide sufficient zener current. Then \$C\$ is chosen large enough so that \$RC >> \dfrac{1}{f_{ripple}}\$. (In a variation of this circuit, the upper resistor is replaced by a diode.)" I don't understand the need for the upper resistor R. In order to minimize variations in input voltage, couldn't we just connect the capacitor directly from Vin to ground? Why do we need to make an LPF with R instead? Since it is mentioned that we could use a diode instead, is there really a point? AI: note: This answer also addresses some issues mentioned in the comments of the question; have a look there too. I'll redraw the schematic for you: (Schematic created using CircuitLab.) Note that I've added \$R_S\$ ('s' as in source). Let's analyze the circuit starting from the output. Q1 is in common collector configuration, meaning that its voltage gain is near unity while current gain is much higher, given that it's in the active region. The voltage across \$R_{load}\$ is approximately the base voltage diminished by a \$V_{BE}\approx0.7\text{V}\$. Assuming that DZ1 is working properly, the base voltage is set by it. The rightmost R must provide enough current for the diode and for the BJT though. The diode and transistor bias current is drawn from the LP filter output. The LP filter is a classic RC first-order filter; note that its output sees a load approximately equal to R (DZ1 is a short circuit for small signals). The LP filter is powered by the PSU, which also feeds the BJT collector through \$R_C\$. Now to your questions: Why is the leftmost R needed? That's because \$R_s\$ is very small. Using only a capacitor would result in an LP filter but its corner frequency would be too high. A source output resistance is not something you want to rely on anyway; it's probably not well characterized. Can we use a diode instead of the leftmost R? If yes, why? You can use a diode instead of R, effectively building a peak detector. What is better? Diode or resistor? Honestly, I am not sure. I guess the answer lies mainly in the PSU spec: keep in mind that a diode there would not limit the current going into the capacitor, which might be a problem. A diode would be slower to follow descending peaks of the PSU but that should not actually be an issue. Why should the voltage drop across \$R_C\$ be less than the voltage drop across the rightmost R? That's to keep the transistor in the active region. If the collector voltage goes too low (high collector currents -> high drop across \$R_C\$) the transistor may saturate and stop working properly. I think the circuit can work without \$R_C\$.
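As a rough sizing example for the filter (the 100 Hz ripple frequency and the 1 kΩ value are assumed here, not from the book): the condition \$RC \gg 1/f_{ripple}\$ gives $$C \gg \frac{1}{R\,f_{ripple}} = \frac{1}{1\,\text{k}\Omega \times 100\,\text{Hz}} = 10\,\mu\text{F},$$ so something like 100 µF provides a comfortable factor-of-ten margin while R still sets the zener and base bias current.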
H: Why does RTS/CTS not completely solve the hidden terminal and exposed terminal problems? I was doing some reading, online and in textbooks, about the RTS/CTS protocol for wireless transmission. Although the textbook I read said that the hidden terminal (and exposed terminal) problem is solved by RTS/CTS, online I saw that the RTS/CTS protocol does not actually solve the problem completely, but only reduces it. The protocol seems pretty neat and to me it looks as if the problem is solved. What am I missing here? Why do multiple sources claim that the problem is not solved? Are there hard assumptions underneath the whole protocol? AI: First of all, it should be noted that the RTS/CTS referred to in the question has nothing to do with hardware wires such as the RTS/CTS wires frequently used for UART handshaking. That having been said, this is a low-level radio-communication problem and would appear to be about as topical here as would be questions about things like XBee. The "hidden/exposed" terminal problem relates to the fact that it's possible to have three nodes, X, Y, and Z, such that Y is close enough to X to receive data when Z is silent, and Z is simultaneously close enough to Y to jam communications from X, but far enough away from X that X cannot detect it. In the general case, Z will have no way of knowing when someone might be sending information to Y, and thus Z will have no way of knowing when its transmissions would jam those of someone else. If X is sending a small packet to Y, it's cheaper for X to simply send it and be prepared to retransmit if it gets jammed, than it would be to try to prevent such jamming. If X is sending a larger packet, however, it's useful to have X start with a small packet that tells Y to expect a larger one, and have Y respond with a small packet that indicates that it's expecting to receive the larger packet. This has a few useful effects. Chief among them: If Y cannot hear a small packet from X, there's no reason for X to send a big packet. Having X refrain from sending a big packet helps avoid cluttering up the airwaves with useless data that would possibly block or collide with other transmissions. If Z hears Y say that it's expecting data from X, then Z will know that it should avoid transmitting on that particular channel during the time the transmission is expected to take place. The RTS/CTS protocol is not a panacea. It can make it so that collisions are unlikely on larger data blocks, but does nothing to solve the problem of collisions with smaller data blocks [fortunately, small-block collisions are generally less costly]. Further, even if both X and Y advertise the fact that X is going to send data to Y, and even if Z is close enough to X and Y to jam their transmissions, there's no guarantee that Z will actually hear the advertisement from X or Y. For example, Z might be communicating with node Q, which is out of range of X and Y, and Q might send data to Z at the same time as X and Y are sending their advertisements. In that case, because Z would not have heard X and Y saying they expected to talk to each other, Z would have no reason to hold off its transmissions on their behalf. Fundamentally, wireless communication is plagued with a certain asymmetry: if A hears something from B, it can know that B sent it, but if A doesn't hear anything, that doesn't imply that B didn't send anything.
Conceptually, it should be possible for a receiver to distinguish between "silence" and "indecipherable noise", and figure that if A gets silence (as opposed to noise) then B can't have sent anything, but I am unaware of protocols that take advantage of such distinctions.
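A minimal sketch of the deferral logic that the RTS/CTS exchange relies on, modeled loosely on the 802.11 "network allocation vector" (NAV) idea; the frame fields and function names are illustrative, not a real driver API. Note that the mechanism only helps when the control frame is actually decoded, which is exactly the failure mode described above (e.g. Z busy receiving from Q while the CTS goes out):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Illustrative frame layout, not a real 802.11 header. */
typedef enum { FRAME_RTS, FRAME_CTS, FRAME_DATA } frame_type_t;

typedef struct {
    frame_type_t type;
    uint32_t     dest;        /* intended receiver of this control frame */
    uint32_t     duration_us; /* how long the coming exchange will occupy the channel */
} frame_t;

static uint32_t nav_until_us = 0;  /* virtual carrier sense: stay quiet until this time */

/* Called for every frame this node manages to decode, including ones not addressed to it. */
void on_frame_heard(const frame_t *f, uint32_t now_us, uint32_t my_addr)
{
    if (f->dest == my_addr)
        return;                          /* addressed to us: handled by the normal RX path */

    if (f->type == FRAME_RTS || f->type == FRAME_CTS) {
        /* Overheard someone else's reservation: defer for its advertised duration. */
        uint32_t until = now_us + f->duration_us;
        if (until > nav_until_us)
            nav_until_us = until;
    }
    /* If this node never decodes the RTS *or* the CTS (hidden terminal, or it was
       busy receiving from another node at that moment), nav_until_us is never set
       and it may still transmit right into the ongoing exchange. */
}

bool clear_to_transmit(uint32_t now_us)
{
    return now_us >= nav_until_us;       /* plus physical carrier sense in a real MAC */
}

int main(void)
{
    frame_t cts = { FRAME_CTS, 1, 3000 };  /* CTS addressed to node 1, 3 ms reservation */
    on_frame_heard(&cts, 1000, 2);         /* node 2 overhears it at t = 1000 us */
    printf("t=2000: %s\n", clear_to_transmit(2000) ? "may transmit" : "defer");
    printf("t=5000: %s\n", clear_to_transmit(5000) ? "may transmit" : "defer");
    return 0;
}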
H: Which resistor to use for this LED I'm looking to buy resistors for a Raspberry Pi project, but am getting conflicting information on which resistance to use for my LEDs. I will have three LEDs. The voltage from the RPi GPIO pins is 3.3 V. At 20 mA current, the forward voltages are 1.85-2.5V, 2.2-2.5V, and 3.0-3.4V for the red, green, and blue LEDs, respectively. Using this website, I calculated all of the resistors to be around 15-82 ohms, which seems low to me, especially considering this Adafruit starter kit comes with 560 ohm resistors for the (near-equivalent) LEDs... Can someone please advise me or help explain the discrepancy? Also, these LEDs will not be in parallel or series; each will be attached to a different out pin on the RPi. AI: The 20mA rating of your LEDs is probably the recommended maximum current. Most LEDs will produce light at much lower currents - I find 5 mA is sufficient for indicator light use. Try the LEDs with different resistors to see what current is actually required to produce sufficient light for your application (more current = brighter light). Also, check the maximum current the Pi GPIO pins can handle - I have a feeling it is less than 20 mA, but can't check at the moment.
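The arithmetic behind the numbers, using the 3.3 V GPIO level from the question: $$R=\frac{V_{supply}-V_F}{I}.$$ For a red LED with \$V_F\approx 2.0\text{V}\$ run at the 5 mA suggested above, \$R=(3.3-2.0)\text{V}/5\text{mA}=260\,\Omega\$ (use 270 Ω or 330 Ω); at the full 20 mA rating the same LED needs only about 65 Ω. That is the discrepancy: the online calculator assumes the maximum rated current, while kits ship larger resistors for a dimmer indicator that is also gentler on the GPIO pin.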
H: Reprogram / modify existing products containing microcontrollers Is it possible to reprogram a device containing a microcontroller if I know the name of the microcontroller and have the necessary equipment to program a clean microcontroller of the same type? To be more specific, I'm wondering if one could manipulate the software of a scientific calculator and what know-how you need to have for this. AI: It depends on the microcontroller of course, but in general, for all commercial products you can expect that the microcontrollers are read-protected. This means that ... you cannot read the machine code (to prevent you from taking the code and changing a few bits, or using parts of the code in other products); ... you cannot reprogram the chip, unless you completely erase everything (see PIC Write-Protect Behavior). So yes, it is theoretically possible to reprogram the chip, but you'll have to start from scratch. And again, the answer may be different for different chips. Also, it might be necessary to temporarily disconnect the chip from the circuit during programming as the surrounding circuitry might interfere with the programmer.
H: HC-06 Bluetooth module not responding to any AT commands I am trying to interface with an HC-06 module but I can't talk to the module over serial. My VCC is connected to 5V (also tried 3.3V) of my Arduino Uno, and GND is connected to ground of it (obviously). I've connected the RXD to pin 11 and TXD to pin 10 (I also tried the opposite). The LED on the module is blinking (waiting for connection). Here is my code: #include <SoftwareSerial.h> #define BT_RX 10 #define BT_TX 11 #define BAUD 9600 SoftwareSerial bt(BT_RX, BT_TX); In the setup method: pinMode(BT_RX, INPUT); pinMode(BT_TX, OUTPUT); bt.begin(BAUD); Then, I'm trying to send the simplest AT command to the HC-06 (without newline character, I do know it's different from HC-05). If everything works, I should get an OK response. I've tried all the baud rates available, both 3.3V and 5V, and tried inverting the RX and TX pins, yet I've never gotten a response from the module. However, the module itself works. When powered on, my Bluetooth devices see it as HC-06 and can connect to it with the default password 1234 (other passwords fail). When my devices connect to it, the blinking light turns to solid ON, meaning that the connection succeeded. My serial console (when I connect using putty or SerialTools etc) can accept input. The Bluetooth module itself does send and receive data. When I short the RXD and TXD pins, I immediately receive what I typed back in my terminal. But I couldn't find any way to talk to the device over Arduino. I've read many tutorials and datasheets of the device, but I just can't talk to it. I am new to electronics (but most of the time understand what's going on easily). What am I doing wrong? AI: I've pulled up the KEY pin. It didn't work. But after some more trials it worked, even though KEY wasn't pulled up anymore. Not sure if it was the high KEY pin that triggered it to work, or if it was something else. Note, for future reference, it defaulted to 9600 baud, and I used 5V VCC from the Arduino, not 3.3V.
H: Output resistance of a multistage BJT amplifier In the amplifier shown: To calculate the output resistance, this equation is used: I understand that the R5/(B+1) comes from resistance reflection, but why do they not take the resistance in the collector of Q7, which I expected to be in parallel with R5, into consideration? AI: Q7's output resistance will be many times that of R5 and therefore the equation is an "OK" approximation but, sure, if you are able to calculate the output resistance of Q7 then put it in parallel with R5 to get a more accurate formula.
H: Heating coil over-drawing current I am currently designing a circuit that includes a nichrome coil used as a heating element. The wall adapter is 25V 5A DC, which leads to different buck converters that takes it down to the appropriate voltages. The Buck converter that leads to the coil is set to have an output of 15A. The Flow to each component including the heating coil is controlled by a number of MOSFETS. I have noticed that upon testing the heating coil (in testing I have separated the component being tested from the others so in this situation the power goes straight from adapter to buck to coil to MOSFET then back to buck), the MOSFET (which is rated at 30A and 60V) is overheating and melting. I have tried using heat sinks, splitting up the current between multiple MOSFETS, and added PWM to allow the MOSFETS time to cool down, but I can only make the PWM rest periods so long before I can no longer achieve the desired heat. Still with all of this, the MOSFETS heat up and begin smouldering. Because of the amount of MOSFETS I've ruined and amount of dangerous fumes I've inhaled, I feel it is time to turn to the experts for insight. My assumption is the circuit is in essence a short circuit, although I would have assumed the coil would have supplied ample impedance. Does the fact that the buck converter is lowering the voltage mean that excess amperes are being drawn? On a side but still related note, if the circuit is in essence acting like a short and drawing uncontrolled current, will this risk damaging the adapter? AI: A circuit diagram of your setup, particularly including your MOSFET driver would be really nice. However, I'll go out on a limb here and suggest that you need to look closely at your drive circuits. It is entirely possible that you are driving your MOSFET gates from 3.3 volt logic. This is perfectly doable - as long as you are using MOSFETs with logic-level gate thresholds. A lot of power MOSFETs need a minimum of 4 volts on the gate to ensure full turn-on. If you're only giving it 3.3 volts, it will only turn on partially, and will dissipate too much power. It's important that you realize that operating a transistor within both voltage and current limits can still kill it if you don't get rid of the heat dissipated, and at high currents it's easy to generate too much heat. There is a quick check for this (if you want physical proof): Get ready to sacrifice one more MOSFET, but hey, who's counting, right? Drive a heating element full on. Quick like a bunny, measure the voltage across the nichrome and the voltage across the MOSFET. If your MOSFET voltage is not less than about 10% of the nichrome voltage, you're doing something wrong, and less is better in this case. You don't mention your drive voltage, but it has to be less than 25 volts. Let's say it's 20 volts, and let's say the current is 10 amps - this is just to illustrate, OK? Then total power dissipated is 200 watts, and the effective resistance of the total load is 2 ohms. If your MOSFET is fully on, I'd expect an Rds of .1 ohms or less (and this will give a MOSFET voltage about 5% of the nichrome voltage). This would provide a MOSFET dissipation of 10 watts. Without a heat sink, this would kill the MOSFET, so you need a heat sink in any case. And this better not be one of those little U-shaped jobbers, either. You need real heat sink, possibly with a fan. With more airflow you can use a smaller heat sink. 
You need to consult the data sheet for your MOSFET to determine both Vgs(th), the threshold gate voltage, and Rds(on), the on-resistance when the gate is properly driven. Then you will need the specs on your heat sink, specifically the thermal resistance to ambient. You will also need to do a little research on how to specify a heat sink. As for some of your other questions, consider the two voltages you measured. If they both add up to the nominal voltage of your drive converter, you are not drawing too much current, and you don't have a short. You are just fatally abusing your MOSFETs.
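A small sketch of the dissipation and heat-sink arithmetic above, in C (the 10 A and 0.1 Ω figures are the answer's round numbers; the junction limit and the junction-to-case and case-to-sink resistances below are typical-looking assumptions, not a real datasheet, so substitute the real values):

#include <stdio.h>

int main(void) {
    /* Example numbers from the answer above. */
    double current_A  = 10.0;   /* load current through the nichrome + MOSFET */
    double rds_on_ohm = 0.1;    /* fully-enhanced on-resistance */

    /* Conduction loss in the MOSFET when fully on: I^2 * R. */
    double p_fet_W = current_A * current_A * rds_on_ohm;   /* = 10 W here */

    /* Thermal budget: T_rise = P * (theta_JC + theta_CS + theta_SA).
       The figures below are assumptions for illustration only. */
    double t_j_max_C = 175.0;   /* maximum junction temperature */
    double t_amb_C   = 40.0;    /* worst-case ambient */
    double theta_jc  = 1.5;     /* junction-to-case, C/W */
    double theta_cs  = 0.5;     /* case-to-sink (pad/grease), C/W */

    /* Largest sink-to-ambient thermal resistance that still keeps T_J under the limit. */
    double theta_sa_max = (t_j_max_C - t_amb_C) / p_fet_W - theta_jc - theta_cs;

    printf("MOSFET dissipation: %.1f W\n", p_fet_W);
    printf("heat sink must be better than %.1f C/W to ambient\n", theta_sa_max);
    return 0;
}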
H: ATmega128 Extended I/O This is my first time working with a "bigger" microcontroller, the ATmega128. So far I've worked with ATmega328, ATmega8 etc. Reading through the datasheet, I didn't understand a couple of points: That the AVR instruction set can support a maximum of 64 I/O locations. Why is that? Does the AVR instruction set take in 6 bits for an I/O location? To accommodate the extended I/O in ATmega128 (160 extra ones), these 160 locations have been mapped to the internal SRAM right after the 64 existing ones. So in effect out of the 4KB SRAM that ATmega128 provides, 160 is taken up by these extended I/O locations. What happens if the ATmega103 compatibility fuse is set (programmed by default)? Do I have access to these 160 locations then? How exactly do I access these extended locations? The datasheet asks to use LD/LDS/LDD and ST/STS/STD. Any examples of this, as I've never used assembly before? I am kind of confused how this would integrate into my C code workflow. Let's say I need to access DDRF and PORTF. Would I need to know their address (in SRAM) in advance or is there some macro in avr-gcc? Any code samples or links to read and understand the above would be great! AI: That the AVR instruction set can support a maximum of 64 I/O locations. Why is that? Does the AVR instruction set take in 6 bits for an I/O location? Most AVR instructions put their arguments as part of the machine code word. IN, OUT, SBI, and CBI have no more than 6 bits available for arguments. To accommodate the extended I/O in ATMega128 (160 extra ones), these 160 locations have been mapped to the internal SRAM right after the 64 existing ones. So in effect out of the 4KB SRAM that ATMega128 provides, 160 is taken up by these extended I/O locations. Incorrect. All 4kiB is available, 160 addresses further up. What happens if the ATMega103 compatibility fuse is set (programmed by default)? Do I have access to these 160 locations then? No. You lose the 160 extended registers (and the SRAM moves back to fill them) plus the last 96 bytes of SRAM. How exactly do I access these extended locations? With normal memory access instructions. The datasheet asks to use LD/LDS/LDD and ST/STS/STD. Any examples of this, as I've never used assembly before? LDS R16, $61 ORI R16, $08 STS $61, R16 LDS R16, $62 ANDI R16, $F7 STS $62, R16 (note: ORI and ANDI only work on registers R16-R31, and STS takes the address as its first operand; $61 and $62 are DDRF and PORTF in the ATmega128 data space) I am kind of confused how this would integrate into my C code workflow. Let's say I need to access DDRF and PORTF. Would I need to know their address (in SRAM) in advance or is there some macro in avr-gcc? AVR Libc handles all that for you. DDRF |= _BV(PF3); PORTF &= ~_BV(PF3);
H: Bipolar transistor switch base current calculation example from PEFI seems wrong? I'm reading through Practical Electronics for Inventors, 3rd edition, on bipolar transistors, and they provide this example of a transistor switch: I'm a bit confused by the calculation at the bottom for the base current: \$I_{B} = \frac{V_{E} + 0.6V}{R_{1}} = \frac{0V + 0.6V}{R_{1}}\$ Shouldn't the base current be calculated as follows? \$+V_{CC} = V_{R_{1}} + V_{BE} + V_{E}\$ \$+V_{CC} = I_{B}*R_{1} + V_{BE} + V_{E}\$ \$+V_{CC} = I_{B}*R_{1} + 0.6V + 0V\$ \$I_{B} = \frac{+V_{CC} - 0.6V}{R_{1}}\$ AI: I'd agree with you. When the switch is on, all the charge flowing through the resistor can only flow through the base-emitter junction of the transistor. The voltage across \$R_1\$ is \$V_{CC}-V_{BE} = V_{CC} - 0.6\mathrm V\$, making the current \$\frac{V_{CC}-0.6\mathrm{V}}{R_1}\$.
H: The various terms for voltage I am trying to get a handle on some electrical circuit diagrams I've been encountering as I try to get back into electronics and building electronic gadgets. I've encountered a number of terms for voltage, and I am not sure what they all mean, so I was hoping someone could explain them to me. Here they are, along with what I believe they mean (if I think I know): Vpp (Voltage peak power?) Vp Vcc Vc Vr Vdd Vi Va Vt V0 or 0V (Negative or common "terminal", source of lowest potential?) There are also the common Vin and Vout designations, which are pretty self explanatory. I've also seen positive and negative voltage indicators in AC circuits, but that is also self explanatory. Are some of these simply context specific, with meanings only valid within a given circuit? Are they all commonly used? AI: As suggested by other answers, most of this is pretty arbitrary anyway Vpp : peak-to-peak voltage (for AC waveforms), historically this would also be used for the programming voltage for EEPROM or flash memory (particularly those devices that did not generate their own programming voltage on-chip). Vp : peak voltage (again, for AC) referenced to system ground, or 0V. May also be used for pull-up voltages (which could also be Vpu). Vcc : positive power supply for many IC's, traditionally this referred to BJT based ICs, the 'cc' referring to the collectors of the integrated transistors. Often this was matched with a negative supply, Vee ('ee' referring to the emitters of the transistors). Vc : Collector voltage for a BJT, similarly Ve, Vb may also be used for the emitter and base while Vs, Vd and Vg may be used for the source, drain and gate of FETs. Vr : reverse voltage, particularly when referenced to diodes. You may also encounter Vz used to indicate a zener voltage. Vf is used to indicate the forward voltage drop of the diode. Vdd : positive power supply for many IC's, traditionally this referred to FET (NMOS, PMOS, CMOS) based ICs, the 'dd' referring to the drains of the integrated FETs. Often this was matched with a negative supply, Vss ('ss' referring to the sources of the FETs). Vi : input voltage. Va : used to indicate an internal analogue voltage point. Vt : May be used for the Thevenin equivalent voltage, or as suggested by WhatRoughBeast the threshold voltage (for a comparator or similar for example), or the termination voltage (also known as Vtt in the case of DDR=type memories). Vo : Output voltage for op-amps and the like. 0V : Zero-volts, not to be confused with Vo, refers to the system ground. Also, Vhsys : may be used for the hysteresis voltage of a comparator type circuit.
H: Can I use a PNP BJT as a current sinking voltage supply? I am using a counter MC100E016 and I need to connect a 3 V supply at the Vtt terminal that is able to sink current. Can I use a PNP transistor with 3V supply at its emitter and the collector connected to the ground with some negative bias at the base as a current sink? The Vtt terminal can then be connected to the emitter of the BJT. Will it work? AI: Vtt must be held at 2V below Vcc, so if your main supply is +5V then you need a negative 2V regulator connected between Vcc and Vee. A PNP transistor with 2.6V on the Base would work, but there is an easier way - just replace each 50 Ohm termination with the Thevenin equivalent of 82.5 Ohms to Vcc and 124 Ohms to Vee (GND).
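For reference, here is how those termination values fall out, assuming (as the answer does) Vcc = +5 V and Vee = GND, so Vtt = Vcc - 2 V = 3 V. The resistor to Vcc (call it R1) and the resistor to ground (R2) must satisfy \$R_1 \| R_2 = 50\ \Omega\$ and \$5\text{ V}\cdot\frac{R_2}{R_1+R_2} = 3\text{ V}\$. The second condition gives \$\frac{R_2}{R_1+R_2} = 0.6\$, so \$R_1 \cdot 0.6 = 50\ \Omega\$, i.e. \$R_1 \approx 83.3\ \Omega\$ and \$R_2 \approx 125\ \Omega\$ - the 82.5 Ohms and 124 Ohms quoted above are simply the nearest 1% standard values.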
H: Power calculation of a transformer using only current and resistance If I use \$P = I^2R \$ for primary and secondary will that tell me the effective power being dissipated? If I divide secondary power by primary power and multiply by 100%, would that tell me the efficiency of the transformer? I haven't considered the voltage or phase angle of the primary but since the inductive impedance is allowing the power to return to the source is it okay that I do not consider it in my efficiency calculation? AI: Assuming that the magnetizing current on the primary side is negligibly small, first define: Pp: Primary side power applied to the transformer. Vp: Primary side voltage. Ip: Primary side current. Ps: Secondary side power delivered by the transformer. Vs: Secondary side voltage. Is: Secondary side current. Rs: Secondary side load resistance. (All values are effective values.) If I use P = (I^2)*R for primary will that tell me the effective power being dissipated? What is R doing on the primary side? Do you mean the reflected R seen from the primary side? If so, the reflected load is \$R_{ref} = \left(\frac{N_p}{N_s}\right)^2R\$; where Np and Ns are the numbers of turns on the primary and secondary windings. So, the primary side power consumption will be (for an ideal transformer): $$ P_p = V_pI_p = I^2_pR_{ref} $$ If I use P = (I^2)*R for secondary will that tell me the effective power being dissipated? Yes. \$P_s = I_s^2 R_s = V_s I_s\$. if I divide secondary power by primary power and multiply by 100%, would that tell me the efficiency of the transformer? Exactly. That's the definition of the transformer efficiency. $$ \text{Efficiency} \overset{\triangle}{=} \dfrac{P_s}{P_p} $$ inductive impedance is allowing the power to return to the source You should consider that transformers work differently than inductors. An idealistic transformer whose primary side inductive reactance is very high does not behave inductively at all, because the transformer is designed such that the primary side inductor drains only a tiny amount of current from the supply at its rated frequency when the secondary side is open.
H: Can an ADS1231 be used in place of an ADS1115? I have a working millivolt recording device which uses the ADS1115 16-bit adc. I would like to move up in resolution to the ADS1231 which is 24-bit. The ADS1115 is described as an i2c-compatible serial interface device using SDA and SCL lines to the microcomputer. I've never used the ADS1231. It is described as a "two-wire serial digital interface" using SCLK, DRDY/DOUT, CLKIN lines. Does this mean it's easy to exchange chips or next to impossible? AI: "Does this mean it's easy to exchange chips or next to impossible?" That of course depends on your level of expertise, invested development effort (PCB made? code completed?) and the amount of rework effort you are prepared to spend. Those chips come in different housings, so if a PCB has been made it will need rework. They also use a totally different communication protocol, so if software has been written it will need rework. The 1115 has a much higher maximum data rate, so that might also be a problem. I have not looked at the electrical interface :) But you could have concluded this all yourself after a 60-second scan through the two datasheets!
H: Can I use + charge 3 LiPo cells in series with 3 separate charge controllers? I'm looking at this circuit from SparkFun. It's an mppt + LiPo charger, which is exactly what I need, but I need several cells. I'm actually fairly content to just build 3 of these, and wire each in series, but will that cause any balancing / other issues? I'm assuming as long as each charge controller is grounded correctly (i.e. one's ground is the next one's VCC), it should be fine, but want to confirm. AI: Yes, this will work PROVIDED that each cell has its own PV panel. See below for cautions. Each PV panel has to be connected to Vin and ground of its related charger and NOT to the other PV panels. The batteries can be connected ground1 = system ground, V+1 = ground2, V+2 = ground3, V+3 = Vout. PV panels should be nominally identical and batteries (cells) should be nominally identical. With this arrangement, charging may occur at slightly different rates due to variations in panels, insolation variations (which should be minor), slightly different contamination levels on panels etc. This will lead to one cell being charged before the others and one "coming last". If the differences are as small as they should be this will not matter. If there is more than enough sun the others will catch up in the same day. If panels are at say 90, 85, 75% at the start of a day then as long as discharge is not more than 75% of max possible all will be well. If say the cutoff point was 3V/cell then monitoring cell voltage individually and stopping discharge when any one cell dropped to 3V would be easy and safe. Recharging would then start at about 15%, 10% and 0% capacity. On recharge, when the 1st cell reached the CC/CV knee its charge rate would slow (if the charger was able to deliver I_chg_max) and the others would start to catch up. As long as on average there is enough sun to sometimes deliver full charge between discharges the system will self balance, as long as V_cell_min is not allowed to drop below 3V or whatever lower target voltage is set. Wherever imbalance may occur - whether in this system or others - discharging without individual cell monitoring risks discharging one or more series cells to below their safe minimum level. Modern electronics makes per-cell monitoring cheap and easy and there is little point in omitting it in systems where imbalance is likely.
H: EL wire inverter to molex inside computer I am trying to rig some EL tape into my computer, but I am unsure where to go at this point. So far, I know that I'm buying this inverter http://www.adafruit.com/products/448?gclid=CPSI1IOS-b8CFZTm7Aodj1IAAQ As well as the EL tape, and it would now take 12 volt DC. But I don't know where to go from here. The inverter just has a + and -, and I know I'll be soldering it, but I don't know what it is soldered to. Should I just get a molex to molex and cut off one end? Sorry I couldn't add more relevant tags, I'm new. Edit - I was essentially asking what else I should get to wire it to molex, and if that would work. The answer below is exactly what I was looking for. AI: Search for something like this: Sorry for the image, I could not find anything better. And yes, that's from ebay. This kind of adapter is great because you can put it anywhere without reducing the number of available connectors. Just cut away the fan connector and solder your EL inverter there. Warning: the +12V line is the yellow wire. Usually fans run on +12V so that should not be a problem, however the connector in the picture seems to run from the +5V, I'm not sure though. Just be careful since you can find connectors that power the fan through the +5, the +12 and even between the +12 and the +5, to get 7V and run it quieter.
H: How can I increase the torque, but not the speed of this DC motor to gear with a potentiometer setup? Short video of the machine running. It's made of three AAA batteries in series giving a total of 4.5V to the DC motor, these are connected through a potentiometer to control the speed. The DC motor is connected to a gear that runs a music box playing Fur Elise, but its too fast, and when speed is reduced, it can't make the gears go around. Off my head it seems there are several methods to get this fixed, I am asking here to get some tips on the best. Possible solutions: Change gearing (what gear setup should I use?) More powerful DC motor Change batteries AI: The very first thing you need to do is to clean up the drive so the gears mesh properly with no wobble. One way to do that is to drill and ream the existing bore of the larger spur gear to accept a bushing with an OD which will give a slip fit in the spur gear's new bore and an ID which will give a slip fit around the music box movement's input shaft. There also needs to be a radial hole in the bushing which will allow the spur gear's set screw to secure the spur gear to the shaft with the bushing interposed between them. Since what you have already has enough torque to drive the badly misaligned gears as well as the variable load of the music box movement, it seems pointless to change the motor or the gearing when all you need to do is to control the motor speed - without compromising its torque - by using a pulse-width modulated (PWM) motor driver. In essence, what'll happen with PWM is that you'll use the 4.5 volt source to drive the motor, maintaining its output torque, but you won't leave it on all the time. For example, if the motor turns at 60 RPM with 4.5 volts across it 100% of the time, it'll turn at 30 RPM with 4.5 volts across it 50% of the time but because you're not limiting the voltage into the motor with a resistor or suchlike, during whatever time it's on it'll output full torque, just what you want. A simple PWM circuit in hardware is shown, following, with the duty cycle adjustable between about 1% and 99%, and the frequency adjustable by changing the value of C1. The LTspice circuit list is here if you want to play with the circuit.
H: Data over a 5V DC power connector I am looking for a simple way to encode some data over a dc power line. (Probably using a very low cost IC) In my design, I have a port that provides 5V, 200mA for connecting some device, but now I like to extend the capability of my port to send and receive some data over that port. The idea I have is to encode data over 5V line. The distance the signal needs to travel is in the order of 10-15cm. Never more than this. I have no data rate requirement but If I could achieve a 1-2Mbps it would be fantastic. If this is not possible, I am OK with a lower transmission rate. This would be a closed loop system, i.e. I control both ends. Therefore, I don't need to worry about standards etc. AI: You could think along these lines: - simulate this circuit – Schematic created using CircuitLab The data you send needs to be DC balanced i.e. something like Manchester encoded or scrambled. The amplitude of the data needs to be significantly smaller than 5V - maybe 1Vp-p. You'll need a comparator/data slicer on the data out signal to recover the amplitude to logic levels. L2 and C1 form a 1600 Hz low pass filter to smooth the DC recovered - this means your data rate is best as high as you can feasibly go and with Manchester encoded data there will be no content at DC up to the data clock rate - whatever frequency that is it ought to be at least 30 times higher than 1600 Hz i.e. approximately greater than 95kbits per second. If you can send M-encoded data at 1Mbps then that would be a lot better. If your data is intermittent then send a preamble of 1s and 0s to get the data slicer aligned correctly with the data.
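If it helps to see the DC-balancing idea concretely, here is a rough C sketch of IEEE 802.3-style Manchester encoding (my own illustration, not part of the circuit above): every data bit becomes two half-bit symbols, so the line spends equal time high and low regardless of payload, which is what keeps the data energy away from DC and out of the L2/C1 filter's passband.

#include <stdint.h>

/* Manchester-encode one byte, MSB first: a '0' bit becomes the symbol pair 1,0
   and a '1' bit becomes 0,1 (IEEE 802.3 convention). The 16 half-bit symbols
   are then clocked out at twice the data rate. */
static void manchester_encode_byte(uint8_t byte, uint8_t symbols[16])
{
    for (int i = 0; i < 8; i++) {
        int bit = (byte >> (7 - i)) & 1;
        symbols[2 * i]     = bit ? 0 : 1;   /* first half-bit  */
        symbols[2 * i + 1] = bit ? 1 : 0;   /* second half-bit */
    }
}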
H: BJT circuit analysis confusion In the following circuit: Ib is given as: I don't understand how they arrived at this. I thought it would be a voltage divider between Rsig and Rb||Rbase divided by Rbase: Can anyone explain? Thanks. AI: I assume there is a DC path for emitter current that isn't shown and, for some reason, doesn't significantly change the AC circuit. The first equation is correct. Looking out of the base, there is a Thevenin signal source with Thevenin voltage $$V_{tb} = V_{sig}\frac{R_B}{R_B + R_{sig}}$$ and Thevenin resistance $$R_{tb} = R_B||R_{sig}$$ The signal base current is then given by $$I_b = \frac{V_{tb}}{R_{tb} + \left( \beta + 1\right)\left(r_e + \frac{1}{sC_E} \right)} = V_{sig}\frac{R_B}{R_B + R_{sig}}\frac{1}{R_B||R_{sig} + \left( \beta + 1\right)\left(r_e + \frac{1}{sC_E} \right) }$$ Your approach should give the same equation after some algebra.
H: HDMI and I\$^2\$C I was having a look at the HDMI pinout and I thought: why would they use I\$^2\$C for the display-host communication? My question here is about the design metrics that lead to this choice. HDMI is a quite recent standard while I\$^2\$C is around since 1982. I\$^2\$C is meant for on board, chip to chip communication, moreover the standard allows multiple devices attached to the same bus. An HDMI cable can be long some 15m, so the I\$^2\$C signal should probably use higher than normal voltages to avoid too much noise, adding the necessity of tranceivers on both sides. About the multi device thing, I can't really think how you would attach more than one monitor to a single HDMI port unless you are being very, very non standard. I'm really not an expert in communication protocols but I think that RS485, CAN or some other point to point, full duplex, higher SNR protocol would have been better. So why would they choose I\$^2\$C? note: I know this might be marked as "opinion based", I am hoping that somebody around can think of/knows about some objective reasons. AI: DDC history in HDMI goes via DVI all the way down to VGA. It is implemented in a way that you can simply hook up a standard I²C EEPROM memory chip on the monitor side, which are almost as cheap as dirt (AT24C01 and compatible). I2C signal should probably use higher than normal voltages to avoid too much noise Nope. The +5 Volts tell you a different story. What they might do is a lower clock frequency on the bus. HDMI cables are usually shielded well, too. So why would they choose I2C? It was there in DVI (which HDMI is compatible to) and works and is cheap.
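To make the "it's just an EEPROM" point concrete: on a Linux host with the i2c-dev driver, the first 128-byte EDID block can be read from the fixed DDC address 0x50 with ordinary file I/O. This is only an illustrative sketch - the bus number (/dev/i2c-1 here) is an assumption that depends on your hardware, and real code should check every return value.

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint8_t edid[128];
    uint8_t offset = 0;

    int fd = open("/dev/i2c-1", O_RDWR);   /* bus number is an assumption */
    if (fd < 0) { perror("open"); return 1; }

    ioctl(fd, I2C_SLAVE, 0x50);            /* the DDC EDID EEPROM answers at 0x50 */
    write(fd, &offset, 1);                 /* set the EEPROM read pointer to 0    */
    read(fd, edid, sizeof edid);           /* clock out the 128-byte base block   */

    printf("EDID header byte 0: 0x%02X\n", edid[0]);
    close(fd);
    return 0;
}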
H: Understanding resistors on inputs/outputs of op amps I was looking at the schematic for Dave Jones' uCurrent, and I can't seem to make sense of why there are 270 ohm resistors on the inputs and outputs of the op amps in his circuit. The resistance is low enough that it wouldn't affect the filter in the feedback loop, and is negligible compared to the input resistance of an op amp. Any ideas why these resistors are put in series with the input/output of these op amps? Here's a snippet of the main section I was looking at, with resistors R12 and R8. The full schematic can be found here: Full uCurrent Schematic. Here's a link to full description of the project: Full uCurrent Design Article. The schematic is on page 16, and a short sentence on the resistor in question can be found on page 8 (search for "R10"). Dave mentions that he uses the resistors to provide "output stability". How exactly does this small resistor provide stability, and why? AI: Regarding R10, the virtual ground is driving stray capacitance. If you loaded the output of U2 directly with that capacitance it would probably oscillate. R8 actually hurts stability of U4. It would improve it only if C4 was connected to pin 1, however in most cases users will not load the output capacitively (eg. by attaching a long cable) so it's probably meant to limit the short-circuit current from the amplifier, and keep it from driving the virtual ground too far. R12 limits current in case of a fault such as a large voltage applied to the input- he uses 270R to reduce the number of different part values- but a larger value might be appropriate too. Also perhaps it acts as a jumper for his single-sided layout. ;-)
H: Difference between applying voltage, and voltage across? I'm confused about the two terms, when voltage is applied and across a certain element in the circuit. AI: What Ignacio said is the core of the answer, I hope I can help you out going a bit deeper. Generally the only distinction between "applied voltage" and "voltage across" is how you are dealing with voltage itself: you apply a voltage to a dipole by taking a voltage source and putting it in parallel with the dipole. you usually measure a voltage across some dipole, putting a voltmeter in parallel with it. That's to answer your question. Now what if you apply a voltage to a voltage generator? What would the voltage across it be? The answer is: there is no answer. That is a limitation of the model we are using. Ignacio gives the useful example of a diode: you apply 5V but across it there's only something like 0.7V: that's because your voltage source has an internal resistance where the remaining 4.3V drops. Remember that most of the time when you apply a voltage to a dipole, the voltage across it will be exactly what you are applying. The two wordings, though, do not mean the same thing at all. addendum Since this is at the top now, and I've read some other very good answers, and since the question is very basic I'd like to add two words about potential, a word that every answer uses. A potential is a scalar field associated with a vector field. This vector field must be conservative for the potential to exist, and for the electric field this is true only for electrostatic fields. When things start moving around no potential can be defined. I don't want to be the fussy physicist but a professor once threw a piece of chalk at me for this imprecision (he was quite precise) so since this might be seen by young students I thought this should be pointed out.
H: Heatsink or IC: how to determine root cause of overtemp? I have a manufacturing situation where we perform a functional test on a board and we are getting frequent overtemperature failures from a BGA package with a heatsink on it. I would like to be able to determine if the cause of overtemperature is because of a bad thermal contact with the heatsink OR if the cause is from the IC itself generating more heat than we expect. Here's the details: Large BGA package that dissipates A LOT of power. Very sensitive to heat sink seating BGA package is a part that is picked by our supplier to meet our specified voltage/power requirements. There is variation in power dissipation across devices. Unkwown if this variation is caused by heat-sink application or differences between individual IC's. Device has characteristics of thermal run-away? Higher temp and higher current consumption go hand-in-hand (voltage rails are steady). Heat sink is a copper vapour phase chamber with fins. TIM is a high-performance thermal grease. We have a controlled environment in a chassis with fans forcing air at a constant RPM. I have a way to measure die temperature of the device to a resolution of 1C. And I can heat-up the device "at will" by running an automated test. What I would like to do is to perform a test that checks the efficacy of the heat-sink to rule out the heat sink (or TIM or seating) as a problem. One way to do this is to re-apply another "known-good" heat sink and retest, but that is dependent on operator skill for repeatability, and has other manufacturing workflow problems. Here's an idea for measuring the effectiveness of the heat sink, I'd like to get some input on whether it will be a good idea and/or what would be a better way to test this. The device has a "textbook" heat-up/cool-down curve that fits nicely to RC-time-constant. In the plot below, I have the device starting at "idle" then I make the device "do its job" in a functional test and then turn off the function after 5 minutes. I am most interested in the cooling curve because when it starts to cool, I know that the core-part of the IC is no longer generating heat. The cooling curve is just the package cooling down through the heatsink and PCB. I assume that the heatsink dominates the heat transfer especially early on. In other words, the cooling curve is a measure of the cooling performance of the heat sink and not much else. Moreover the other variables across tests have less variation than the heatsink (eg cooling through PCB). When I normalize the curves to range between zero and one, set the time origin to onset of cooling and look only at the first 80 seconds of cooling, I get nice straight lines in a log plot. Time constant in a cool-running device is 36s with standard deviation <5% over a dozen runs. Time constant in a device where the heat sink has been deliberately impaired to run a few degrees hot was 39s with similar standard deviation. Now the question if I get a hot-running device and I measure time constant that is the same as a cool-running device, can I rule-out the heat sink and its application as a problem? I should clarify that this is in a manufacturing context, not design (DVT). The focus is to be able to determine the cause of failures. AI: Maybe, maybe not, but I'd ask why you are not correlating hot chips with power supply currents, and why you're not putting a temperature sensor on the heatsink. 
If the thermal path from the die to the heatsink is impaired you'll get a different temperature differential between the die and the heatsink. Likewise, if the chip is drawing more current you should be able to predict the final temperature of the die based on normal thermal behavior. And measuring the heatsink temp doesn't require a dedicated contact sensor: a temporary one will do, or a non-contact IR unit should work, since the emissivity of the heat sinks should be pretty uniform. As to why the maybes, consider the following model: simulate this circuit – Schematic created using CircuitLab If the thermal resistance from the die to the heatsink is much larger than the thermal resistance of the heatsink to ambient, and the thermal capacity of the die is much less than the capacity of the heat sink (and I would guess both to be true), the latter is the dominant factor in determining the thermal time constant of the heatsink, and thus of the die. In this case, increases in the die/HS thermal resistance will have only small effects on the time constant of the die, but will cause the die to get hotter. You'll have to figure the values for your board to see if this is the case.
H: Actuating a relay (either 5V or 12V) with 3V I am trying to figure out how I can actuate a high amperage 12V relay with only the batteries I have on board. Basically the idea here is to have a 3.7V lithium cell, a momentary switch to actuate the relay, and then let the current flow from the lithium cell through the relay to the load. Is there any way I can do this? I'll be dealing with loads up to 60 amperes, and a lithium polymer battery that can support that kind of discharge. AI: This can boost 3.7 V to 12 V at about 1 to 2 A - enough to drive a sizable relay. Not sure if a relay is needed, though; can a simpler mechanical switch do? http://www.coolcomponents.co.uk/adjustable-4-12v-step-up-voltage-regulator-u3v50alv.html#.
H: Schematics for a simple radio transmitter that can be built only with passive components? Fantasized in some of the survivalist, post-catastrophic novels, movies or docudramas is the radio built only with handcrafted or scavenged components. Beyond the popular appeal of these stories, I think there is a core of truth: passive elements are easier to scavenge and handcraft, can easily be read and calculated without a multi-meter, or can be easily reused. Resistors/condensers can be put in series or in parallel. Coils can be uncoiled and re-coiled from transformers, other coils etc. Is it possible to build a very minimalist radio transmitter that can be built only with passive components, and what would be the schematics for that? AI: Is it possible to build a very minimalist radio transmitter that can be built only with passive components, and what would be the schematics for that? Sure. You can switch transients into a resonant circuit, as others have mentioned. The trouble here is that you end up also transmitting a lot of those transients, which means a lot of broad-band noise. There's also the trouble that spark-gap transmitters are not permitted by international treaty. (Not that regulations matter much in a hypothetical post-catastrophic scenario, unless it's of the authoritarian-regime-with-a-special-emphasis-on-radio-regulation variety.) Interestingly, one can make a CW transmitter (that is, one that transmits just one frequency, without the broadband noise) with passive components. All you need is a high frequency voltage source, which can be generated electromechanically. For a real, working example, see the Varberg radio station. It is, quite simply, an AC motor coupled through a 1:3 speed-increasing transmission to a generator having 976 poles. The generator spins at about 2133 RPM (35.55 revolutions per second), and with 976 poles that makes an output at \$ 35.55 \cdot 976 / 2 = 17.35\:\mathrm{kHz} \$. The generator's output is switched into an antenna to transmit a tone, or into a short for the space between the dits and dahs. There's a bit of additional stuff to provide a matching network and control the motor, but it's all passive. As for the premise of the question, that passives are easier to scavenge, that may be somewhat true. However, discrete active components, like transistors, are not so hard to scavenge either. Certainly, easier than building an electromechanical transmitter. A transistor radio can run off batteries, is more portable, and is more energy efficient. So in most ways that I imagine a post-catastrophic world, I would probably end up with a transistor radio.
H: Space Vector Modulation I'm looking to learn more about this topic so that I can implement it in an amplifier design. I've read around and understand most of the math behind it, but I'm still a little fuzzy on how to implement it. Here are some things I would like to know to start with: Is this a strictly code-related method of generating PWM signals, or is there a way to implement this in hardware? Are there certain methods of commutation that do not work with this scheme (such as HEDs, encoders, or resolvers)? If anyone has any links that would be useful in helping to further understand SVM, I'd greatly appreciate it if you could share them with me. AI: There is a whole bunch of information at Texas Instruments InstaSPIN including white papers as well as product data and application notes [NB: I do not work for TI and do not benefit from posting this] There are 5 videos of a Texas Instruments training course at YouTube called "Teaching Old Motors New Tricks". IIRC, the first three deal with BLDC motors and modelling those systems, and the last two with other systems. They have also uploaded a series of Control Theory videos, but it sounds like you understand that stuff. It is often possible to do a specific, fixed algorithm in hardware. For example, you could implement it using an FPGA. However, if you watch the TI videos, one of the speakers says that in the 90's, some microcontrollers came with specialised space-vector support hardware. However, he goes on to say it was proven in 1996 that centre-aligned PWM provided equivalent benefits, so manufacturers stopped including that specialised hardware. Now the problem you are considering may be more complex, so I would recommend watching the video, or reading the paper, and deciding for yourself.
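Since the answer mentions that centre-aligned PWM with the right duty-cycle maths gives the same result as SVM, here is a rough C sketch of that approach (my own illustration; the normalisation of the phase references to the DC-bus voltage is an assumption). Injecting the "min-max" common-mode term into three sinusoidal references reproduces the duty cycles a classical sector-based SVM routine would compute:

#include <math.h>

/* Compute three centre-aligned PWM duty cycles (0..1) from phase references
   va, vb, vc that are already normalised to the DC-bus voltage (roughly
   -0.5..+0.5). Adding -(max+min)/2 as a common-mode offset is equivalent to
   space-vector modulation. */
static void svm_duties(float va, float vb, float vc,
                       float *da, float *db, float *dc)
{
    float vmax = fmaxf(va, fmaxf(vb, vc));
    float vmin = fminf(va, fminf(vb, vc));
    float vcm  = -0.5f * (vmax + vmin);   /* zero-sequence (common-mode) injection   */

    *da = 0.5f + (va + vcm);              /* 0.5 centres the pulse in the PWM period */
    *db = 0.5f + (vb + vcm);
    *dc = 0.5f + (vc + vcm);
}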
H: Is there a difference in heat produced on a conductor between AC and DC at the same amperage and voltage? Is there a difference in amount of heat produced on a conductor between AC and DC at the same amperage and voltage? For example if you ran 50A at 480V through a conductor would the heat produced be the same for AC (60hz) and DC? AI: Yes, it is the same... assuming for AC you mean 480V RMS, not peak. The power (in watts) should be: 480V * 50A = 24kW However if you are measuring 480V AC peak, the RMS voltage will be ~340V, in which case the power would be less: 340V * 50A = 17kW AC voltage and current vary continuously in a sine wave. Power is calculated using RMS values so that it can be compared to DC in a useful way.
H: Topology for a DC-DC step up converter for a single AA battery with 50V 10mA output I need to design a very compact step up DC-DC converter for the single AA battery (input voltage 0.7-1.5V) with an output of 50V 10mA. The load should draw about 2mA in a typical case, but in certain cases, the load could draw even up to 10mA. The goal is to achieve the most efficiency possible. What topology do you suggest to use for it? Maybe you have some design ideas. AI: This is not going to be easy since it's hard to make semiconductor circuits that switch nicely at only a 700 mV supply. I'd probably use a boost converter chip that can make 3.3 V from the 700 mV, then run the control electronics off of that. Now you can use widely available digital logic to create a nice clean on/off signal that drives the final power switching element in the boost supply that makes the 50 V. Unfortunately you are still left with two inconvenient choices. If you use an NFET as the switch in the power boost converter, it will require probably around 10 V to switch properly. Look around, but I don't think you're going to find a "logic level" fet that can handle the 50 V you need. The other option is an NPN transistor. That can easily be switched from 3.3 V logic and withstand the 50 V, but its maybe 200 mV C-E drop will be a good fraction of the 700 mV input. The efficiency will be quite poor, at least at low battery voltages. I think the deciding factor is how important efficiency is. If it is a high priority, then a more complicated drive circuit using an NFET will be necessary. If run time isn't that important, then the NPN switch will be easier and simpler to drive.
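For a feel of why the low input voltage hurts, consider the ideal continuous-conduction boost relation (the numbers are just the extremes quoted in the question): \$D = 1 - \frac{V_{in}}{V_{out}}\$, so at \$V_{in} = 0.7\text{ V}\$ the duty cycle is \$1 - 0.7/50 \approx 98.6\%\$, and even a 200 mV switch drop eats roughly \$0.2/0.7 \approx 29\%\$ of the available input voltage - which is where the efficiency warning above comes from.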
H: Resistor required for capacitor? I'm currently working on some simple circuits to learn about circuitry, and being a Physics major I have a pretty good grasp of the mathematics and concepts, but I'm a little confused about capacitors. I understand how they work, but am curious about their placement in a circuit. As a simple example let's say I wanted to charge a capacitor in a circuit quickly and then discharge it slowly. Obviously I would slow the discharge rate by connecting the capacitor across a resistor, but the charging process is where I have a question. Would I still need a resistor? I feel like it depends on what the input to the capacitor would be, and resistors could be used as needed to control the rate of charging. Even still I feel like having a resistor would be a safer approach. Not really sure. AI: Well, you know that to charge a capacitor instantly will require infinite current, right? And we know that all real-world power sources have some finite resistance associated with them, as do capacitors (ESR). However, as your intuition suggests, if you just pop your capacitor across a supply you are depending on the parasitic resistance to limit your current. If it doesn't limit your current enough your supply may not be able to handle it and you can get droop or brownout. So it's better to control the charging impedance to a known value that doesn't stress the rest of the system. Sometimes, especially in AC/DC supplies, a negative tempco (NTC) thermistor is used to limit the inrush that charges the main cap. As it heats up, its resistance decreases and the circuit begins normal operation.
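As a rough worked example with made-up values: charging a \$1000\ \mu\text{F}\$ capacitor through \$100\ \Omega\$ gives a time constant \$\tau = RC = 0.1\text{ s}\$, the initial inrush from a 12 V supply is limited to \$12/100 = 120\text{ mA}\$, and the capacitor is essentially fully charged (>99%) after about \$5\tau = 0.5\text{ s}\$. Picking the resistor is exactly this trade-off between peak current and charge time.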
H: Why is this 5V voltage regulator not outputting 5V? I purchased a bunch of LM330T voltage regulators and in my testing, it's outputting about 5.9V when I feed it 6.5V. It's listed as a 5V regulator at Digikey: LM330T-5.0/NOPB Here's the data sheet: LM330 I must not be understanding something very fundamental here. Can anyone explain? AI: Two things: It sounds like you aren't using an output capacitor. Note the schematic on page 4 in the datasheet. "10 µF is the minimum value required for stability and may be increased without bound." Without this capacitor, the output is likely oscillating, which your meter might read as a funny voltage. It sounds like you have no load attached to the output. Load regulation is only specified for loads between 5 and 150 mA. Below 5 mA, the output voltage is likely to increase, at least slightly.
H: Does Thevenin's theorem apply in power plants? I know when we analyze a circuit's Thevenin equivalent, the best we can ever get is 50% power transfer to the load. I would assume that this holds true for power plants (source) and the rest of the grid (load). I realize this is a fairly simplified view. Anyway, I'm taking a thermodynamics class in mechanical engineering this summer and we toured a cogeneration plant on campus. I asked about this at the plant, but neither the plant manager nor my thermo professor seemed to know what I was talking about. This graphic below was produced from data collected by the DOE at LBL. The 62 units lost in the power plant could imply the efficiency of a typical power plant, or be half of the current maximum of about 60% efficiency divided in half due to Thevenin. Anyway, this is just speculation. I'm hoping a power engineer can weigh in? AI: I know when we analyze a circuit's Thevenin equivalent, the best we can ever get is 50% power transfer to the load. That's a misunderstanding of the maximum power transfer theorem, according to which the maximum power that can be delivered to the load is 50% of the maximum power available from the source. But that doesn't mean that the best power ratio is 50%. For a Thevenin voltage \$V_{th}\$, Thevenin resistance \$R_{th}\$ and load resistance \$R_L\$, the power delivered to the load is $$P_L = \frac{V^2_{th}}{\left(R_{th} + R_L \right)}\frac{R_L}{\left(R_{th} + R_L \right)}$$ which is indeed maximum when \$R_L = R_{th}\$. However, the power developed by the Thevenin source is $$P_{th} = \frac{V^2_{th}}{\left(R_{th} + R_L \right)}$$ and so the fraction of the source power delivered to the load is $$\frac{P_L}{P_{th}} = \frac{R_L}{\left(R_{th} + R_L \right)}$$ So, for \$R_L >> R_{th}\$, almost all of the power supplied by the source is delivered to the load (though this power is much less than the maximum power available from the source). I don't have the expertise to address the reasons for 62 units of energy lost in the power plant so this answer is just to address the proper interpretation of the maximum power transfer theorem.
H: How to place a hole at an angle in Eagle I am trying to create my own package in Eagle 6.6. I have to place holes at a specific angle of 36 degrees. I need to place 10 holes on the circle with an angle of 36 degrees between holes. The package I am trying to create is shown below: I have created a circle with 5 mm diameter. I placed the first hole at position (0, 2.5), so the matching bottom hole is at (0, -2.5). I am confused about how to place the next hole at an angle of 36 degrees. I had placed 10 holes on the circle but those are not at the correct angle to each other. The package I am creating is shown below: Can someone help me with this? AI: x = r cos(angle), y = r sin(angle). Now, cos(18) = 0.95 and sin(18) = 0.31. From that you can calculate the positions of the four holes above and below the horizontal midline. Proceed similarly for the other four holes, using (18+36) degrees.
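If you would rather let a few lines of code grind out all ten coordinates than do the trigonometry by hand, here is a small stand-alone C sketch (radius and starting angle taken from the question: 2.5 mm and 90 degrees); paste the printed X/Y values into the hole positions in the package editor.

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double pi = acos(-1.0);
    const double r = 2.5;            /* bolt-circle radius in mm (5 mm diameter)  */
    const double start_deg = 90.0;   /* first hole at (0, 2.5) as in the question */

    for (int i = 0; i < 10; i++) {
        double a = (start_deg + 36.0 * i) * pi / 180.0;   /* 36 degrees per hole */
        printf("hole %2d:  x = %7.3f mm   y = %7.3f mm\n", i + 1, r * cos(a), r * sin(a));
    }
    return 0;
}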
H: Outputting a clock signal from an FPGA Referring to question here: Click here, I'd like to use the 16 channel LED driver to run my 7-segment displays. I'm using a Spartan 6 LX9 FPGA to implement a 16-bit microprocessor that will take care of it all. The development board I'm using is pretty barebones and has an onboard 100MHz clock, or perhaps I should say oscillator. What I was planning to do is slow it down to perhaps a few megahertz, then output it via a GPIO to the LED drivers and synchronize it that way with a separate Verilog module. My question is, do I have to take special care when trying to output a clock through a GPIO? Perhaps use a particular pin? I'm given to understand that the clock input is supposed to be through a predefined pin. The clock could be slower, as it doesn't really need to be too fast. Here's the online user manual to the development board (pretty basic): Click here. EDIT: More information: I'm concerned about clock skew. The chip I'm looking at writes at every positive clock edge. What I plan to do is to divide up the clock and send it out to the chip, while the FPGA also uses the same signal to synchronize and send data to the chip serially. Timing seems to be pretty tight. A slow enough clock might be the answer though. AI: You can use any GPIO pin you want for output. It doesn't matter if it's a balanced clock or arbitrary data. Also, output at units of MHz shouldn't give you any trouble. In sequential logic, there is a trend to use as few clocks as possible. This is because clocks are very often the most important signals, as many others depend on them. I'll try to depict here the issue of propagation delay, as projected to the thing called clock skew. Imagine you have two clocks, one derived from the other:
__/--\__/--\__/--\__/-- <- original clock
___/-----\_____/-----\ <- clock divided by 2
Note that the clock below is delayed by 1 character (that would be one flip-flop delay, which is minuscule.) It's just greatly exaggerated in the ASCII art. But, in a black-boxed FPGA design you don't have to worry about that. It's all taken care of for you. All those worst-case timing issues are taken into account in the form of the maximum clock frequency that the design can run at.
H: Trying to Understand Spurious Sideband Interference (and some causality problems...) Suppose I have a radio and I intend to transmit a 0.1 µs pulse on a 100 MHz carrier. Fourier theory tells me that this signal will produce some sideband interference resembling a Sinc function. Suppose I have a friend Bob who is using an adjacent carrier. These spurious sidebands will be received as interference for him. However, if I decide to lengthen that pulse to 0.2 µs, the sidebands get "squished in." As a result, Bob hears a lot less interference. Here is where things start to get less intuitive for me. When does Bob start to hear the interference? My gut instinct says he will notice interference at the very start of the pulse. However, Bob cannot see into the future, so he cannot know how long my pulse is. And if he doesn't know how long my pulse is, he can't know how much interference is resulting. I am forced to conclude that the interference does not formally "exist" until the very end of my transmission. However, this seems to go against experience and intuition. Is there a better explanation for all of this? Maybe some time-frequency analysis might explain things better, but I am still at a loss here. AI: Your gut instinct starts out right: it is the leading edge of the pulse that upsets Bob. You will not, or should not, be surprised to learn the trailing edge gives his receive filters a similar but inverted rattle as well. Now there are several ways to look at what happens with these two events he sees. First a simple hand-wavy power ratio approach. With a longer pulse, there's more total energy in it. However the energy in the leading and trailing edges stays the same. So the ratio of adjacent channel interference to transmitted power will fall. Now a more detailed 'what about the zero power in the sinc spectrum'? You will notice that your sinc function crosses the zero axis, so that at some frequencies there is zero power. These frequencies are offset from the centre frequency by a function of the length of the pulse. At these frequencies, the leading and trailing events are so timed in his receive filter that the second one cancels out the first. At the peaks of the sinc function, the second event reinforces the first for a bigger filter output. So, you might be asking, how can two events which cancel each other out in his receive filter, one after the other, really be called zero? The problem is you don't really know what you have in the frequency domain until you have waited long enough. How long? Several times longer than 1/f, when you are trying to resolve frequency differences of f. It turns out when the leading and trailing interference effects cancel, there is indeed zero power at the cancelling frequency, but there are sidebands above and below which contain definitely non-zero power. The filter has to be wide enough to accommodate these sidebands as well. If it was narrow enough to reject them, then its response to the first event would be such a slow buildup (narrow filter = long time delay) that by the time the trailing event arrived, it would cancel a signal that was still way below 'full scale', and was actually zero in the limit of a zero-width filter.
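To put rough numbers on the "squished in" sidebands (this is just the standard rectangular-pulse result, using the figures from the question): a pulse of duration \$T\$ has its first spectral nulls at \$\pm 1/T\$ from the carrier, so the 0.1 µs pulse has its first nulls 10 MHz either side of 100 MHz, while the 0.2 µs pulse pulls them in to 5 MHz. Whether Bob's channel happens to sit near a null or near a sidelobe peak is exactly the cancellation-versus-reinforcement timing argument made above.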
H: How to get more current from 555 timer? My power supply is filtered regulated 5 volt, 500 mA. I am using a 555 in monostable mode to switch the motor on for a certain amount of time after the 555 is triggered. (The 555 is being triggered by a counter circuit). But the output current from the 555 is too low. How can I use a transistor like 2N3055 to get full 500mA current? What other ways are there to achieve the same thing? Will this work? AI: An emitter follower will have a voltage at the emitter about 0.6v lower than the voltage at the base. It will work if you don't have a problem with the reduced voltage level. Note that a 555 with a 5v supply can have an output as low as 3v depending on the output current. The alternative is to use a transistor or mosfet as a low side switch (switching the ground side of the load) simulate this circuit – Schematic created using CircuitLab The mosfet will be a better solution (more efficient) because it has a lower voltage drop across it (drain-source), and it doesn't need a constant current in the gate (for static operation) as the transistor does. Just select a logic level Nmosfet so that it can turn fully on with low resistance.
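If you go the low-side BJT route, a rough sizing example (values assumed, not taken from the answer): to switch 500 mA with a forced beta of about 20 you want roughly \$I_B \approx 25\text{ mA}\$, so with the 555 output sitting around 4 V the base resistor is \$R_B \approx \frac{4\text{ V} - 0.7\text{ V}}{25\text{ mA}} \approx 130\ \Omega\$ - comfortably within what a 555 can source. A logic-level MOSFET only needs a small series gate resistor, which is one more reason it is the easier choice here.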
H: Female-female connector, but what is it called? Question What is the "official" name for this type of connector? (the large black box in the image) More info... I need the connection to connect two PCBs together as one was missing from a system that I purchased. The connection links between the male header pins on two circuit boards and therefore needs to be female-female. After an extensive search on-line I've come to the conclusion that without knowing what it is called I'm not going to be able to find them. AI: They are officially called Gender Changers but they are commonly referred to as GenderBenders (which may or may not be PC these days...)
H: Calculating Z_T (total impedance) Using a mobile device so I can't upload a picture. I need to find Z_total in a circuit. I will simplify the circuit description so that it's easily pictured. The simple circuit contains the 5<20 phasor current source ("<" representing angle here). The source is in series with a 10 ohm resistor and an inductor labeled 30 ohms. This is a simple single loop. I've never seen this before. Normally I see diagrams with inductors as impedance values (ex: j30), or in henries, with which I would calculate impedance with j-omega-L. How can I find Z_total for such a circuit? I tried just summing 10+30 = 40 ohms, but that is wrong. I also considered that maybe it's the reactance, meaning omega-L = 30, and multiplying by j to get Z_inductor, then summing that with the resistor to find Z_total, but that is also wrong. This is on an exam review and we have the answers, but not procedures. I don't have the answer to my described circuit, as the actual circuit is more complex and contains more elements. The common problem though is all the caps and inductors are labeled as ohms, not reactance, impedance or farads/henries. Not sure how to interpret this. I will try to log on later and attach the actual circuit. Thanks AI: When there is an inductor impedance and a resistor impedance in series the total value of the impedance is: - \$Z_{total} = \sqrt{R^2 + X_L^2}\$ Where, in your example R = 10 ohms and XL is 30 ohms (reactive) making the total impedance 31.623 ohms
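For completeness (standard phasor arithmetic, same numbers as above): keeping the impedance in complex form, \$Z_{total} = R + jX_L = 10 + j30\ \Omega\$, whose magnitude is \$\sqrt{10^2 + 30^2} \approx 31.62\ \Omega\$ and whose angle is \$\arctan(30/10) \approx 71.6^{\circ}\$. The labels in ohms are simply the reactances at the (unstated) operating frequency, which is why summing 10 + 30 directly gave the wrong answer.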
H: How to generate Red light from a Blue-Dominated Spectrum I have a white LED light source, the spectrum of which looks as follows --- I want to filter out red light from this white LED source. But as you can see the spectrum dwindles in the red zone. And therefore, when I put a red filter (absorptive filter) in front of my white LED light source, the brightness suddenly drops down to abysmal values. Is there a way in which I could use this spectrum and get red light without compromising on the brightness? I thought about interference and then I came across dichroic filters. But I'm not sure if that'll do it. I also came across these filters called LEE LED filters which are filters specifically meant for LEDs. But I do not understand the scientific principle they work on. I would be grateful if you can follow the link and explain to me the scientific principle. From the spectrum shown in the link, these filters can give you more than 85% red. But the overall transmission is just 10.1% of the original intensity. So it's kinda bleh. Or maybe I don't understand properly. A red LED light source would be just perfect for me, but a high wattage (50W-100W) red LED light source is hard to come across in the market. Let me know if there are other options you guys can think of, for producing cool red LED light. Right now I use a tungsten lamp with a RED absorptive filter, but there is so much heat (infrared) in it that my houseflies get fried up. [I work in an insect flight lab and we use red light for high speed videography, since houseflies don't see well in the red zone]. So I want to switch to LEDs. AI: As the poster states, "A very bright RED LED lamp (~50W) with a lamp tripod or something would be perfect for me." One can easily buy an off-the-shelf 50 to 150 W LED flood lamp with heat sink and drive electronics, but these typically come with white LEDs. Consider simply replacing the central, coin-sized LED module with a red one and making the corresponding small change in drive voltage and current. Either do it yourself or, in (small) batch quantities, the manufacturer can do it for you. For example, I am using this 10 W LED, which is 9 LED dies, 3 in series per string and 3 strings in parallel; it is 350 mA, 10 V. The constant-current driver board is adjustable up to a maximum of 5 A. The RED LED module shown (which I do not have on hand, but it works on the same principle as the 10 W one I have) is 10 LEDs by 10 LEDs, each die being a 1 W LED; the module is about 32 V and 3.5 A. The example spec is for a white LED (which is actually blue, then converted to white) and the voltage is slightly different from red, but it is in the same 3.x to 4.x V range. A commercial 100 W driver with CE marking takes 85 to 250 V AC in and puts out 20 to 38 V DC at 3 A.
H: Dealing with USB VBUS on a 3.3v board I have an STM32 board I am currently designing which is going to use the USB bus for communication and programming. The ground connector on the USB will be tied to the board ground, and the D+ and D- lines will go to the STM32's USB_DP and USB_DM with a pull up on the D+ line to signal full speed. What I want to know is what to do with the USB VBUS line. I have no need for the board to be running off of the 5v USB power, it will always be connected to an external power supply. However I have read that the VBUS line may be important for communications, so I do not want to leave it floating, as that may make the communications inoperable. What is the best thing to do with the VBUS line in a situation like this? AI: According to the reference manual for the STM32F205, page 950, you want to do the following: First, you want to disable the VBUS sensing option. This is done by setting the NOVBUSSENS bit in the OTG_FS_GCCFG register. In this case VBUS is considered internally to always be at VBUS valid level (5 V), so you don't need to run a trace from the VBUS pin on the USB connector over to the STM32F205 at all. Note: you can also skip this step, and tie VBUS from the connector to the VBUS pin on the STM32F205, keeping the VBUS sensing capability (so, for example, you can tell if a USB cable has just been plugged in). Just don't implement the voltage regulator as shown in Figure 351 of the Reference Manual so your device doesn't try to power itself off the USB. You can also disable the OTG capability and force a peripheral-only mode by setting the force device mode bit in the GlobalUSB configuration register (FDMOD in OTG_FS_GUSBCFG) to 1, forcing the OTG_FS core to work as a USB peripheral-only.
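A minimal register-level sketch of the two options described above (not a complete USB init sequence; the symbol names are those used in ST's CMSIS device headers for the F2/F4 families, so treat them as assumptions and check them against your header and the reference manual):

#include "stm32f2xx.h"   /* assumed CMSIS device header for the STM32F205 */

static void usb_fs_vbus_config(void)
{
    /* Ignore VBUS sensing entirely: the core then treats VBUS as always valid
       and the connector's VBUS pin need not be routed to the chip at all.    */
    USB_OTG_FS->GCCFG |= USB_OTG_GCCFG_NOVBUSSENS;

    /* Optionally also force peripheral-only (device) mode so the OTG state
       machine never tries to act as a host.                                  */
    USB_OTG_FS->GUSBCFG |= USB_OTG_GUSBCFG_FDMOD;
}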
H: How do I solder "cable shoes" onto a circuit board? I've got a timer (http://m.ebay.co.uk/itm/221094671882) with male cable shoe connectors (see image). I've connected the timer with other devices using wires w/female cable shoe connectors. Cable shoe connector example. I'd like to attach the timer directly to a circuit board of my own design. What type of connector do I solder onto the circuit board? Should I just take any female connector, strip off the plastic coating, and solder it onto the board...or is there a more specific product for this? AI: You want a female quick connect tab, PCB mount, vertical. Something like this, but check the width is correct, they come in different widths. http://www.keyelco.com/product.cfm/Vertical-Entry/3575/product_id/678 TE call them Fast-on tabs for the male, receptacles for the female. I've been in the electronics industry 24 years and have never heard them called "cable shoes".
H: What are the big plugs for that you find on high demand devices in North America (120V-Areas)? Talking about the NEMA socket system and 120V. High power demand devices like A/Cs do have bigger plugs with some switches on them. What exactly is their purpose? Seems like a fuse, but isn't the socket circuit fused against overload? AI: High power demand devices like A/Cs do have bigger plugs with some switches on them. It sounds like you are talking about the plugs with a built-in Ground Fault Interrupter designed to automatically disconnect the circuit if current flow from the "hot" line is diverted anywhere other than the "neutral". It's not really the "high power" which suggests their presence, but rather the possibility of a current leakage path developing.
H: Standard Relay vs SSR vs TRIAC for Switching CFLs I'm confused whether I should go for a standard electromechanical relay (EMR) or a solid state relay (SSR) to switch CFLs @ 220V with 100W max load. This is for a commercial product with the following objectives: Have at least 7 years of life (of continuous use) Resilient to short-circuits at the outputs of its switching device (SSR, EMR or TRIAC). The main concern here is the failure of the device: SSR will provide an extremely long life (if other components are up to it) whereas an EMR will have a relatively shorter life-span. However, it seems EMRs are more rugged to a short-circuit (till the circuit breaker or fuse pops, at least). A failure mode of CFLs I've found is that they sometimes short-circuit - either the internal wiring insulation fails or some other type of fault due to sub-standard components. I suppose the question becomes: how long can I expect an EMR with a resistive electrical contact life of 100,000 cycles to work if the relay is switched 10 times a day? An alternate approach I had in mind: use a large TRIAC (rated for 30+ amps continuous) and a resettable fuse, rated for 500 mA, in conjunction with it. The fuse interrupts the current flow thereby saving the TRIAC from damage. Since the TRIAC is a heavy-duty one it has a higher likelihood of surviving such a failure. Will the TRIAC allow me to have my cake and eat it too? AI: It might be on the optimistic side to expect the resistive life to apply to a CFL load, however an overrated (eg. 10A or 16A) mechanical relay with any kind of a rating for a tungsten load will probably last just fine with a 0.5A CFL load. If it was purely a resistive load you'd probably get in excess of 10^6 operations. Check the life curves in the relay datasheets.
H: What are UL Contact Ratings in a Relay Datasheet? These UL contact ratings are listed in the datasheet for a Tyco relay: T90 Series, 30A PCB Relay. (Note: the link will ask you to download the PDF, as do most Tyco PDF links) The contact life is different for different loads and this is as expected. However, if you look at the ratings for the 'ballast' load type, notice that at 6A it's specified to 100,000 cycles yet at 3A it's only 6,000. Why so? Don't lower currents usually increase contact life? Furthermore, for CFLs, should I be looking at "ballast" load-type? AI: To answer your question about whether a CFL should be considered a 'ballast', I think not. Very early CFLs had inductive ballasts in them, however they were heavy and bulky and did not last long in the market once electronic ballasts became competitive. I think the rating applies to old-style reactance ballasts and not the modern-day electronic type. It's not uncommon for the N.C. contact rating of a relay and/or the life to be considerably less than the N.O. contacts of the same relay.
H: Costs of microcontroller with GSM module First of all, I have to confess that I am a noob on this topic ;) Currently I try to realize a pet project with a microcontroller, which is connected to a GSM module. I've done some research and found several GSM modules, which can be used with AT Commands - e.g. SIM900 or SIM908. But I only found suppliers selling these modules at 15 to 40 euros. My objective is to develop a hardware, which easily could be scaled in production. But using a GSM module, which costs more than 15 euros, would blow away this idea. Now I have two questions: Unfortunately I have no experience if these modules are so much cheaper if I buy them in higher quantities. How realistic is a price of 1-3 euros per piece and how many would I have to buy? Which components do they use in mobile phones? You can get smartphone for as low as 40 euros. And a smartphone not only includes a GSM module, but also more expensive components (display, etc.). So how do the smartphone manufacturer handle this? Do they use a GSM module, too - or do they implement their own solution. So you see, I am relatively new to this topic. Maybe someone could give me a hint how this industry works? ;) AI: It is possible to make a cell phone, perhaps not a smartphone, for a very low cost. For example, the Nokia 105 supposedly costs only $14.20 to make. But as the article says, this does not include any software costs, licensing or royalty fees (see my comment under the question for more information on that). For a company like Nokia, this could be a fixed cost amortized over millions of devices. The GSM/RF parts are listed as around $5. But you'll never be able to buy these chips yourself for anything close to that. This article indicates the lowest BOM for a smartphone in China is currently $22. Now when we're talking real smartphones, like the Samsung Galaxy S5 and the iPhone 5, things get a little crazy: the BOM for an S5 is estimated to be around $250, and the iPhone 5 around $200. That is why they have to retail for $600 or so.
H: How does voltage drop work? I've been working with electronics for a while now, and I understand how to work with/account for voltage drop. But I'm still mystified as to how it works, particularly with components in series. Say I have two small incandescent lamps in series hooked up to a power supply. Through what physical mechanism is the voltage spread evenly over both lamps, resulting in both lamps running dim, instead of the first lamp consuming all the power and running at full brightness and the second running dark? EDIT: It seems that people were getting hung up on the fact that I was using LEDs in my example and were focusing on whether the LEDs would run at all with lower-than-rated voltage. This wasn't my intention - I am focused on the phenomenon that both drop an equal amount of voltage instead of the lead LED dropping more voltage than the second LED. As a result I changed the example to incandescent lamps which show a closer-to-linear relation between voltage and brightness. AI: Through what physical mechanism is the voltage spread evenly over both lamps, resulting in both lamps running dim, instead of the first lamp consuming all the power and running at full brightness and the second running dark? Ohm's law. Both lamps, presumably having been manufactured similarly, have similar resistances. Given that both have the same amount of current running through them (KCL), they will have similar voltage drops across them.
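A one-line worked version of that (idealised, equal lamp resistances R assumed): the same current \$I = \frac{V}{2R}\$ flows through both lamps, so each drops \$V_{lamp} = IR = \frac{V}{2}\$ and dissipates \$\frac{V^2}{4R}\$ - a quarter of what a single lamp across the full supply would dissipate, which is why both run dim rather than the first one hogging all the power.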
H: Breadboard resistance and TMP36 sensor I got a strange issue with my Arduino, and more specifically with my breadboard. I got the following circuit:
+5V |----[TMP36]-----| GND
         |
         A0
Where TMP36 is a thermoresistor (datasheet). When I got this circuit, everything is fine and I got correct measures (temperature of 24°C), which corresponds to a value of 148 (on 1023), namely 0.72V. Yet, as I would like to add another sensor on my breadboard, I plugged this same sensor between the Power and Ground lines. Here is my schematic:
+5V (Arduino) ------ Power line (+)
                        |
                     [TMP36] ------ A0
                        |
GND (Arduino) ------ Ground line (GND)
Here comes the trouble. I got an output value of 81 (so 0.39V). Pretty bad, as the corresponding temperature is -11°C. I can't see any reason for such a difference, but I'm an electronics newbie. :) Have you any ideas on why such a behavior? EDIT: adding the breadboard schematic: AI: First, this device is not a thermoresistor, better known as a thermistor. It is an integrated temperature sensor that produces a voltage output. Whatever is inside that measures the temperature is irrelevant to you, and quite unlikely to be a thermistor. Second, it's called a schematic. As for why you're not getting the expected reading, the obvious answer is that you forgot to connect the output of the sensor to an A/D input of the microcontroller. You only have power and ground connected. Hook up the second sensor just like the first, except connect its output to a different A/D input. Another issue is that some variants of this device have a shutdown input. Make sure that it either floats to the appropriate level, or that you tie it to the appropriate level explicitly. The datasheet should tell you if it is OK to leave open. Also, you are missing the bypass cap. Put a 1 µF or so ceramic cap across the power and ground of each sensor, as physically close to the sensor and with as short leads as you can manage. I'd probably put a small chip inductor in series with the supply first, then a 10 µF cap around the part. Integrated parts like this usually have some power supply noise rejection capability, but a chip inductor and cap will clean up the power the sensor sees, and are cheap and small.
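For reference, the raw-count-to-temperature conversion the question is doing by hand can be wrapped up in a couple of lines of C (this assumes a 10-bit ADC with a 5 V reference, as on the Arduino in question; the TMP36 outputs 500 mV at 0 °C and 10 mV per °C):

/* Convert a 10-bit ADC reading (0..1023, 5 V reference) from a TMP36 to degrees C. */
static float tmp36_celsius(int adc_counts)
{
    float volts = adc_counts * 5.0f / 1023.0f;   /* e.g. 148 counts -> ~0.72 V       */
    return (volts - 0.5f) * 100.0f;              /* 0.5 V offset, 10 mV per degree C */
}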
H: Using current measurements for smart charging a battery I've been doing some reading on smart charging a battery, and I hope to create a system that can charge batteries with the use of a solar panel. Most of the reading I've encountered utilises the "Negative delta voltage" on NiMH batteries to determine when the battery is nearing its full charge. But from what I've read this voltage drop is very small and hard to detect, so other failsafes, such as timers, are added. I've also read that as a battery charges, it begins to let less and less current through. So my question is, why does nobody seem to use a current measurement on the battery to determine how close it is to being full for smart charging applications? AI: There are a lot of methods used that make use of current information. Coulomb counting, impedance tracking, and other methods. Sometimes there's a battery gauge IC that works in conjunction with a charger to determine state of charge and when to terminate. You can find a lot of information on this on TI's website. Maybe start here: Battery Fuel (Gas) Gauge Overview Then poke around for more info on the various charger ICs.
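To make the "current information" idea concrete, here is a minimal coulomb-counting sketch - not any particular gauge IC's algorithm, just the basic integration such chips perform (the capacity, starting charge and measured current are all invented values):

    #include <stdio.h>

    int main(void)
    {
        double capacity_mah = 2000.0;   /* assumed cell capacity */
        double soc_mah      = 500.0;    /* assumed charge already in the cell */
        double dt_s         = 1.0;      /* current-measurement sample period */

        /* pretend we measure a constant 400 mA charge current for 2 hours */
        for (int i = 0; i < 7200; i++) {
            double i_ma = 400.0;                 /* would come from a sense resistor + ADC */
            soc_mah += i_ma * dt_s / 3600.0;     /* integrate mA*s into mAh */
            if (soc_mah > capacity_mah)
                soc_mah = capacity_mah;          /* clamp; real gauges also re-learn capacity */
        }
        printf("estimated state of charge: %.0f mAh (%.0f%%)\n",
               soc_mah, 100.0 * soc_mah / capacity_mah);
        return 0;
    }

Real gauge ICs combine this with voltage and impedance information, because current-sense offsets and self-discharge make pure integration drift over time.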
H: Spectrum Analyzer vs. numerical FFT-based methods? We sample a signal over time (e.g. voltage fluctuations of a resistor). Then do an FFT-based Power Spectral Density in a PC. What advantage would a physical spectrum analyzer provide over doing the spectral analysis numerically in a PC? (I am NOT interested in a real-time analysis, I only do post-analysis) AI: The main advantage of a true spectrum analyzer is probably dynamic range. Using digital sampling, you have an inherent noise floor due to discretization noise. This floor is somewhere around 1/2^n of your peak-to-peak signal amplitude, where n is the bit size of your discretization. For 8-bit sampling this is about -50 dB below your signal level. Using RF techniques you can easily (but not cheaply) find instruments capable of 150 dB or more of dynamic range. Also, down-mixing may allow more cost-effective testing of high-frequency (GHz and up) signals, which would require excessive sampling bandwidth to test with an FFT-based scheme.
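A quick check of that dynamic-range figure using the standard full-scale-sine quantization SNR approximation, roughly 6.02·N + 1.76 dB for an N-bit converter; treat it as a best case, since real ADC front ends are worse:

    #include <stdio.h>

    int main(void)
    {
        for (int bits = 8; bits <= 16; bits += 4) {
            double snr_db = 6.02 * bits + 1.76;   /* ideal quantization-limited SNR */
            printf("%2d-bit sampling: ~%.0f dB between full scale and the noise floor\n",
                   bits, snr_db);
        }
        return 0;
    }

Even an ideal 16-bit digitizer tops out near 98 dB, still well short of the 150 dB quoted for RF instruments.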
H: Combinational logic delay is greater than clock period Suppose you have a combinational circuit between two registers driven by a clock. What will you do if the delay of the combinational circuit is greater than your clock period? (You can’t resize the combinational circuit transistors). My answer to this is that we should introduce a buffer between the clock and the second flop such that: Tnet + Tc-to-q - Tskew <= Tclk, where Tnet = delay of the combinational circuit, Tc-to-q = flip-flop delay, and Tskew = difference in clock arrival times at the two flops. Is there a better answer? Thanks in advance. AI: Delaying the clock of the following register would steal time from the next stage in the pipeline, imperiling the setup time of the register following that. If you know for a fact that the next stage has a sufficiently short propagation delay it could work, but it would be cause for a lot of caution and explicit warnings in the design documents, lest someone later revise that next stage on the assumption that they have a normal clock interval to work with. A more traditional / textbook answer would be to add an additional pipeline stage by inserting a register somewhere in the middle of the long combinatorial path, breaking it up into two paths each of which will meet timing (or, in an extreme case, inserting multiple registers).
H: What Lithium chemistry in phones' (now) 3.8V batteries? I'm specifically looking for the discharge tolerances... I can communicate with the charge controller on the phone and want to increase the discharge depth from 3.5V to say 3.1-3V for longer battery life. The cell won't sleep until at least 3V or something, damage doesn't occur until 2.7V on the standard lithium batteries used in phones a few years ago, and a replacement is only $12.50 so I want to see what I can do. AI: Modern Lithium-ion Polymer batteries don't like being discharged to 2.7V, and there's little to gain from going below 3.5V anyway because at that voltage there is almost no capacity left.
H: Which wire gauge for a 250 ampere source? I have two 6V 220Ah DC golf cart batteries that I want to put in series to make a 12V 220Ah output that I will connect to a 3000W power inverter. My problem is finding the proper connecting wires for the series hookup between the batteries. This website says I would need a 250 gauge wire: http://www.cerrowire.com/ampacity-charts but I haven't got a clue where to buy it since, when I went to Home Depot and another local electronics store, their wires were mostly from 10 to 50 amperes which makes sense for the normal 120V AC home wiring systems, but not for my situation. A thought I had was to use two 2 gauge wires in parallel between the two batteries, but I literally have no experience so I need help. AI: Current will be determined by the load, not the battery. If you're planning on operating something which requires 250 A continuously, you could run two 4 AWG wires to share the current. However, notice that your inverter probably does not have massive lugs to handle "0000" (quad-aught) or thicker gauge wire. In fact, it probably uses "0" (aught) gauge, if it's similar to this Energizer EN3000 inverter. The manual for this inverter provides a handy gauge for determining what wire to use for your battery bank: Basically, it comes down to continuous vs intermittent operation. You can use thinner wires if you're not loading them fully or using them at high temperatures. The manual also discusses duty cycle so you can determine what inverter to use for various loads. If your inverter supports it, and you plan on running it at 100% of capacity (3kW), then you might want to use two 4 AWG wires (per terminal) to share the current. (I used this current capacity (ampacity) chart (chassis wiring).) If you really want to find the thickest gauge wire for your application, you'll need to visit an electrical contractor supply store. Edit: The wires connecting the batteries can be the same gauge as those connected to the inverter, as the current will be the same: simulate this circuit – Schematic created using CircuitLab Edit 2: Selecting the correct wire for current carrying capacity is based on a variety of factors: Ambient temperature, wire size, airflow (cooling), duty cycle, conductor type, insulation type, etc. Here is an excerpt from the site I linked to for current capacity: As you might guess, the rated ampacities [current capacities] are just a rule of thumb. In careful engineering the voltage drop, insulation temperature limit, thickness, thermal conductivity, and air convection and temperature should all be taken into account. The Maximum Amps for Power Transmission uses the 700 circular mils per amp rule, which is very very conservative. The Maximum Amps for Chassis Wiring is also a conservative rating, but is meant for wiring in air, and not in a bundle. (Emphasis mine.) The value I selected to recommend 4 AWG is based on the Chassis Wiring (135 A), which is for wires in free air (not in a bundle). Power transmission wiring (the other values provided) assumes wiring in a bundle. Note also that my recommendation is using two 4 AWG (2 * 135 = 270) wires if you can't obtain 0 AWG. The temperature given in the chart is the rated temperature of the wire. Wires with higher temperature ratings may safely carry more current. The 75° you are referring to corresponds to a temperature rating of 75°C (167°F). According to your chart, which I assume to be for wiring in bundles (more conservative), a 4 AWG wire can carry 85 A up to this temperature. 
Wiring in home attics, for example, can reach these kinds of temperatures, which is why you would want higher temperature-rated wire. If you were to open up the inverter and look at the wiring that the DC input connects to, you will probably find that it is not 250 MCM. Using anything heavier than what the inverter uses means the inverter itself would contain the "weak link in the chain," so to speak. You only would need the very large gauge wire if you were operating at full power for long durations. Your inverter would probably burn out, unless you have an industrial unit designed for such things. I hope this helps clarify a bit more.
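A rough back-of-envelope for the currents involved, assuming an ideal 12 V battery and a guessed 85% inverter efficiency (both assumptions; the actual wire choice should still follow the inverter manual's table and the duty-cycle discussion above):

    #include <stdio.h>

    int main(void)
    {
        double p_out  = 3000.0;   /* inverter output power, W */
        double v_batt = 12.0;     /* nominal battery voltage */
        double eff    = 0.85;     /* assumed inverter efficiency */

        double i_ideal = p_out / v_batt;           /* ~250 A with no losses */
        double i_real  = p_out / (v_batt * eff);   /* ~295 A once losses are included */

        printf("battery current: %.0f A ideal, ~%.0f A at %.0f%% efficiency\n",
               i_ideal, i_real, eff * 100.0);
        printf("split across two parallel conductors: ~%.0f A each\n", i_real / 2.0);
        return 0;
    }

That is why sustained full-power operation calls for the heavy 0 AWG / 250 MCM class cable, while intermittent or partial loading can get away with less.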
H: Are Peltier elements polarized? I'm looking to add a Peltier element to one of my projects. However, I want to sometimes heat and sometimes cool, depending on the ambient temperatures on both sides of the element. Image from Wikimedia commons. My first thought was to use an H-bridge and then reverse the power for reversing the flow of heat. However, looking at the diagram above, it looks like it may be polarized. (I have no idea, though.) Conclusion one: since you'd be flipping the power, the P-type would act like an N-type and vice versa. This doesn't seem logical, but I've never taken an electronics class, so it may very well be true. P and N probably are treated differently (chemically). Conclusion two: it is polarized because the P-type always has to be attached to the positive side (and vice versa) so you can't flip it. Is either one right? Can I use an H-bridge for this? AI: A TEC is polarized in the sense that how it is connected matters. If you want to be able to heat and cool an element then a full H-bridge will work. This will allow you to pump current in both directions across a TEC's terminals. If you apply a positive voltage to a TEC with one polarity then side A will get warm and side B will cool. If you then reverse polarity then side A will cool and side B will get warm. So if you just need heat pumped in one direction then a half bridge or even direct connection will be fine. Edit: Notice that if you apply a positive voltage from one electrical connection to the other you are starting at "P". If you reverse connections you are still starting from "P" but now hot and cold will be flipped.
H: Upgrading a treadmill controller / console So I have an older model treadmill that I would like to upgrade if possible. The newer treadmills roll to a stop very elegantly, whereas this older model just switches off. Also the console is old, worn out etc. It's a very common brand, EKS Fitness. Under the hood, I see a motor for speed, and a motor for tilt, and a controller, and then of course the console. I've asked and been told that the console and controller cannot be replaced. But I'm skeptical. Surely the Controller and Console together represent 99% of the electronics? The two motors are almost certainly the same across many different models. I don't unfortunately have a whole lot of other parts to try, but if I could simply rewire the motor connections for a new controller/console, I would stand a good chance of success.... no? What am I missing? AI: Background: "Long ago" [it feels] I designed the "downstairs" electronics for several different items of exercise equipment which interfaced to consoles and to various sensors. I've done some troubleshooting on treadmills. What you want to do is potentially possible but the factors identified by @knowhow are relevant and some may make life harder than others. It is reasonably likely but not certain that a manufacturer will use similar motor types between models, but also entirely feasible that they may change suddenly. Many treadmills use DC motors rated in the "few horsepower" range. There is a reasonable chance that a new controller will drive the motor if both use a DC motor. But, as knowhow notes, if a sensor type has been changed or if a different sense current is fed to a device that is current dependent or ... then there could be trouble. If you know enough about how such things work you could add some features to your existing controller. eg instead of hitting "turn off" you could trip a circuit which feeds progressive "slow down" signals to the user speed control inputs to achieve smooth slow down to stop. This could be done entirely at the user interface level and not need much knowledge of internal workings. If functionality is Ok but the console is physically 'deprecated' you could build a gee-whizz looking console and feed signals to the existing board. As passerby notes - the control of a motor may be easy enough and being able to "talk to" the controller board may be all that is needed. His two references are worth looking at. They are: Treadmill motor to drill drive conversion and similar on you tube
H: Eagle Freeware - definition of 'signal layers' According to the Cadsoft website, Eagle Freeware supports '2 signal layers'. Am I right that Cadsoft includes power and ground layers in their definition of 'signal layers'? In other words, am I right that Eagle Freeware doesn't support 2 signal layers, a ground layer and a power layer? AI: The freeware version allows two layers, but what you use them for is up to you. Obviously with only two layers, you have to mix power, ground and signal to some extent within those layers. For projects requiring more than two layers, where having separate power and ground layers is applicable, you will want to consider paid versions instead.
H: Does a logic-level P-MOSFET need a gate resistor? I'm quintuple checking something I'm about to order from a PCB house, so bear with me. I'm using a P-channel MOSFET to power on/off a GPS chip. The chip's grounding is complicated, so I prefer to control it on the high side. Here's the circuit: With the wire going to the bottom being the supply to the GPS chip, and GPS_EN going to an ATmega2560. Can someone confirm that this is wired correctly, and that the gate resistor R22 is needed? I'm pulling the gate up to the source voltage to default the switch to off, and then driving it low through GPS_EN to enable it. I'm going to use something like this MOSFET which has a Vgs of 1V. The ATmega runs off the same 3.3V, so there are no voltage differences there. EDIT: The GPS is expected to have a draw of about 75mA. AI: The gate resistor is normally used with power MOSFETs to limit the magnitude of the current spike that can occur when a driver switches fast and has to charge or discharge a large gate capacitance. With a super fast driver with high drive capability, the current spikes needed to directly drive a large gate capacitance can be measured in amps. A small resistor, such as the 100 ohm resistor that you show, can nicely reduce these current spikes to a reasonable level. A reasonably controlled current surge is not going to cause coupling and upset to nearby circuits. Another reason to use the series resistor in the gate circuit is to keep the current surge within the max drive spec for a wimpy driver circuit such as a microcontroller GPIO pin. The MOSFET that you selected has quite low input capacitance (~75 pF) such that the MCU pin can easily drive the gate without the resistor to limit the current. You mention using this part because it has a "Vgs" of one volt. Do not fool yourself with this spec. A Vgs at 1V only allows a typical drain current of 250uA to flow. You didn't specify the current requirement for the GPS receiver so limited comment can be offered. However you should study the data sheet carefully. Using the graph copied and shown below, you will note that the Vgs of 3.3V offered by your microcontroller drive pin places the operating point somewhere between the lower two curves on the chart. You can reasonably expect the available drain current to be less than 300mA and to be largely limited by the drain-source resistance of the part. Unless your GPS unit runs on very low current (few mA) it is likely that the voltage level seen at the GPS unit supply pin will be less than the 3.3V that you would like to see.
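For a feel for the numbers involved: with the 3.3 V drive, the 100 Ω series resistor from the schematic and roughly 75 pF of input capacitance from the datasheet, the gate charging event is tiny (values copied from the question and datasheet, not measured):

    #include <stdio.h>

    int main(void)
    {
        double v_drive = 3.3;      /* GPIO drive voltage */
        double r_gate  = 100.0;    /* series gate resistor from the schematic */
        double c_iss   = 75e-12;   /* approximate input capacitance from the datasheet */

        double i_peak = v_drive / r_gate;   /* worst-case initial charging current */
        double tau    = r_gate * c_iss;     /* RC time constant of the gate charge */

        printf("peak gate current: %.0f mA\n", i_peak * 1e3);             /* ~33 mA */
        printf("time constant: %.1f ns (gate settles within a few tau)\n", tau * 1e9);
        return 0;
    }

So the resistor caps the surge at a few tens of milliamps for a few tens of nanoseconds - harmless for the ATmega pin, which is why it is optional here but still cheap insurance.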
H: Aren't non-polarized plugs a little dangerous? Why still use them? Sorry if there is an answer for this kind of question somewhere. I can't find it anywhere. Take a look at obvious toaster example: If you live in Europe and have non-polarized plug and a toaster (a bad one without double-pole turn off) - you have a 50% chance of touching the hot wire (via the heating elements or even bread if you're unlucky) if you plug it in the wrong way, so that the off switch turns off just the neutral wire. You will be shocked if you somehow connect your body to ground (through another device for example). I understand that these conditions are not very likely to happen, but why do many countries still not even think about doing away with non-polarized sockets and adopt polarized? I'm sure even in Europe the light-bulb sockets are wired to be neutral-shell polarized, so why not wall sockets? Sure it is not a cheap thing to do, but 10 years later it will be even more expensive since more stuff will be produced without a polarized plug. AI: A couple of things. A very small point, the UK is in Europe, and AFAICR we have had 3-pin plugs, 3-wire cables, since the late 50's. My house was rewired in the 80's, and we have Earth Leakage Circuit Breakers (ELCB) on every circuit. So even if I stuck a fork into my toaster while grabbing a copper water pipe, I'd expect it to trip (I am not willing to back this up with evidence :-) When I have visited continental Europe, I am pretty sure that I've seen the same ELCB technology in use. I suggest that is even more effective than having an Earth connection; after all, if I touched the correct bit of wire in your toaster with my fork, without touching anything else, the Earth connection via a plug would do me no good. Further, unless the device had a metal case connected to Earth, I don't think I am much more likely to touch both Earth and live than just live alone. I imagine the cost of rewiring all of the houses in Europe which have two-wire cabling t have three-wire would be very large. However, upgrading the distribution panel with ELCB is pretty simple (a drop in replacement in some cases for an old fashioned fused unit), and could be caused to happen more easily when electricity metres need replacing.
H: Which software is used for producing those schematic sketches? For a while now I've been finding schematics on the internet drawn in a hand-sketched pencil style. I've tried to search a little bit for the software used to make them but I didn't find it. Any advice? Here are some examples: AI: Those look like they were actually drawn with either a schematic stencil or very carefully by hand. "By hand" is something people used to do before software. With practice, the art can be learned, but that's time-consuming and lame. Just make a schematic in Visio and then apply a sketch filter with Photoshop.
H: Problem in creating one-shot impulse circuit I am a university student and I have a problem to solve. I have to design a circuit that gives just one pulse (not a periodic signal) when the 220V source becomes active. When 220V is applied to the circuit, the output should go high for one second, and this should happen only once, even while the source stays connected. I tried a 555 timer in one-shot mode but it didn't work for me. Also, if possible, the output should be 8-12 volts. Thank you very much. Here is my circuit: AI: The following should work for you, and the LTspice schematic is here if you want to play with the circuit.
H: STM32F103 simple countdown timer (with no interrupts) I'm trying to create a simple countdown timer (without using interrupts) - to use to check for timeout while waiting for an external event to occur. Ideally, I'd like to preload a timer counter with a specific value and have it count down and stop once it gets to zero - so that I can poll for a zero counter value in my while loop. I'm struggling to get the timer to run, and none of the examples I can see in the reference manual is this simple. Can anyone point me towards a simple example of a one shot countdown timer implementation? AI: After some investigation, it seems like I can't have a down counter which stops at zero without using interrupts (as far as I can discover). It also seems that down counters are not possible unless I'm running in centre-aligned or encoder mode. As a result, I've gone for the following implementation, which uses an up-counter. It seems to work fine, although I'm slightly nervous that the counter has the slight possibility to automatically reload before my while loop has seen that it has gone over the threshold - but as long as my check value is well below the automatic reload value I should be fine. There may be better ways to do this.

    /* Set timer prescaler (0 = no divide - clk = 8MHz) */
    TIM2->PSC = 0;
    /* Set reload register well above the longest time that we're interested in */
    TIM2->ARR = 0xFFFF;
    /* Enable the timer */
    TIM2->CR1 |= TIM_CR1_CEN;
    /* Wait for 1ms */
    TIM2->CNT = 0u;
    while (TIM2->CNT < 8000u) {}
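One way to tidy this up is to wrap the polling into a small helper. This is a sketch only, assuming the same CMSIS register definitions as the snippet above and an 8 MHz TIM2 clock; on many F103 clock configurations the timer actually runs much faster, so check (and scale) the tick constant for your setup:

    #include "stm32f10x.h"   /* CMSIS / standard peripheral header, assumed available */

    /* One-time setup, same register values as above. */
    static void timer_init(void)
    {
        RCC->APB1ENR |= RCC_APB1ENR_TIM2EN;   /* make sure TIM2 is clocked */
        TIM2->PSC = 0;                        /* no prescale: counter runs at the timer clock */
        TIM2->ARR = 0xFFFF;                   /* reload well above any delay we care about */
        TIM2->CR1 |= TIM_CR1_CEN;             /* enable the counter */
    }

    /* Busy-wait for roughly 'us' microseconds. Valid up to ~8 ms at an 8 MHz timer
       clock, because the 16-bit counter wraps at 65535. */
    static void delay_us(uint32_t us)
    {
        uint32_t ticks = us * 8u;   /* 8 counts per microsecond at 8 MHz */
        TIM2->CNT = 0u;
        while (TIM2->CNT < ticks) {
            /* spin */
        }
    }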
H: GPS Antenna Selector: How to electronically select one antenna from multiple available antennas? I want to connect multiple GPS antennas to a single GPS module and select one of them based on some logic. Basically, I want the module to receive signals from only the selected antenna. Is there an inexpensive/generic/common IC that can act as a demultiplexer for a GPS RF signal (without distorting it too much)? AI: I believe the component you are looking for is called an RF switch. They are used, for example, for switching the TX path of Bluetooth and WiFi chips to a single antenna. RF switches are quite inexpensive (relative term, I know), and are made for many purposes. The terminology for RF switch classifications is similar to regular switches, i.e. SPDT means single-pole-double-throw, i.e. a single common signal can be routed to two locations. Here's a relatively generic one which works over a relatively large frequency band (incl. GPS) http://fi.mouser.com/ProductDetail/Skyworks-Solutions-Inc/SKY13270-92LF/?qs=sGAEpiMZZMtsfndvJ9ArQ1GAoWUJ3yIM3lKzNTG0W6Y%3d Datasheet: http://www.mouser.com/ds/2/472/200128G-23362.pdf Please note that RF switches are nowadays often minuscule components and can be difficult to solder manually for the inexperienced. Here's a more generic article about RF switches by Digi-Key: http://www.digikey.com/en/articles/techzone/2012/aug/rf-switches-simplify-multi-antenna-systems
H: Why doesn't my opamp relaxation oscillator oscillate? I have designed a relaxation oscillator with an opamp. It is supposed to oscillated at 50Hz, but it doesn't. I haven't built a physical circuit, I'm trying to simulate it in CircuitLab. I calculated the oscillation frequency with the circuit element values in the schematic as $$ f = \left( T_c + T_d \right)^{-1} = 50.17Hz. $$ Where, \$T_c\$ and \$T_d\$ are charging and discharging times of the capacitor respectively; $$ T_c = RC \ln \left( \dfrac{(+12V) - \dfrac{R_2}{R_1 + R_2} (-12V)}{(+12V) - \dfrac{R_2}{R_1 + R_2} (+12V)} \right) = 9.97ms, \\ T_d = RC \ln \left( \dfrac{\dfrac{R_2}{R_1 + R_2} (+12V) - (-12V)}{\dfrac{R_2}{R_1 + R_2} (-12V) - (-12V)} \right) = 9.97ms. $$ What am I doing wrong here? simulate this circuit – Schematic created using CircuitLab Without the \$R_i\$ resistor: With the \$R_i\$ resistor: AI: Simulated oscillators usually don't start on their own, try setting an initial condition to break the feedback loop during the bias point calculation. I can do this with the Pulsonix (SIMetrix) SPICE simulator by adding an initial condition with a value of zero, you should be able to do something similar with the simulator you are using - see the documentation.
H: Measuring current around 20 amps I have an electrolyzer that works with a car battery (12 volts DC) and draws around 15 amps. I have a digital multimeter that can measure 10 amps at most. So how can I measure the current? Maybe I should use a resistor and measure the voltage across it, but I can't think of a resistor that would tolerate 15 amps. AI: I'd get a precision 0.01 ohm power resistor and measure the voltage across that. At 20 amps you'd only have to dissipate 4 watts at the resistor. Alternatively, you could find yourself a Hall effect sensor. The resistor is simpler though. Power resistors usually look like this: Or this:
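The arithmetic behind that suggestion, in case it helps when picking the resistor's power rating (current value taken from the question; the meter's millivolt range and accuracy are the real limits on resolution):

    #include <stdio.h>

    int main(void)
    {
        double r_shunt = 0.01;   /* precision shunt, ohms */
        double current = 20.0;   /* worst-case current to measure, A */

        double v_drop = current * r_shunt;            /* voltage the multimeter reads */
        double p_diss = current * current * r_shunt;  /* heat the resistor must handle */

        printf("drop across shunt: %.0f mV (amps = measured volts / 0.01)\n", v_drop * 1e3);
        printf("dissipation: %.1f W -> pick a resistor rated well above this for margin\n",
               p_diss);
        return 0;
    }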
H: Image sensor resolution I am not sure how to ask this question. Hopefully, I will edit it to make better sense as we go. I have an image with 10 μm wide black and white stripes and I project it onto the imager, with 1000 pixels, 10 μm square each. Do I have to do image magnification in order to achieve 10 μm resolution? That is, being able to resolve black and white stripes? My speculation: I feel like if the line centers are 'in phase' with the pixel centers, I will be able to resolve them. However, if the line centers fall between the pixels, each line will take up two pixels, meaning that I will not resolve them. I cannot change the size of the pixels, but I can change the magnification of the image/stripes. Do I follow the Nyquist principle? In this case, if I understand right, the lines should be at least twice as wide as the pixel width. So, to make it happen, I must magnify the image 2x. Do I understand it right? AI: Essentially yes, you do have a very broad understanding. But a couple of notes: Nyquist needs a little more than 2X, so call it 2X + or perhaps 2.1X or 2.2X. However, you still will see large moire fringes (aliased patterns between the pitch of the object and the pitch in the image plane). How do you get the right lens? Well, magnification will be dependent upon object distance, image distance and the focal length of the lens. You say that you "... have an image with 10 μm wide black and white stripes ..." which I assume means that you already have a lens in front of your imager, so what is needed in this case is that the magnification has to increase by 2X. It is likely that you are dealing with an optical system that has de-magnification (objects are likely to be larger than 10 um - magnification is less than 1). But the 2X (or 2.2X) can be applied. In general this means that you will change the focal length of your lens.
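A small sanity check, treating one black stripe plus one white stripe as a single 20 µm spatial period in the object (my framing, not the answer's): the magnification sets how many 10 µm pixels sample each period, and something like 4 samples per period - i.e. roughly the 2X suggested above - gives margin against the phase and moire issues described.

    #include <stdio.h>

    int main(void)
    {
        double stripe_um = 10.0;              /* one stripe width in the object */
        double period_um = 2.0 * stripe_um;   /* black + white = one spatial period */
        double pixel_um  = 10.0;              /* imager pixel pitch */

        for (double mag = 1.0; mag <= 2.5; mag += 0.5) {
            double samples = mag * period_um / pixel_um;   /* pixels per period at the sensor */
            printf("magnification %.1fx -> %.1f samples per stripe period\n", mag, samples);
        }
        return 0;
    }

At 1x you sit right at the two-samples-per-period limit, which is exactly the phase-dependent case speculated about in the question.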
H: How to get 12V from 9.5-13.9V source I have a couple of lead-acid battery backup power supplies with output voltage from 9.5V (battery discharge cutoff) up to 13.9V (while charging the battery), intended for use with CCTV cameras, but current cameras and DVR allow only 12V ±10% and can't tolerate 13.9V. What type of DC-DC converter can work with an input both lower and higher than its output? Each camera draws up to 0.7A; there are 8 of them connected to the first PSU, plus a DVR drawing 4A on the second PSU. AI: There are lots of possibilities: A buck-boost (which inverts the polarity unless it's of the 4-switch type), a SEPIC, a CUK (also inverting) and many transformer-coupled topologies. For your purposes I would look at the 4-switch type of buck-boost. You can go to any of the major power semi vendors' websites and see if they have an evaluation board that is similar to what you need. TI, Linear Tech, Maxim, ON Semi and Fairchild all should have products available that are close to what you want. Here's a board from TI that is very close to what you want, a little modification should get you the 4A+ output that you need: http://www.ti.com/lit/ug/snva614b/snva614b.pdf You might be able to find something commercially available as well, but if that's the purpose maybe the question is off-topic for this forum.
H: Can a high voltage difference force a MOSFET open? I'm going to assume the obvious answer is no, but I figure I should check with the experts before I go off and waste a bunch of my time. Say you have several MOSFETS in a row controlling the flow through different components like solenoids. If they all share a ground line, and the solenoids all share the same supply line, could too high of a voltage force the MOSFET open? simulate this circuit – Schematic created using CircuitLab AI: could too high of a voltage force the MOSFET open? If by "open" you mean "conducts current", yes. If the source-drain voltage is in excess of a maximum specified in the datasheet, the MOSFET will experience avalanche breakdown. If the current through the MOSFET is not somehow limited, this will result in failure of the device by overheating. For an example, I've picked the very common 2N7000: This particular MOSFET is not rated for avalanche operation, so you do so at your own peril. However, some power MOSFETs, especially those intended for motor drive applications, will specify a maximum energy the device can tolerate in avalanche breakdown conditions. This is especially relevant in your situation, since you say that you are switching solenoids, which are very inductive. When you turn the MOSFET on, current builds in the solenoid until it is limited by resistance, and a strong magnetic field exists. When the MOSFET is switched off, the current must continue flowing somewhere, and the energy stored in the magnetic field must go somewhere else. Since you have not provided any other possible path for the current, when you switch one of your MOSFETs off, the inductive kick of the solenoid will drive the MOSFETs into avalanche breakdown, and quite likely damaging the MOSFETs, or any other components connected to the power supply. The usual solution in this situation is a "flyback diode". I know this has been covered many times on this site, and someone will suggest a better reference, but in addition to searching for "flyback diode" generally, check out Questions About Inductors.
H: MOSFET resistance I'm trying to understand how MOSFET resistances work, but I'm seeing a lot of things that don't always fit together (probably due to my lack of understanding). Specifically the amplifier configurations (CS, CG, CD). Is this correct: Looking into the gate, resistance is infinity. Looking into the source, resistance is r0 Looking into the drain, resistance is r_ds = 1/g_m Is this view flawed somehow? Thanks! Here is an example from Sedra Smith (which is a bit more involved but really, I have no idea what's going on here). For the purpose of determining the close-loop gain of this amplifier with feedback, the A circuit is shown: I understand why r02 is in parallel with Rf, but what is the 1/gm2 resistor doing there? Why is the first transistor a current source and a resistor, and the second a pair of resistors? Edit: on second thought, I don't understand also why the first r0 is at the drain of the first transistor, and the second r0 is at the source of the second transistor. AI: Looking into the drain, the small-signal resistance is $$r_{id} = r_o = \frac{\lambda^{-1}+V_{DS}}{I_D}$$ if the source is at AC common (common-source configuration). If the AC resistance from source to common is \$R_{ts} \ne 0\$, the small-signal resistance looking into the drain is $$r_{id} = r_o \left(1 + \frac{R_{ts}}{r_s} \right) + R_{ts}$$ where $$r_s = \frac{1}{g_m}$$ Looking into the source, the small-signal resistance is $$r_{is} = r_s$$ The above assumes the body is connected to the source. I understand why r02 is in parallel with Rf, but what is the 1/gm2 resistor doing there? The lower right circuit is drawn oddly and further, seems to mix AC and DC sources which is an error. If I were teaching this circuit, I would draw the AC circuit, with Q1 and Q2 replaced by their small-signal T-models, as follows simulate this circuit – Schematic created using CircuitLab Now, is it clear why \$r_{s2} = \frac{1}{g_{m2}}\$ is there? Edit: on second thought, I don't understand also why the first r0 is at the drain of the first transistor, and the second r0 is at the source of the second transistor. \$r_o\$ connects to the drain and source. Since, for Q1, the source is grounded, \$r_{o1}\$ connects from D1 to ground. Since, for Q2, the drain is AC grounded, \$r_{o2}\$ connects from S2 to AC ground.
H: How to create a low-voltage boost (~4V to 5V) I am looking for a circuit that will go from a Li battery that has a voltage between 3.6V and 4.3V up to 5V - and I have seen similar questions such as this and this, but those just recommend alternatives instead of providing a solution to what seems like a common problem. More information: I am looking to keep a Raspberry Pi alive, which needs a stable 5V (not less!) at 1A minimum in order to keep it fully functional (yes it can run at 3.3V, however it loses Ethernet functionality, which makes it not useful for my purposes anymore) AI: There are any number of commercial boost regulator ICs that will meet your needs. Note that the peak switch current for a boost converter is greater than the load current, so you will want one whose switch is rated for more than 1A. IOUT * VOUT/VINMIN * 1.2 is a good starting estimate (about 1.7A here). Anything greater than 2A would probably work for you. Check out the LT3436 or LT3580, to give just two examples.
H: Beating (possibly) caused by different sampling frequencies? So I plugged a signal generator at 0.06 V peak to peak and frequency 1 Hz into my apparatus. The signal is processed in two steps: 1) signal fed to a first computer, which outputs stuff at sampling frequency 10 Hz, 2) the output from the first computer is fed to a second machine (sampling frequency 16 Hz), and then plotted. This is what I see: The peak to peak voltage is twice the one I selected but that might be caused by a load resistance different from the one that the signal generator is expecting (but if you can explain this quantitatively I would be extremely grateful). Why is there beating? Could it be due to the difference in sampling frequencies? Thanks. AI: This is probably just a manifestation of your actual sample rate and your input wave not sharing a common divisor. Here's a quick Matlab example, with a sample rate of 8Hz and a frequency of 0.92 Hz.

    >> time=[0:50]/8;
    >> f=0.92;
    >> out=sin(2*pi*f*time);
    >> plot(time, out);

It's not as dramatic as your example, but given time, and your funny resampling, I'm sure I can come up with something as dramatic. Meeting Nyquist sampling criteria does not mean that simply plotting the data will yield perfect reconstruction-- it just means that it's possible to mathematically reconstruct the original signal. Watch what happens when I up my input signal!!

    >> f=2.3;
    >> out=sin(2*pi*f*time);
    >> plot(time, out,'-*')

At 2.3 Hertz, I'm sampling (8Hz) at more than three times the input freq, but the result still looks messy. Resampling will only make things messier (but that's a bit much to simulate here, as Matlab resampling commands will upsample by interpolation, so doing it the way you're doing it is more complex)
H: Why is the LED of my Illuminated rocker switch always on? I recently purchased a rocker power switch from RadioShack (103-R13-135B-02R-EV) and it works great. However, even when the switch is off, the small red LED stays on. Is this normal? Won't this unnecessarily drain my 9V battery? AI: I believe that this is the internal schematic of your switch: Based on that, if you want to wire the switch so that the light turns on and off with the switch, you would wire it as follows: Pin 1: +Voltage power in Pin 2: Switched power for your circuit Pin 3: 0 volts I'm guessing that you have pin 1 and 2 reversed, which would cause the LED to be always on. And to answer your question, yes, if the LED is on, you are draining your battery. :)
H: Large Wires on a Perfboard I am working on a project on perfboard which involves controlling a motor that draws about 10A, and I am using 18AWG stranded wire to connect the MOSFET on board to the battery and motor which are off board. The problem is that the wires are too large to fit in the holes, and if I cut some of the strands short so that the wire fits in the hole, the other strands work their way out of the insulation and are in danger of touching nearby components. What I'm doing now is laying the wire flat across a few holes and soldering it surface mount, which seems to work better, but I'm worried about pulling the copper right off the board. The oval shapes on the boards above also seem to be useful for this purpose as there's more copper to attach to. What's the best way to solder power wires to a piece of protoboard? How do you handle strain relief? AI: Insert and solder pins, tin the wire ends, and wrap them around the pins a couple of times before soldering. For strain relief, put the wires against the board and secure them with pieces of solid wire inserted in a couple of adjacent holes and twisted a few times on the other side, to keep the stranded wires in position.
H: Simple circuit to fade in / fade out LEDs randomly I have a group of 8 blue LEDs (Vf 3.2V @ 20 mA) currently powered in parallel through resistors (one per LED) and a 4.5V power source (3xAA batteries). I've been playing with fade in/out effects and 555 circuits. What would be a simple circuit to randomly (or pseudo-randomly) fade them in and out independently? I mean, when some LEDs are increasing in intensity others could be decreasing following no specific pattern (the fade in/out cycle time could be different as well). Ideally, I'd like the fade-out effect not to be complete. In other words, more than a fade in/out effect what I'm looking for is a circuit that can slightly vary the light intensity of each LED randomly (e.g. fluctuating the current from 3mA to 20mA). I would appreciate any hints or pointers. A detailed explanation or circuit diagram is not necessary (but it would be awesome!). AI: For fading you'll probably want varying PWM. Generally you'd want to do this with microcontrollers. The Arduino is a very simple platform that is cheap and easy to use. I don't recommend it if you really want to learn how microcontrollers work, but if you just need something to get work done, it's definitely your best option. It has built-in PWM functionality, as well as math functions (like a random-number generator, which can be used to determine which LED to control), and the Uno (probably the most common model) has 6 PWM pins. You could also go for the Mega if you want more PWM outputs. You could also do this with multiple 555 timers (one per LED), but this seems like a lot more work than simply using an Arduino. However, if this interests you, here's the schematic: This site explains it a bit more: http://www.555-timer-circuits.com/up-down-fading-led.html You can change the capacitor value to adjust the fading speed. By offsetting this speed slightly, it will appear that the LEDs are fading in and out randomly.
H: What effect does Gamma radiation sterilisation have on embedded electronics devices We just received back 3 pieces of equipment that may have been inadvertently put through a Gamma radiation sterilisation process. The equipment is a low-power embedded device consisting of a 16 bit microcontroller with peripheral FLASH memory and SRAM backed up by a lithium battery. The equipment appears completely dead. What possible effects would Gamma radiation have on embedded devices? FLASH memory corruption? Circuit degradation? Component failure? More info: The equipment is three nephelometer instruments that we make, which use a quite expensive COTS logger as the brains. The instruments were sent back to us from overseas along with some sediment samples. Customs / quarantine decided that they wanted to sterilise the sediment samples, and I think they must have put the instruments through as well. AI: The prognosis is probably not too flash [groan]. Depending on energy of radiation, device death ranges from possible to essentially certain. You can potentially expect all the effects you described. Even if apparently still operating you may get increased leakage currents and 'general unhappiness'. What was the gamma source? what energy?, what distance? Why? Radiation Damage in Electronic Memory Devices PDF HTML Says: ... Obtained results show that gamma radiation causes decrease in threshold voltage, being proportional to the absorbed dose of radiation. EPROM & EEPROM: ... Gamma radiation causes generation of electron-hole pairs in SiO2 insulator of the gate. The number of generated pairs is directly proportional to the energy deposited in material, depending on the total absorbed dose of radiation [8, 14]. ... Conclusion ... Based on analysis of data gathered from performed experiments, the exposure of semiconductor memories to gamma radiation causes three effects: holes being captured in trapping sites of an oxide, injection of holes from oxide into FG, and emission of electrons through FG-oxide interface. The generation of electron-hole pairs leads to trapping of positive-charged carriers (holes) in insulator, causing negative shift in characteristics. Namely, positive-charged carriers induced by gamma radiation require the increase of negative gate voltage to compensate the positive charge. It means that gamma radiation causes decrease in threshold voltage, being proportional to the absorbed dose of radiation. NASA - Chips in space
H: current leakage in a laptop Current leakage is apparently an issue with some aluminum Macbooks and other laptops. https://superuser.com/questions/462244/electric-shock-mild-vibrating-sensation-on-macbook-pro-when-charging Some of the threads related to this topic advise definitely not ignoring what seems like a minor nuisance: https://superuser.com/a/421029/108081 Putting aside the risk of the typical problem, could this be a greater danger with a voltage spike, like during a lightning storm? Isn't there a surge protector inside the Magsafe adapter? I've noticed the "hum" effect for a long time but only recently realized it was current leakage. Regardless of whether or not the Magsafe's ground pin is defective, the 3-pronged cable is not being used, or the outlet isn't properly grounded, if the "hum" is still felt, doesn't this mean there is a defect inside the laptop? If there were no defect, would proper grounding be unnecessary, as far as risk goes where the Magsafe is the current source? AI: The problem is that in the absence of any other factors, the secondary of a power transformer will tend to float at half the line voltage, because of the leakage current through the primary-to-secondary capacitance. It really doesn't matter a whole lot whether it's a line-frequency transformer or a high-speed switching transformer, although the capacitance in the latter should be somewhat less. Normally, this leakage current is conducted directly to ground via the third wire in the line cord, but in a 2-wire cord, this is not possible. Since the impedance of a capacitor drops with increasing frequency, yes, it could be a concern that the leakage current could be much higher in the presence of fast transients on the primary side. A surge protector inside the power supply will limit the differential voltage across the transformer primary, but it won't have any effect on common-mode voltage offsets. A common-mode choke will have some effect on the latter, but probably not enough to be considered a safety feature; it's mainly there to prevent switching noise from inside the power supply getting back out onto the line. An electrostatic shield inside the transformer that's tied to the Neutral side of the line input could reduce the leakage considerably, but with nonpolarized plugs (and the possibility that the wall socket is miswired) there'd be a 50-50 chance that it would make the problem worse rather than better.
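To put a number on that capacitive leakage path, here is a rough estimate with an assumed 1 nF of primary-to-secondary capacitance (an invented but plausible figure; the real value depends on the transformer and any Y-capacitors). The current scales directly with frequency, which is why fast transients are a bigger worry than the 50/60 Hz "hum":

    #include <stdio.h>

    int main(void)
    {
        const double pi = 3.141592653589793;
        double c_ps = 1e-9;    /* assumed primary-to-secondary capacitance, F */
        double v    = 115.0;   /* roughly half the line voltage appearing on the chassis */

        double freqs[] = { 50.0, 60.0, 100e3 };   /* mains vs. a fast transient's content */
        for (int i = 0; i < 3; i++) {
            double i_leak = v * 2.0 * pi * freqs[i] * c_ps;   /* I = V / Xc */
            printf("%8.0f Hz: leakage on the order of %.3g A\n", freqs[i], i_leak);
        }
        return 0;
    }

Tens of microamps at mains frequency is the tingling "hum"; the same capacitance driven by a 100 kHz transient component could pass a thousand times more.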
H: Logic design to circuit -- how is it generally done? So I've designed this CPU with purely logical components (basic logic gates), now how would I go about converting it into a circuit? I know that logical gates can be implemented using transistors, but it seems pretty cumbersome to do the equivalent conversion for every gate (there are a couple hundred). In addition to this, adding the required resistors and other passive components due to physical constraints seems a bit cumbersome. Is there a secret to how this is generally done? AI: For practical purposes of getting it working on your desk, you probably want an FPGA. In order to make sure it works, you should simulate it first with a program like Modelsim. This enables you to iron out the bugs before buying any hardware.
H: How do I activate the relays on this relay bank? I purchased 2 "Mechanical Relay Board, 8 gang, 10A 220VAC, 24VAC 30ma coil" relay banks that I am trying to control from a Raspberry Pi. To do this I've built out 2 595 shift registers and I'm using an external power source, not the power from the Pi. Testing with LEDs, everything is OK, but for the life of me I cannot get the relays to activate. I figured that maybe the input was too small, not enough mA to activate it. I'm supplying 12V DC from a 12v 1A transformer to power the relay bank and I tried to activate the relays using 5v from USB, 5v from 9v battery running through a 5v regulator, 9v directly from battery and also 12v directly (from the transformer used to power it). Nothing. I notice that when I test the voltage across the input power terminals, it reads very low (0.10v) but when I unplug the adapter from the relay bank, it reads 12v. So when the power is plugged in to the relay bank, I'm imagining that the bank is causing a fault in the transformer? I noticed that when I had the 9v battery plugged in to the bank as the power source it got really hot. What am I doing wrong? I changed the jumpers to both positions and tried the same tests on both setups. Datasheet AI: I notice that when I test the voltage across the input power terminals, it reads very low (0.10v) but when I unplug the adapter from the relay bank, it reads 12v. The relay board doesn't have 'power input terminals'. If you mean the two common terminals, they are both connected to the relays' common connection. So if you connect a supply across them, then you're shorting it out and measuring the drop along a PCB trace. The datasheet says you can run at 12V DC by setting the jumper (presumably removes a half-wave rectifier from the circuit), connecting +12V to the common, then connecting the terminal for the relay you wish to switch to 0V. You don't seem to have tried this. If you want to run it from Arduino logic levels, use a FET which turns on below 5V and connect the FET between the relay terminal and ground while supplying 12V DC to the common terminal.
H: How to check that a blast furnace is full? I'm working on a blast furnace; I need to check whether it's filled with material at levels one, two and three as shown: The main problems: It's very hot as you see in the picture (200°C to 900°C), thus I need to protect my sensors from this heat, and I also need them not to be affected by the emissions. Can I use IR sensors, a camera or ultrasonic sensors to solve this problem? Update 1: The oven is used to melt down limestone. I'm afraid that the heat could damage the sensors or that dropped stones may hit them. AI: A radar sensor might work for you. This is a commercial 10GHz sensor. Since the temperature at the top is not too bad for many dielectrics, you could design and construct a radome with purge air that would keep the worst of the fumes away from the sensor.
H: Electric fridge running on a car battery I'll be off-grid for some days and I'm planning to have a small electric fridge with me. It's a 12V 4A small fridge with a cigar lighter plug, but I won't have a car with me, so I think I could connect it to a 12V car battery? My questions are, does my plan make any sense? Also what do I need to know so I can figure out how long the fridge can run before the battery runs out? AI: It does make sense. Those 12V car fridges are usually not very effective though, enough to chill a few beers but they will not freeze raw meat. How long it will work depends on the battery capacity and environment. For example if you have a regular lead-acid car battery with 75Ah capacity then it should be enough for roughly 75Ah/4A ≈ 18-19 hours of operation, but with good insulation and a well-balanced thermostat it will be longer, as it probably will not run all the time but will chill to some set temperature and then switch off until its inner temperature rises above a set limit, so it is very hard to tell how long it will work in your case. NB: transporting lead-acid batteries is risky; you need to make sure that you don't spill acid from them.
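A rough runtime estimate you can adapt, assuming a 75 Ah battery as in the answer and guessing at a thermostat duty cycle. It is also worth remembering that discharging an ordinary lead-acid starter battery far below about half its capacity shortens its life, which cuts the usable figure further:

    #include <stdio.h>

    int main(void)
    {
        double capacity_ah = 75.0;   /* assumed car battery capacity */
        double draw_a      = 4.0;    /* fridge current from the question */
        double duty        = 0.6;    /* guess: cooling element runs 60% of the time */
        double usable      = 0.5;    /* keep to ~50% depth of discharge for battery health */

        double hours_naive  = capacity_ah / draw_a;
        double hours_duty   = capacity_ah / (draw_a * duty);
        double hours_gentle = capacity_ah * usable / (draw_a * duty);

        printf("continuous draw:      ~%.0f h\n", hours_naive);    /* ~19 h */
        printf("with 60%% duty cycle:  ~%.0f h\n", hours_duty);     /* ~31 h */
        printf("and 50%% max DoD:      ~%.0f h\n", hours_gentle);   /* ~16 h */
        return 0;
    }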
H: Are SR latches suitable to be used as RAM? I've just recently started to become interested in electrical engineering after a couple years of programming and software development. I've built a 4 bit ALU using a logic gate simulator and now I want to store a previous answer from the ALU into some memory cells. I've been reading about flip flops and latches, and I've built an SR latch and it seems that I can easily store my answer in 4 of these latches, my question is, is this an efficient way of storing memory (considering the fact that I have no formal training or much understanding so I can't delve into optimization techniques just yet) or should I implement a clock triggered SR flip flop with clock input? How is the data read from the latch on demand? AI: "Clock triggered SR flip flop with clock input" is basically a "D type flip flop". This is a very standard way of doing local storage within digital logic. You don't explicitly read from it, it continuously outputs the stored value. If you have an array of them you need selectors of some sort (multiplexors, tri-state outputs, crossbar switches) to choose which one to use as the input of the next logic stage. Edit: there's a question of scale. Flops are suitable for storing small amounts of data local to some logic. The next step up is SRAM: http://tams-www.informatik.uni-hamburg.de/applets/sram/ ; which is practical for a few hundred kilobytes. Beyond that DRAM, which uses a single transistor's gate capacitance to store each bit of data in. DRAM can be made extremely dense, but for manufacturing reasons usually has to be on a separate chip. See also How does random memory access of RAM work?
H: What does it mean for a transistor to be biased beyond cutoff? I understand what it means for the transistor to be biased for Class A, AB, and B with regards to the location of their DC operating points (\$I_{C_Q}\$ and \$V_{CE_Q}\$ are found by finding the point of intersection of the DC load line and the transistor characteristic curve). For the Class C amplifier, the sources I've read say that the transistor must be biased beyond cutoff, but then I don't understand how this can be since there is no characteristic curve beyond the cutoff: Also, does this mean this type of biasing forces a current (and with an opposite direction with respect to normal operation) through the transistor? AI: First, remember that the load line drawing solves a particular set of equations. Where the lines cross gives the the operating point for that combination of power supply, load resistor, and transistor base current. Second, it's correct that there is no characteristic curve for the BJT that goes through the region you circled. The reason is conservation of energy. If the BJT operated in that region, it would mean that the BJT was delivering energy to the circuit, rather than taking energy provided by the power supply and turning it into heat. Since a BJT doesn't contain a reserve of energy that can be released in steady-state conditions, it simply can't operate in that region. There is, however, probably a small region right near the origin where the transistor characteristic curves do pass through quadrant IV of the graph. Consider this circuit: simulate this circuit – Schematic created using CircuitLab This is essentially taking the resistor/power-supply load line and moving it down for the case where Vcc goes to 0. In this case, the base-collector junction will be forward biased and some power from the base bias supply will be delivered to the load resistor. And the load line will give a solution in quadrant IV, but very close to the origin. If we were talking about a MOSFET instead of a BJT, even this solution would not be possible, since there's no way for current to transfer from the gate side to the drain side of the FET.
H: Russian variable resistor datasheet Can you please help me find the datasheet for this 2.2 ohm variable resistor? I want to know if it can tolerate 20 amps. If you recognize the logo that would help a lot. AI: Here's the datasheet for the family. Here's how you read it: Wirewound variable resistors, available as frameless trimmers (shaft versions «Б», «В», «Д», «Е») and adjustable types (shaft versions «А», «Г»), with rotary adjustment of the moving contact, for surface mounting, intended for use in DC and AC circuits at frequencies up to 1000 Hz. The part in the picture above is rated for 25W nominal/long-term power.
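Whatever the exact part turns out to be, a quick power check shows why 20 A through a small trimmer is almost certainly out of the question - simple worst-case math, ignoring the wiper's own current limit, which on variable resistors is usually the tighter constraint:

    #include <stdio.h>

    int main(void)
    {
        double r_max  = 2.2;    /* full resistance of the pot, ohms */
        double i_load = 20.0;   /* desired current, A */
        double rating = 25.0;   /* nominal power rating from the datasheet, W */

        double p = i_load * i_load * r_max;   /* dissipation with the whole element in circuit */
        printf("20 A through 2.2 ohm dissipates %.0f W against a %.0f W rating\n", p, rating);
        printf("even with only 10%% of the element in circuit that is still %.0f W\n", p * 0.10);
        return 0;
    }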
H: Powering a Raspberry Pi using an ESC Currently I am building a quadcopter with a Raspberry Pi and I have four 20A Turnigy Electronic Speed Controllers (ESCs). I was wondering: could I use an ESC's red wire to power my Raspberry Pi? (Would tying the 4 red wires from the ESCs together meet the current requirement?) AI: It seems that you should be very careful using the BEC (Battery Eliminator Circuit) on your ESC. The hobbyking link says it outputs 5.5V while here it seems that the raspberry is very, very picky about input voltages. In my opinion 5.5V would be no harm at all but the specs were written by an engineer and it's better to trust them (on average). You can either search for better specs (these FAQs are not a specification) hoping for the best, try your luck hooking the BEC and the raspi together, or search for an LDO regulator such as this, that seems to be in the 450mV range of dropout voltage sourcing 1A@5V. Keep in mind that 500mV is a very, very low headroom to put a regulator in between, but searching makes miracles sometimes. I'd go for the last option, you want a nice 'n stable supply for the raspi anyway, and for $1 you get the regulator and the necessary capacitors.
H: How does a decoupling capacitor handle a spike (increase) in the voltage from the power supply? From what I understand, when there is a drop in the voltage the current is drawn out of the capacitor and so the balance is maintained. But when there is an overvoltage from the power supply or when the load draws more current how does the decoupling capacitor balance it? The capacitor is already fully charged. So when the load draws more current, doesn't that current get drawn from the capacitor/power supply? AI: You can think of a decoupling capacitor as a toilet tank. If you just had the water pipe feeding the toilet you would never have enough volume of water to flush the toilet. The tank is a local storage area for extra water. When you flush it, it gives a large supply of water at once and allows a flush to happen. Then when the toilet is done flushing, the tank gets filled back up. It's not quite the same with a capacitor, but similar. When your load needs more current, the capacitor will source some extra current. This causes a slight dip in voltage but not a huge one. If there wasn't a capacitor there, if the load needs a lot of current for a moment, the voltage will droop a lot because the load resistance would have dropped and power supply resistance would consume too much of the voltage. Now for the other case: voltage spikes. A capacitor is never really "full". There's a maximum voltage it can handle, but usually that should be at least 25% higher than the normal operating voltage. Let's say the supply voltage is 5V, the capacitor should be able to handle at least 6.25 volts. That means that when there's a voltage spike coming down the line, the capacitor will absorb some of the extra current caused by the voltage and quench the incoming voltage spike to be much less than it would be otherwise. In this case, you should think of a capacitor like a flexible membrane attached to the side of a water hose. If there's ever a higher pressure transiently in the water hose, the flexible membrane (kind of like a shock absorber) would absorb the excess pressure and allow anything further down the line to not see the excess pressure (voltage) spike because it would have been absorbed by the membrane. Bypass capacitors work in a similar way. Also note that when the load needs more current transiently, it will absorb current from both the capacitor and the power supply. It's just that the transient seen from the power supply will be distributed over time more.
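For a concrete feel for the "toilet tank" picture, here is how far the local rail sags when the load suddenly pulls extra current that the upstream supply cannot deliver immediately. The numbers are invented for illustration: 10 µF of local decoupling, a 100 mA load step, and a 1 µs gap before the supply catches up.

    #include <stdio.h>

    int main(void)
    {
        double c_decouple = 10e-6;   /* assumed local decoupling capacitance */
        double i_step     = 0.1;     /* sudden extra load current, A */
        double t_gap      = 1e-6;    /* time before the upstream supply responds, s */

        /* dV = I * dt / C : the charge pulled from the cap appears as a voltage dip */
        double droop = i_step * t_gap / c_decouple;
        printf("rail dips by about %.1f mV during the transient\n", droop * 1e3);
        return 0;
    }

Ten millivolts of sag is invisible to most logic; without the capacitor, the same current step would have to come through the supply's series impedance and the dip would be far larger.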
H: How much power/intensity IR Diode should have to be seen from 30-50 meters How much power/intensity IR Diode should have to be seen from 30-50 meters by camera with IR pass filter? Must be a good point in a photo. Should be seen in a day light. Preferably with wide angle. About power supply: 2-4 AA accumulators, should work at least a hour :) Camera: I think 2-3 megapixel should be enough? Maybe more. AI: You may be in luck. On eBay you can find 3-watt IR LEDs quite cheaply, such as 10pcs 3w 850nm infrared IR LED for night vision camera with 20mm Star PCB. Your AA cells must be long-life alkaline or NIMH rechargeables. Two AAs in series should drive a single LED with a 1.5 to 2 ohm limiting resistor for a couple of hours. And the resistor must be a 2-watt or higher. Yes, I know, it's grossly inefficient, but it's simple and ought to work without more electronics experience than I suspect you have. Let's run a few numbers. Let's say your camera field of view is 30 degrees. A single pixel is then 30 / 2048, or .015 degrees. At a range of 50 meters this covers about 13 mm, assuming perfect focus. Sunlight has a maximum intensity of about 1 kw / square meter, so the light reflected from a very white surface at one pixel will have a maximum power of about 0.17 watt total. IR makes up about 1/2 of sunlight's power, and let's say the camera response for a red pixel includes 20% of the visible and 20% of the IR. Then the overall red pixel power will be ~ .07 watt / pixel. Since the data sheet for the sensor which you gave has no data at all about the response in IR, this is purely speculation, but it's the best I can do. Certainly the camera can't see all that far into the IR. If it can, it will accept more IR power, which is bad for you. Now think about the LED. Let's say that the LED has an efficiency of 20% - that is, it puts out .6 watt of IR. This is probably close. The emitter size is clearly less than a pixel, so you can compare it to the reflected sunlight. It will actually be somewhat higher than that if you are within the LEDs 135 degree field of view. And the LED puts out about 9 times more power than the camera ought to see from reflected sunlight, assuming no specular surfaces. Like I say, you may be in luck. If you go with a wider field of view, the pixel size increases, so the sunlight power/pixel increases, but the emitted LED power does not. So, for instance, going to a 60 degree field of view should cut your LED/sunlight ratio to about 2 to 1, which may or may not be adequate for reliable detection. This is all very rough, and some of my assumptions may be off, but the numbers look close enough to warrant spending a few bucks on some LEDs. But don't go promising that you can make it work until you've actually tried it. ETA - I thought I'd mention that the specific IR LED I linked to may not be acceptable. Without knowing the response of the camera, I'd recommend getting an LED as close to visible as possible.
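For anyone wanting to redo these estimates with their own lens and range, the geometry part is easy to script. The inputs below simply mirror the assumptions made above (30° field of view, 2048 horizontal pixels, 1 kW/m² sunlight) and reproduce the ~13 mm footprint and ~0.17 W per-pixel figures:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double pi = 3.141592653589793;
        double fov_deg  = 30.0;     /* assumed horizontal field of view */
        double pixels   = 2048.0;   /* assumed horizontal resolution */
        double range_m  = 50.0;     /* distance to the LED */
        double sun_w_m2 = 1000.0;   /* peak solar irradiance */

        double pix_deg = fov_deg / pixels;                      /* ~0.015 deg per pixel */
        double foot_m  = range_m * tan(pix_deg * pi / 180.0);   /* patch one pixel sees */
        double p_sun   = sun_w_m2 * foot_m * foot_m;            /* sunlight power on that patch */

        printf("one pixel subtends %.4f deg -> %.1f mm at %.0f m\n",
               pix_deg, foot_m * 1e3, range_m);
        printf("full-sun power falling on that patch: ~%.2f W\n", p_sun);
        return 0;
    }

From there the comparison against the LED's optical output (and the camera's unknown IR response) follows the same hand-waving as above.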