
Commit

tweaks and minor fixes to docs (#1540)
oskooi authored Mar 31, 2021
1 parent 899f6d0 commit 0b460cd
Showing 12 changed files with 22 additions and 45 deletions.
2 changes: 1 addition & 1 deletion doc/docs/Acknowledgements.md
@@ -5,7 +5,7 @@
Authors
-------

- Meep originated as part of graduate research at [MIT](https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology) with initial contributions by [Steven G. Johnson](http://math.mit.edu/~stevenj/), [Ardavan Oskooi](http://ab-initio.mit.edu/~oskooi/), [David Roundy](http://physics.oregonstate.edu/~roundyd/), [Mihai Ibanescu](https://www.linkedin.com/in/mihai-ibanescu-2b147825/), and [Peter Bermel](http://web.ics.purdue.edu/~pbermel/). Currently, the Meep project is maintained by [Simpetus](http://www.simpetus.com) and the developer community on [GitHub](https://github.com/NanoComp/meep). [Christopher Hogan](https://github.com/ChristopherHogan) and [M.T. Homer Reid](http://homerreid.dyndns.org/) lead the development of the [Python interface](Python_User_Interface.md), [mode-decomposition feature](Python_Tutorials/Mode_Decomposition.md), and [GDSII import routines](Python_Tutorials/GDSII_Import.md). M.T. Homer Reid and [Alec Hammond](https://github.com/smartalecH/) developed the [adjoint solver](Python_Tutorials/AdjointSolver.md). [Alex Cerjan](http://www.alexcerjan.com/) assisted with adding support for saturable absorption via [multilevel atomic gain media](Materials.md#saturable-gain-and-absorption). Alec Hammond developed the [visualization module](Python_User_Interface.md#data-visualization). [Yidong Chong](http://www1.spms.ntu.edu.sg/~ydchong/bio.html) and Alex Cerjan added support for [gyrotropic media](Materials.md#gyrotropic-media).
+ Meep originated as part of graduate research at [MIT](https://en.wikipedia.org/wiki/Massachusetts_Institute_of_Technology) with initial contributions by [Steven G. Johnson](http://math.mit.edu/~stevenj/), [Ardavan Oskooi](http://ab-initio.mit.edu/~oskooi/), [David Roundy](http://physics.oregonstate.edu/~roundyd/), [Mihai Ibanescu](https://www.linkedin.com/in/mihai-ibanescu-2b147825/), and [Peter Bermel](http://web.ics.purdue.edu/~pbermel/). Currently, the Meep project is maintained by [Simpetus](http://www.simpetus.com) and the developer community on [GitHub](https://github.com/NanoComp/meep). [Christopher Hogan](https://github.com/ChristopherHogan) and [M.T. Homer Reid](http://homerreid.dyndns.org/) lead the development of the [Python interface](Python_User_Interface.md), [mode-decomposition feature](Python_Tutorials/Mode_Decomposition.md), and [GDSII import routines](Python_Tutorials/GDSII_Import.md). M.T. Homer Reid and [Alec Hammond](https://github.com/smartalecH/) developed the [adjoint solver](Python_Tutorials/Adjoint_Solver.md). [Alex Cerjan](http://www.alexcerjan.com/) assisted with adding support for saturable absorption via [multilevel atomic gain media](Materials.md#saturable-gain-and-absorption). Alec Hammond developed the [visualization module](Python_User_Interface.md#data-visualization). [Yidong Chong](http://www1.spms.ntu.edu.sg/~ydchong/bio.html) and Alex Cerjan added support for [gyrotropic media](Materials.md#gyrotropic-media).

Referencing
-----------
2 changes: 1 addition & 1 deletion doc/docs/Build_From_Source.md
@@ -91,7 +91,7 @@ If you are not the system administrator of your machine, and/or want to install

### Python

- If you have Python on your system, then the Meep compilation scripts automatically build and install the `meep` Python module, which works with both the serial and parallel (MPI) versions of Meep. Note: Meep's [visualization module](Python_User_Interface.md#data-visualization) includes animation routines which require [matplotlib](https://matplotlib.org/) version `3.1`+ and the [adjoint solver](Python_Tutorials/AdjointSolver.md) requires [autograd](https://github.com/HIPS/autograd).
+ If you have Python on your system, then the Meep compilation scripts automatically build and install the `meep` Python module, which works with both the serial and parallel (MPI) versions of Meep. Note: Meep's [visualization module](Python_User_Interface.md#data-visualization) includes animation routines which require [matplotlib](https://matplotlib.org/) version `3.1`+ and the [adjoint solver](Python_Tutorials/Adjoint_Solver.md) requires [autograd](https://github.com/HIPS/autograd).

By default, Meep's Python module is installed for the program `python` on your system. If you want to install using a different Python program, e.g. `python3`, pass `PYTHON=python3` (or similar) to the Meep `configure` script. An Anaconda (`conda`) [package for Meep](Installation.md#conda-packages) is also available on some systems.
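A quick, optional check that these Python prerequisites are in place (a minimal sketch; the printed messages are illustrative only):

```python
import matplotlib

# The visualization module's animation routines require matplotlib >= 3.1.
print("matplotlib", matplotlib.__version__)

try:
    import autograd  # required by the adjoint solver
    print("autograd is available")
except ImportError:
    print("autograd is not installed")
```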

6 changes: 1 addition & 5 deletions doc/docs/FAQ.md
@@ -421,10 +421,6 @@ Yes. An official [Python interface](Python_User_Interface.md) was released in [v

At least 8 pixels per wavelength in the lossless dielectric material with the highest index. Resolving the [skin depth of metals](https://en.wikipedia.org/wiki/Skin_effect), which is typically tens of nanometers at optical frequencies, will require a pixel size of comparable dimensions since [subpixel averaging does not apply to dispersive materials](Subpixel_Smoothing.md#what-about-dispersive-materials).

- ### What is a good rule of thumb for the PML thickness?
-
- Around half the wavelength, typically. (Note that the boundary condition, metallic or periodic, is essentially irrelevant to the operation of the PML.) PML allows inhomogeneous materials like waveguides as long as the materials are only varying in the boundary-*parallel* directions; wave media that are inhomogeneous in the boundary-normal directions (e.g., gratings or other periodic structures, oblique waveguides, etc.) as well as unusual waveguides with backward-wave modes cause PML to break down, in which case one alternative is a thicker non-PML [absorber](Python_User_Interface.md#absorber) as described in [Perfectly Matched Layers](Perfectly_Matched_Layer.md).

### What is Meep's frequency-domain solver and how does it work?

Meep contains a [frequency-domain solver](Python_User_Interface.md#frequency-domain-solver) that directly computes the steady-state fields produced in a geometry in response to a [continuous-wave (CW) source](https://en.wikipedia.org/wiki/Continuous_wave), using an [iterative linear solver](https://en.wikipedia.org/wiki/Iterative_method) instead of time-stepping. This is possible because the FDTD timestep can be used to formulate a frequency-domain problem via an iterative linear solver. The frequency-domain response can often be determined using many fewer timesteps while exploiting the FDTD code almost without modification. For details, see Section 5.3 ("Frequency-domain solver") of [Computer Physics Communications, Vol. 181, pp. 687-702, 2010](http://ab-initio.mit.edu/~oskooi/papers/Oskooi10.pdf).
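As a rough illustration of the two answers above, the sketch below picks the grid resolution from the 8-pixels-per-wavelength rule and then invokes the frequency-domain (CW) solver; the cell size, waveguide index, and solver tolerances are placeholder values, not recommendations:

```python
import meep as mp

wvl = 1.0    # vacuum wavelength (Meep units); placeholder
n_max = 3.5  # highest index among the lossless dielectrics; placeholder

# At least 8 pixels per wavelength *inside* the highest-index material.
resolution = int(8 * n_max / wvl)

sim = mp.Simulation(
    cell_size=mp.Vector3(6, 6),
    resolution=resolution,
    geometry=[mp.Block(size=mp.Vector3(mp.inf, 1), material=mp.Medium(index=n_max))],
    sources=[mp.Source(mp.ContinuousSource(frequency=1 / wvl),
                       component=mp.Ez,
                       center=mp.Vector3(-2, 0))],
    boundary_layers=[mp.PML(1.0)],
    force_complex_fields=True,  # the CW solver operates on complex fields
)

# Solve directly for the steady-state response instead of time-stepping.
sim.init_sim()
sim.solve_cw(1e-6, 10000, 10)  # tolerance, max iterations, BiCGSTAB-L parameter
```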
@@ -495,4 +491,4 @@ The second approach is based on a full nonlinear simulation of the Raman process

### Does Meep support adjoint-based optimization?

- Yes. Meep contains an [adjoint solver](Python_Tutorials/AdjointSolver.md) which can be used for sensitivity analysis and automated design optimization with respect to a grid of $\varepsilon$ values (also known as "density-based" topology optimization). (Of course, you can always use finite differences or similar methods to compute sensitivities for other parameters, as well as derivative-free optimization methods. However, such methods become increasingly impractical for ≳ 10 parameters.)
+ Yes. Meep contains an [adjoint solver](Python_Tutorials/Adjoint_Solver.md) which can be used for sensitivity analysis and automated design optimization with respect to a grid of $\varepsilon$ values (also known as "density-based" topology optimization). (Of course, you can always use finite differences or similar methods to compute sensitivities for other parameters, as well as derivative-free optimization methods. However, such methods become increasingly impractical for ≳ 10 parameters.)
4 changes: 2 additions & 2 deletions doc/docs/Parallel_Meep.md
@@ -57,7 +57,7 @@ Meep also supports [thread-level parallelism](https://en.wikipedia.org/wiki/Task

### Optimization Studies of Parallel Simulations

- When running Meep simulations as part of an optimization study (e.g., via the [adjoint solver](Python_Tutorials/AdjointSolver.md)), in order to keep all processes synchronized *every* process runs the same optimization algorithm on the same optimization variables. The overhead of duplicating the computational cost of the optimization algorithm and storage of the design variables across all processes is negligible compared to those of the Meep simulation.
+ When running Meep simulations as part of an optimization study (e.g., via the [adjoint solver](Python_Tutorials/Adjoint_Solver.md)), in order to keep all processes synchronized *every* process runs the same optimization algorithm on the same optimization variables. The overhead of duplicating the computational cost of the optimization algorithm and storage of the design variables across all processes is negligible compared to those of the Meep simulation.

For comparison, consider the scenario where the optimization runs on just a single master process. That would mean that during each iteration of the optimization after the Meep simulation has computed the objective function (and its gradient), only the master process uses this information to update the optimization parameters (i.e., the design region). The master process would then need to send the updated design region to the other processes so that they could all begin the next Meep simulation. As a result, additional bookkeeping is required to synchronize the processes.
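A schematic sketch of this pattern: every process evaluates the same (here empty) objective on identical design variables, and only console output is restricted to the master process via `mp.am_master()`. The objective body and variable count are placeholders:

```python
import meep as mp

def objective(design_params):
    # ... build and run the Meep simulation for these design parameters;
    # all processes participate and all receive the same result ...
    return 0.0  # placeholder value

x = [0.5] * 10      # design variables, identical on every process
f = objective(x)    # every process executes the same optimizer step

if mp.am_master():  # avoid duplicated output from the other processes
    print(f"objective = {f}")
```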

@@ -99,7 +99,7 @@ Technical Details

When you run Meep under MPI, the following is a brief description of what is happening behind the scenes. For the most part, you shouldn't need to know this stuff. Just use the same Python/Scheme script file exactly as you would for a uniprocessor simulation.

- First, every MPI process executes the Python/Scheme file in parallel. The processes communicate however, to only perform one simulation in sync with one another. In particular, the cell is divided into "chunks", one per process, to roughly equally divide the work and the memory. For additional details, see [Chunks and Symmetry](Chunks_and_Symmetry.md) as well as Section 2.2 ("Grid chunks and owned points") of [Computer Physics Communications, Vol. 181, pp. 687-702, 2010](http://ab-initio.mit.edu/~oskooi/papers/Oskooi10.pdf).
+ First, every MPI process executes the Python/Scheme file in parallel. The processes communicate however, to only perform one simulation in sync with one another. In particular, the cell is divided into "chunks," one per process, to roughly equally divide the work and the memory. For additional details, see [Chunks and Symmetry](Chunks_and_Symmetry.md) as well as Section 2.2 ("Grid chunks and owned points") of [Computer Physics Communications, Vol. 181, pp. 687-702, 2010](http://ab-initio.mit.edu/~oskooi/papers/Oskooi10.pdf).

When you time-step via Python's `meep.Simulation.run(until=...)` or Scheme's `run-until`, etc., the chunks are time-stepped in parallel, communicating the values of the pixels on their boundaries with one another. In general, any Meep function that performs some collective operation over the whole cell or a large portion thereof is parallelized, including: time-stepping, HDF5 I/O, accumulation of flux spectra, and field integration via `integrate_field_function` (Python) or `integrate-field-function` (Scheme), although the *results* are communicated to all processes.
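For example, a collective field integral evaluated after time-stepping; the integral is computed chunk-by-chunk in parallel and the result is returned on every process (the cell size, source, and run time below are placeholders):

```python
import meep as mp

sim = mp.Simulation(
    cell_size=mp.Vector3(8, 8),
    resolution=20,
    sources=[mp.Source(mp.GaussianSource(frequency=1.0, fwidth=0.2),
                       component=mp.Ez,
                       center=mp.Vector3())],
    boundary_layers=[mp.PML(1.0)],
)

sim.run(until=50)  # under MPI, the chunks are time-stepped in parallel

# Collective operation over the whole cell: integrate |Ez|^2.
energy = sim.integrate_field_function([mp.Ez], lambda r, ez: abs(ez) ** 2)
print("integral of |Ez|^2 =", energy)
```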

5 changes: 5 additions & 0 deletions doc/docs/Perfectly_Matched_Layer.md
@@ -64,3 +64,8 @@ For an $E_y$-polarized source and `is_integrated=True`, the wavefronts are plana
</center>

In the future, `is_integrated=True` will be set automatically for sources extending into the PML ([#1049](https://github.com/NanoComp/meep/issues/1049)).
+
+ What is a Good Rule of Thumb for the PML thickness?
+ ---------------------------------------------------
+
+ Around half the wavelength, typically. (Note that the boundary condition, metallic or periodic, is essentially irrelevant to the operation of the PML.) PML allows inhomogeneous materials like waveguides as long as the materials are only varying in the boundary-*parallel* directions; wave media that are inhomogeneous in the boundary-normal directions (e.g., gratings or other periodic structures, oblique waveguides, etc.) as well as unusual waveguides with backward-wave modes cause PML to break down, in which case one alternative is a thicker non-PML [absorber](Python_User_Interface.md#absorber).
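A minimal sketch of the two boundary-layer choices discussed above; the half-wavelength PML follows the rule of thumb, and the absorber thickness is simply an illustrative larger value:

```python
import meep as mp

wvl = 1.0  # wavelength of interest (Meep units); placeholder

# Rule of thumb: PML thickness of roughly half the wavelength.
pml_layers = [mp.PML(thickness=0.5 * wvl)]

# If the medium varies in the boundary-normal direction (e.g., a grating
# extending into the boundary), substitute a thicker non-PML absorber.
absorber_layers = [mp.Absorber(thickness=2.0 * wvl)]
```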
21 changes: 0 additions & 21 deletions doc/docs/Python_Tutorials/AdjointSolver.md

This file was deleted.

4 changes: 2 additions & 2 deletions doc/docs/Subpixel_Smoothing.md
@@ -147,7 +147,7 @@ It is possible that the subpixel averaging may still improve the constant factor
What Happens When Subpixel Smoothing is Disabled?
-------------------------------------------------

- When subpixel smoothing is disabled by either (1) setting `eps_averaging=False` in the [`Simulation`](Python_User_Interface.md#the-simulation-class) constructor or (2) using a [material function](Subpixel_Smoothing.md#enabling-averaging-for-material-function) (as is typical in the [adjoint solver](Python_Tutorials/AdjointSolver.md)), each electric field component $(E_x, E_y, E_z)$ in a given voxel is individually assigned a scalar permittivity (for isotropic materials) based on whatever the value of the permittivity is at that position in the [Yee grid](Yee_Lattice.md). This results in [staircasing artifacts](Subpixel_Smoothing.md) due to the discontinuous material interfaces as well as the staggered nature of the Yee grid points. Any change in the resolution which shifts the location of the Yee grid points relative to the material interfaces will result in unpredictable changes to any computed quantities. The coordinates the Yee grid points can be obtained using a [field function](Field_Functions.md#coordinates-of-the-yee-grid) which can be useful for debugging.
+ When subpixel smoothing is disabled by either (1) setting `eps_averaging=False` in the [`Simulation`](Python_User_Interface.md#the-simulation-class) constructor or (2) using a [material function](Subpixel_Smoothing.md#enabling-averaging-for-material-function) (as is typical in the [adjoint solver](Python_Tutorials/Adjoint_Solver.md)), each electric field component $(E_x, E_y, E_z)$ in a given voxel is individually assigned a scalar permittivity (for isotropic materials) based on whatever the value of the permittivity is at that position in the [Yee grid](Yee_Lattice.md). This results in [staircasing artifacts](Subpixel_Smoothing.md) due to the discontinuous material interfaces as well as the staggered nature of the Yee grid points. Any change in the resolution which shifts the location of the Yee grid points relative to the material interfaces will result in unpredictable changes to any computed quantities. The coordinates the Yee grid points can be obtained using a [field function](Field_Functions.md#coordinates-of-the-yee-grid) which can be useful for debugging.
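For reference, disabling subpixel smoothing via option (1) above is a single constructor argument; the rest of the setup below is a placeholder:

```python
import meep as mp

sim = mp.Simulation(
    cell_size=mp.Vector3(8, 4),
    resolution=20,
    geometry=[mp.Block(size=mp.Vector3(mp.inf, 1), material=mp.Medium(index=3.5))],
    boundary_layers=[mp.PML(1.0)],
    eps_averaging=False,  # no subpixel smoothing: expect staircasing artifacts
)
```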

Subpixel Smoothing vs. Bilinear Interpolation
---------------------------------------------
@@ -190,7 +190,7 @@ There are three important items to note. (1) The pixel grid and prism representa

Since the interpolated pixel grid has already been smoothed to a continuous $\varepsilon(x,y)$ function, subpixel smoothing (which is not supported for `epsilon_input_file`) is not really necessary once the Yee grid resolution exceeds the input image resolution. This can be seen in the above plot: for Meep Yee grid resolutions of 80 (equal to the pixel grid resolution of the HDF5 file) and above, the changes in the results are much smaller than those at lower resolutions. Also, higher-order interpolation schemes are not necessary because the Yee discretization is already essentially equivalent to linear interpolation.

- As a practical matter, increasing the Meep resolution beyond the resolution of a non-interpolated pixel grid is not physically meaningful because this is trying to resolve the individual pixels of an imported image. In the case of a pixel grid imported via `epsilon_input_file`, this is not an issue because the bilinear interpolation is performed automatically by default. However, no built-in interpolation is provided for a material function; it must be provided by the user (i.e., convolving the discontinuous material function with a smoothing kernel as demonstrated [below](#interpolation-techniques-for-material-function)). As a corollary, when designing structures using a pixel grid (e.g., as in the [adjoint solver](Python_Tutorials/AdjointSolver.md)), the pixel density of the degrees of freedom should typically be at least as big as the Meep resolution if not greater.
+ As a practical matter, increasing the Meep resolution beyond the resolution of a non-interpolated pixel grid is not physically meaningful because this is trying to resolve the individual pixels of an imported image. In the case of a pixel grid imported via `epsilon_input_file`, this is not an issue because the bilinear interpolation is performed automatically by default. However, no built-in interpolation is provided for a material function; it must be provided by the user (i.e., convolving the discontinuous material function with a smoothing kernel as demonstrated [below](#interpolation-techniques-for-material-function)). As a corollary, when designing structures using a pixel grid (e.g., as in the [adjoint solver](Python_Tutorials/Adjoint_Solver.md)), the pixel density of the degrees of freedom should typically be at least as big as the Meep resolution if not greater.

In terms of runtime performance, for structures based on a frequency-independent permittivity, anisotropic subpixel smoothing will generally consume more memory (due to the additional off-diagonal elements of the permittivity tensor) and have a slower time-stepping rate (again due to the anisotropic permittivity tensor which couples different field components during the field updates) than a simple scalar interpolation technique. The gains in accuracy from the anisotropic smoothing though should far outweigh this small performance penalty.
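A hedged sketch of the user-side smoothing mentioned above for a material function: a tanh ramp stands in for the smoothing-kernel convolution, and the function is passed via the `epsilon_func` option of `Simulation`. The waveguide index, width, and smoothing length are placeholder values:

```python
import math
import meep as mp

n_hi = 3.5        # waveguide index (placeholder)
w = 1.0           # waveguide width (placeholder)
smoothing = 0.05  # smoothing length, comparable to one pixel (placeholder)

def smoothed_eps(p):
    # Replace the discontinuous step at the sidewalls with a smooth tanh ramp,
    # standing in for a convolution of the step profile with a smoothing kernel.
    t = 0.5 * (1 - math.tanh((abs(p.y) - 0.5 * w) / smoothing))
    return 1 + (n_hi**2 - 1) * t

sim = mp.Simulation(
    cell_size=mp.Vector3(8, 4),
    resolution=20,
    epsilon_func=smoothed_eps,
    boundary_layers=[mp.PML(1.0)],
)
```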

